
Protecting Your Image in the Age of AI-Generated “Deepfakes”

Client Alert

The rapid evolution of artificial intelligence (AI) has transformed how we create and consume digital content. While AI offers innovative solutions in business, entertainment, and communication, it also poses significant risks. Among the most troubling developments is the proliferation of AI-generated fraudulent content, often called “deepfakes.”

A “deepfake” is created when malicious actors manipulate existing, legitimate images, videos, and audio recordings to produce new fraudulent content. That content is then used to deceive and defraud audiences and to impersonate real individuals and brands. For example, our firm recently represented a business professional whose original video content was scraped from the internet, edited using AI, and re-uploaded to a platform where they did not have an account. The altered videos were then used to promote a fraudulent product falsely attributed to our client, leading to reputational harm and consumer confusion.

Currently, AI can be used to create fraudulent content such as:

1. “Deepfake” Videos That Misrepresent Endorsements or Beliefs

AI-generated “deepfake” videos can convincingly manipulate legitimate video footage to make it appear as though a person is saying or doing something they never said or did. These fakes are now being used in:

  • fake endorsements where a person appears to promote a product or service they’re not associated with
  • manipulated interviews or speeches that falsely portray an individual as holding controversial or offensive opinions
  • fraudulent ads in which an individual is inserted into a video to lend credibility to a product or scam

The result is not only reputational harm to the original party but also the potential for legal liability if consumers act on these “deepfakes.”

2. AI “Voice Clones” Used in Fraud and Impersonation

AI voice synthesis tools can now clone a person’s speech patterns, tone, and inflection with remarkable accuracy. These voice clones are being used to:

  • place scam calls in which the voice of a trusted colleague, family member, or executive is replicated
  • create fake voicemails or recordings such as fake customer service lines, political robocalls, or misleading audio snippets shared on social media
  • bypass security checks, especially those using voice authentication systems

Because voice is such a personal and persuasive medium, these scams can be particularly effective and often difficult to detect.

3. Repackaged or Stolen Content Misused on Digital Platforms

In many cases, bad actors scrape legitimate, existing content such as videos, podcasts, social media posts, or livestreams from the internet and re-upload them—often out of context—making it seem as though the speaker supports a particular viewpoint or product. The content can also be re-uploaded with an AI narration or branding, suggesting affiliation with companies or causes the original party does not endorse. This not only infringes intellectual property rights but also misleads audiences and can divert income from the rightful content creator.

How to Detect AI “Deepfakes”

Despite rapid improvements in AI, fraudulent AI-generated video, audio, and other content may still display subtle flaws such as:

  • Awkward or unnatural facial movements
  • Lip-syncing issues (the words spoken do not match the way the person’s mouth is moving)
  • Flat, unnatural, or robotic speech patterns
  • Lighting or background inconsistencies
  • A lack of verification on official social media or websites from the person supposedly involved in the content

When in doubt, search for the original source and consult reputable news outlets and official pages.

What to Do Next

If you discover that your image, voice, or content has been used without authorization, you may have both legal and practical remedies. First, report the content to the hosting platform. If your original content has been copied or altered, copyright law may provide grounds for removal. In addition, make sure to preserve the evidence—take screenshots, save links, and document any public confusion, customer complaints, or reputational fallout. Depending on your situation, you may have claims under defamation law, the right of publicity, consumer protection statutes, and/or tort law.

For more information, please contact Susan A. Jacobsen at 216.298.1452 x848 or sajacobsen@bmdllc.com.


Don't Get Caught Dazed and Confused: Another Florida Court Weighs in on Employer Obligations to Accommodate Medical Marijuana Use

A Florida trial court ruled in Giambrone v. Hillsborough County that employers may need to accommodate off-duty medical marijuana use under the Florida Civil Rights Act (FCRA). This contrasts with prior rulings and raises new compliance challenges for employers. With the case on appeal, now is the time to review workplace drug policies.

Corporate Transparency Act to Be Re-evaluated

Recent federal rulings have impacted the enforceability of the Corporate Transparency Act (CTA), which took effect on January 1, 2024. While reporting requirements were briefly reinstated, FinCEN has now paused enforcement and is reevaluating the CTA. Businesses are no longer required to submit reports until further guidance is issued. For updates and legal counsel, contact BMD Member Blake Gerney.

Ohio Recovery Housing Operators Beware: House Bill 58 Seeks to Make Major Changes

Ohio House Bill 58 proposes significant changes to recovery housing oversight, granting county Alcohol, Drug Addiction and Mental Health (ADAMH) Boards authority to inspect and investigate recovery residences. The bill also introduces a Certificate of Need (CON) program, requiring state approval for major facility changes. The Ohio Department of Mental Health and Addiction Services (OMHAS) will assess applications based on cost, quality, accessibility, and financial feasibility. It also establishes a recovery housing residence fund to support inspections. For more information, contact BMD attorneys Daphne Kackloudis or Jordan Burdick.

January 2025 Notice of Proposed Rulemaking Brings Notable Changes to HIPAA Security Rule

In January 2025, the U.S. Department of Health and Human Services proposed amendments to the HIPAA Security Rule, aiming to enhance cybersecurity for covered entities (CEs) and business associates (BAs). Key changes include mandatory compliance audits, workforce training, vulnerability scans, and risk assessments. Comments on the proposed rule are due by March 7, 2025.

Corporate Transparency Act Effective Again

The federal judiciary has issued multiple rulings on the enforceability of the Corporate Transparency Act (CTA), which took effect on January 1, 2024. Previously, enforcement was halted nationwide due to litigation in Smith v. U.S. Department of the Treasury. However, on February 18, 2025, the court lifted the stay, reinstating the CTA’s reporting requirements. Non-exempt entities now have until March 21, 2025, to comply. Businesses should act promptly to avoid civil penalties of $591 per day and potential criminal liability.