The Business of AI Avatars: Key Legal Risks and Best Practices
The use of artificial intelligence (AI) to generate digital representations of real or fictional people — “AI avatars” — offers new ways to build brands, drive engagement, and grow revenue by accelerating content creation and lowering production costs. It also raises the following important legal considerations.
Contracts and Consents: When using a real individual’s likeness to create a digital replica, consult applicable state laws, obtain all required written consents, and ensure the agreed-upon scope aligns with the actual and intended use.
Vendor and Social Media Platform Due Diligence: When selecting an AI vendor, conduct due diligence on training data and model provenance. For example, was the model trained on copyrighted works or real individuals’ likenesses? Review Terms of Use for provisions on indemnity, output ownership, and commercial use limitations.
Vet Output: Analyze output for potential copyright, trademark, or right of publicity issues.
Implement Safeguards and Acceptable Use Policies: Use prompt controls and internal AI use policies to mitigate risks associated with AI avatars’ output, and place safeguards on AI avatars’ personality profiles and interactions with users.
Make All Required Disclosures: Assess whether truth-in-advertising rules, state laws, or social media platform terms require disclosure based on the specific use of AI avatars, particularly in advertising or direct consumer interactions. Even when not required, disclosures can build consumer trust. Additionally, avoid testimonials by AI influencers that imply genuine personal experience.
Protect AI Avatars to the Extent Possible: Know the limits of intellectual property (IP) protection for AI-generated output, and prioritize protection of human-authored elements, names, and distinctive visual trade dress where available. Additionally, ensure contracts and platform terms do not restrict commercial use of or impose ownership claims in the outputs.
Background
AI avatars are AI-generated personas that mimic human appearance, voice, or behavior. Whether entirely synthetic or modeled on real people, AI avatars are increasingly lifelike and are being used across more industries. In fashion and retail, businesses are testing AI-generated models in advertisements and catalogs. Similarly, AI-generated influencers are attracting large followings on social media, where they promote products and services.
Contract-Related Implications
Several states, including entertainment hubs New York and California, require an express written agreement for the use of digital AI avatars based on the likeness of a real individual.
It is important to consider whether state-specific laws apply to your proposed use. For example:
In New York, the New York State Fashion Workers Act requires companies to obtain clear, conspicuous prior written consent for the creation or use of a model’s digital AI avatar. Further, the law prohibits “model management companies,” as defined by the law, from requiring models to grant the companies power of attorney. Thus, companies are essentially unable to license the use of a model’s likeness without the model’s express authorization.
In California, a recent Labor Code amendment renders contract terms unenforceable if (1) the contract permits the creation or use of a digital replica of the person’s voice or likeness to perform work they would otherwise do in person, or it allows use of their voice or likeness to train generative AI; (2) the contract does not clearly define and detail all proposed uses of the digital replica or the AI system; and (3) the individual lacked qualified representation.
Even with express permission to use a person’s likeness or voice, companies must stay strictly within the scope of rights granted and any use restrictions agreed in writing. In Lehrman v. Lovo, Inc., No. 1:24-cv-03770 (S.D.N.Y. 2024), professional voice actors allege they provided recordings on Fiverr based on assurances of “internal,” “academic,” or “test script” use only; they claim the files were instead used to train Lovo’s text‑to‑speech model and to market “voice clones” sold to customers under pseudonyms, exceeding any license and without additional consent or compensation. The court allowed key claims to proceed, including breach of contract, New York right of publicity, and New York consumer protection claims, emphasizing that actual use must align with the rights purchased and any negotiated limits (e.g., non‑broadcast or non‑commercial uses).
Intellectual Property
AI avatars that visually resemble protected works or personas can trigger multiple IP claims. On the copyright side, risks arise where an avatar’s visual design or outputs reproduce protectable expression from existing works, including distinctive character designs, costumes, or stylized features, potentially supporting claims for reproduction or derivative works. Additional pressure points include allegations that models “memorize” and regurgitate training imagery, as well as uncertainty over copyrightability where human authorship in the avatar’s visual design or curation is minimal. The legal treatment of training data remains unsettled, but output that is substantially similar to a protected visual character or artwork may still draw infringement claims. In audiovisual contexts, avatars may also implicate musical works and sound recordings. Outputs that incorporate or closely replicate protected compositions or the fixed sounds from a recording can prompt separate claims tied to the composition and the recording.
Visually similar avatars can also give rise to trademark, trade dress, and false endorsement claims when the look and feel evokes a brand mascot, product get-up, or a celebrity’s recognizable appearance, potentially causing confusion about source, sponsorship, or affiliation. Highly distinctive visual features may also prompt dilution allegations.
Right of Publicity
Similarly, AI tools can generate avatars that resemble the likenesses of real individuals used to train the models. Businesses should not only secure explicit permissions for any intended use of a real person’s likeness but should also be mindful that inadvertent resemblance to a real individual’s likeness, image, voice, or other identifiable indicia can trigger right of publicity claims, especially in commercial contexts. There may also be public or cultural backlash for the use of an AI avatar that inadvertently resembles a contentious figure. For example, online shopping platform Shein recently made headlines for displaying an AI-generated model that closely resembled a well-known criminal defendant, Luigi Mangione. Shein ultimately removed the images, but not before drawing public backlash and potential legal exposure.
While the availability of monetary damages and injunctive relief varies by state and the nature of the use, courts routinely award both forms of relief. As a result, a company found to have violated a plaintiff’s right of publicity may face substantial monetary liability and be required to undertake time‑consuming and costly corrective measures. Some states have expanded liability. Tennessee’s ELVIS Act, for example, creates secondary liability for knowingly making an individual’s voice or likeness publicly available without authorization or distributing technology primarily designed to produce a particular identifiable person’s image, voice, or likeness without consent.
Deepfake Laws
The use of AI avatars can also implicate laws aimed at addressing “deepfakes,” AI-generated synthetic media that may depict either real or fictional individuals. Due to concerns relating to misinformation and privacy, several states have enacted deepfake legislation, most of which governs the creation and dissemination of sexually explicit deepfakes or deepfakes in the context of public elections. Parties using AI avatars should ensure that their use does not implicate or run afoul of these laws.
At the federal level, Congress is considering the NO FAKES Act, which would create a new federal right protecting against unauthorized “digital replicas” in images, sound recordings, and audiovisual works. Individuals could license, but not assign, these rights. The bill includes a private right of action and a notice‑and‑takedown safe harbor limiting secondary liability. If enacted, the NO FAKES Act would establish a uniform federal regime and preempt many state laws while preserving certain claims related to sexually explicit or election‑related deepfakes.
Advertising and Disclaimers
Using AI avatars can trigger disclosure obligations under federal and state false advertising and unfair competition laws, as well as emerging AI-specific laws. Disclosures may be required where failing to reveal that the avatar is AI-generated, that interactions or endorsements are simulated or automated, or that a connection exists between the avatar and the brand being promoted would be likely to mislead a reasonable consumer.
Notably, businesses cannot circumvent the Federal Trade Commission’s (FTC) disclosure requirements for endorsements by using AI influencers, as opposed to real individuals, to promote the brand or its products. The FTC’s Endorsement Guides apply regardless of whether the “endorser” is a human or an AI persona. If an AI influencer communicates an endorsement, it is subject to the same rules as human endorsers. The endorsement must be truthful, non‑misleading, and properly substantiated, and there must be a “clear and conspicuous” disclosure of any “material connection” between the influencer and the brand.
Further, the Endorsement Guides require that endorsements reflect “the honest opinions, findings, beliefs, or experience of the endorser.” Because a synthetic persona cannot have actual experience or hold genuine opinions, this significantly narrows what an AI influencer may lawfully say in an endorsement compared to a human. Brands should script AI‑delivered endorsements to avoid implying personal use, independent judgment, or first‑person views. For now, it is clear that certain first‑person claims by a fictitious AI persona (for example, “I tried this product; it’s great!”) are out of bounds. As AI influencers proliferate, further guidance and enforcement decisions may clarify how the FTC will treat more nuanced cases.
Some states have enacted laws that mandate disclosure regarding the use of AI in specific contexts. For example, Maine’s Act to Ensure Transparency in Consumer Transactions Involving Artificial Intelligence requires businesses transacting with Maine consumers to clarify when the consumer is communicating with a machine rather than a human. Specifically, the law makes it unlawful for a business to use an AI chatbot or other computer technology in a commercial transaction or trade practice that could mislead a reasonable consumer to believe they are interacting with a human being. When that possibility exists, the business must disclose “in a clear and conspicuous manner that the consumer is not engaging with a human being.” Because that standard turns on what a “reasonable consumer” would perceive, companies must carefully assess whether a particular interaction triggers the disclosure obligation. Violations carry civil penalties of up to $10,000 per violation, which could expose businesses to hefty fines given the potentially high volume of consumer interactions. Other states have adopted similar rules, including California’s BOT Act, which places disclosure requirements on the use of bots (AI or otherwise) under certain conditions.
Separately, some social media platforms may require labels for realistic synthetic media that incorporate AI avatars.
Responsibility for Statements of AI Avatars
Businesses should also assess the risks of deploying AI avatars without sufficient controls on the avatar’s “personality” and behavioral profile. Recent cases demonstrate that companies may be held responsible for statements or behaviors by an avatar the business creates or controls. One recent example involves an AI chatbot platform that enables users to engage with characters based on user-created and pre-existing personas. Family members of minors who died by suicide after interacting with the chatbots have sued the platform, asserting wrongful death and survivorship, negligence, and deceptive and unfair trade practices claims, among others. The complaints generally allege that the company’s AI chatbots engaged in prolonged emotionally or sexually abusive interactions. In early motion practice, a district court rejected the argument that the platform’s outputs are categorically outside “speech” entitled to First Amendment protection, signaling scrutiny over the content and context of AI-generated communications rather than excluding them from speech protections.
Companies may also encounter reputational harm if an AI avatar makes harmful, offensive, or misleading statements to consumers. For example, AI-enabled toys have prompted public warnings to parents based on concerns that the toys may expose children to inappropriate or unsafe information, adversely affect well-being, or create privacy risks — all to the reputational detriment of the manufacturers.
Given these legal and reputational risks, companies deploying consumer-facing AI avatars should implement clear and consistent guardrails on the avatars’ profiles and outputs, as well as factor in potential use by minors. While specific safeguards will vary by use case, important considerations include the avatar’s role, personality profile, audience, and level of autonomy.
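For teams looking for a concrete starting point, the short Python sketch below illustrates one way such guardrails might be expressed in practice. It is a minimal, hypothetical example only: the class, field names, blocked-topic list, and disclosure wording are assumptions chosen for illustration, not a recommended or legally sufficient implementation.

```python
# Hypothetical illustration of avatar guardrails; all names and rules below
# are assumptions for the example, not a reference implementation.
from dataclasses import dataclass, field


@dataclass
class AvatarPolicy:
    role: str                 # the avatar's defined role (e.g., "product guide")
    audience: str             # intended audience (e.g., "general" or "adults only")
    autonomy: str             # "scripted", "constrained", or "open-ended"
    disclose_ai: bool = True  # always tell consumers the persona is synthetic
    blocked_topics: list[str] = field(default_factory=lambda: [
        "medical advice",
        "self-harm",
        "personal testimonial",
    ])


def review_output(policy: AvatarPolicy, draft: str) -> str:
    """Apply basic guardrails before an avatar's draft reply reaches a user."""
    lowered = draft.lower()
    for topic in policy.blocked_topics:
        if topic in lowered:
            # Route flagged drafts to human review instead of publishing them.
            return "[withheld pending human review]"
    if policy.disclose_ai:
        # Append a plain-language disclosure that the speaker is not human.
        draft += "\n(This response was generated by an AI assistant, not a person.)"
    return draft


# Example use
policy = AvatarPolicy(role="product guide", audience="general", autonomy="constrained")
print(review_output(policy, "Here are care instructions for this jacket."))
```

In a real deployment, the filtering and disclosure logic would be far more sophisticated (and tailored to applicable law), but the structure above reflects the considerations noted here: a defined role, a defined audience, a bounded level of autonomy, topic restrictions, and consistent disclosure.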
Protecting AI Avatars
Though wading into these relatively untested waters presents legal and public relations risks, as articulated above, AI avatars can also be leveraged to generate revenue. For example, digital avatar influencer “Lil Miquela” has 2.3 million followers on Instagram, and AI-powered singer Xania Monet reportedly garnered 17 million streams in two months, leading to a $3 million record deal. However, the law does not yet clearly address protection for works created by AI-generated avatars or for the avatars themselves.
Courts and the US Copyright Office have generally concluded that works generated by artificial intelligence without meaningful human contribution are ineligible for copyright protection due to the absence of human authorship. In the music context, for example, human-authored lyrics typically qualify as protectable literary works. By contrast, an AI-generated “likeness” or persona associated with those lyrics, such as the avatar Xania Monet, may fall outside the scope of copyright protection if it lacks sufficient human authorship.
In addition, widely available, nonexclusive open-source AI models may produce similar or even identical outputs for different users, making it more likely that multiple parties will deploy closely similar AI avatars. Such convergence may dilute distinctiveness, lead to consumer confusion, and complicate brand positioning. A marketplace saturated with lookalike AI avatars may also increase the risk of inter-brand disputes and create reputational exposure, particularly where audiences mistakenly attribute one party’s content or conduct to another.
Additionally, businesses should vet AI providers’ Terms of Use. Some providers’ terms restrict commercial use of the output or claim ownership in it. Businesses should therefore confirm that commercial use is permitted and that, to the extent allowed by law, output ownership rests with the user rather than the service provider. An IP indemnity provision in the Terms of Use may also help protect businesses from litigation, given the unsettled legal questions, referenced above, regarding the use of third-party material to train AI tools.
Key Takeaway
While AI avatars can meaningfully increase visibility and scale for businesses, successful use requires considering the numerous legal risks and unknowns, and implementing best practices, especially amidst an evolving legal landscape.
The issues above are only a subset of the legal questions implicated by the commercial use of AI avatars. For additional guidance, please reach out to your AFS contact or one of the authors.