Look What You Made Me Do: Taylor Swift’s Trademark Strategy Takes Aim at AI

On April 24, Taylor Swift’s company, TAS Rights Management, filed three new trademark applications with the US Patent and Trademark Office (USPTO) in what appears to be an effort to safeguard her identity against the rising threat of artificial intelligence (AI)-generated content. Two of the applications fall into the relatively uncommon category of “sound marks.”

One application covers the phrase “Hey, it’s Taylor Swift” and the other “Hey, it’s Taylor,” each tied to a recording of Swift’s voice. The third application covers a visual depiction of Swift, specifically an image of her in one of her signature performance outfits from the recent Eras Tour. Swift’s move follows actor Matthew McConaughey’s 2025 efforts, in which he obtained eight trademark registrations with the USPTO, including a sound mark on his well-known catchphrase “Alright, alright, alright!”

Advances in AI have dramatically lowered the barriers to creating convincing replicas of a performer’s voice or appearance, posing serious challenges for the entertainment industry. With widely available AI tools, bad actors can generate synthetic audio or video that closely mimics a celebrity and distribute it broadly, potentially inflicting meaningful reputational and economic damage. These filings may signal a broader trend: public figures turning to trademark law as an additional line of defense against AI-driven identity misuse.

The Strategy: Trademark Law as an Additional Layer of Protection

Until now, artists have principally depended on two legal frameworks to defend against the unauthorized use of their voice and likeness: copyright, which covers original creative works, and state right of publicity statutes, which guard against the unauthorized commercial exploitation of an individual’s identifiable personal characteristics. Both offer meaningful protections, but each framework has practical limitations in the AI context that have prompted artists to explore additional avenues.

Copyright protects specific works fixed in a tangible medium — recordings, compositions, and performances. But generative AI can produce an entirely new audio file that merely sounds like a public figure without copying any particular recording. If nothing is actually reproduced, or at least no provable substantial similarity exists at the output level, the copyright case becomes much harder. Meanwhile, right of publicity claims, while well-suited to unauthorized uses of a celebrity’s identity, are governed by a patchwork of state laws with varying standards and remedies, making nationwide enforcement cumbersome. Some states expressly protect an individual’s voice from unauthorized commercial exploitation through statute or case law, while others have only recently amended their laws to address AI-generated uses of voice, and still others lack clear authority confirming that voice falls within the scope of protected identity attributes at all. This jurisdictional uncertainty complicates reliance on right of publicity as a standalone enforcement tool against AI-generated voice content.

Trademark law may help fill these gaps, as it offers certain structural advantages over both copyright and right of publicity frameworks. First, a federal trademark registration creates a nationwide presumption of ownership and validity, giving the registrant a uniform enforcement tool that does not depend on the strength or existence of any particular state’s publicity rights law. Second, unlike copyright, trademark law is not concerned with whether the defendant copied a specific protected work. The inquiry focuses instead on whether the defendant’s use is likely to cause consumer confusion as to source, sponsorship, or affiliation. This distinction is particularly significant in the AI context, where a generated voice clip may closely evoke a celebrity without reproducing any actual recording. Relatedly, trademark law’s “likelihood of confusion” standard can reach uses that are not identical to the registered mark but are still recognizable as evoking the mark holder’s brand.

That said, trademark protection introduces its own hurdles: to establish rights, a party must demonstrate that the mark is used commercially as a source identifier tied to goods or services. For voice marks, this requirement can be demanding, as celebrities will need to show that their vocal phrases function like traditional brand indicators — not merely as evidence of identity, but as signals of origin in the marketplace. Further, while the likelihood of confusion standard creates certain advantages relative to copyright law, it is in some ways more demanding than the standard for right of publicity claims, which require only that a defendant commercially exploited the plaintiff’s identity without authorization and impose no confusion element. Many uses of AI-generated celebrity voices or likenesses — particularly those that are non-commercial, parodic, satirical, or otherwise clearly expressive — may be unlikely to create genuine consumer confusion about whether the celebrity produced or endorsed the content.

Notwithstanding these limitations, the trademark strategy remains a meaningful addition to the enforcement toolkit — particularly where copyright and right of publicity leave gaps that are unlikely to be closed absent federal legislation. By pursuing federal trademark registrations, public figures like Swift can position themselves to acquire a nationwide enforcement mechanism that does not depend on proving copying or navigating a patchwork of state laws, even if the source identification and likelihood of confusion requirements narrow the universe of actionable uses. 

The same logic extends to visual identity: AI image generators can produce recognizable likenesses without copying any specific photograph, creating copyright and publicity rights gaps parallel to those that exist for voice. By seeking to register a distinctive visual combination — outfit, instrument, and pose — alongside her sound marks, Swift’s team is pursuing a comprehensive, multi-modal strategy to anchor identity protection in federal trademark law.

Key Takeaways for Clients

While the so-called “trademark yourself” strategy has attracted significant attention, it has not yet been put to a definitive judicial test in the context of generative AI. In principle, however, a celebrity armed with these registrations could pursue takedown demands against AI platforms, much as entertainment studios already do when enforcing their copyrights. Although it will take litigated disputes before we know exactly how far trademark law can be stretched to address AI-generated identity misuse, a layered IP strategy — combining copyright, right of publicity, and trademark — may offer the most practical defense in the absence of comprehensive federal AI legislation.

Parties in entertainment, media, and talent management should consider auditing distinctive identity elements — signature catchphrases, vocal greetings, and recognizable visual presentations — to assess what might plausibly function as a source identifier and qualify for trademark protection. Because the use-in-commerce requirement demands evidence that these elements function as brand indicators, public figures and their representatives should consider how they can consistently deploy them in connection with goods or services, building an evidentiary record to support future applications. 

Simultaneously, companies developing or deploying generative AI tools should also take note: as more public figures pursue this strategy, the universe of potential trademark claims against AI platforms is likely to expand, and proactive content moderation policies may become increasingly important as a risk management measure.

We will continue to monitor developments in this area and will provide further updates as the legal landscape evolves.
