Tennessee’s ELVIS Act Leaves AI Developers All Shook Up Over Voice and Likeness Liability
A new Tennessee statute is testing how far identity can be protected in law. The Ensuring Likeness, Voice and Image Security (ELVIS) Act brings AI-generated voices and likenesses under explicit legal protection, making Tennessee the first state to treat unauthorized digital imitation of a person's voice as a rights violation.
The Legal Architecture of Digital Identity
For counsel, this statute is less about celebrity image and more about legal infrastructure. The ELVIS Act, effective July 1, 2024, amends Tennessee's long-standing Personal Rights Protection Act to recognize voice as a legally protected attribute alongside name and likeness. It also creates liability for companies that build or distribute tools used to generate unauthorized synthetic likenesses. Legal analysts have called it a first-of-its-kind statute with direct consequences for AI developers, media organizations, and the lawyers who advise them.
The law’s acronym is deliberate. ELVIS stands for Ensuring Likeness, Voice and Image Security, a reminder that Tennessee’s cultural heritage intersects with a growing field of AI regulation. Yet its implications reach far beyond entertainment law. It raises questions about jurisdiction, due process, and professional responsibility that every firm handling AI-assisted content must confront.
Counsel’s New Compliance Burden
The ELVIS Act makes explicit what was previously implied: that a person’s voice, even in synthetic form, can trigger legal rights. This means contracts, licensing agreements, and discovery protocols all need to evolve. In-house counsel must confirm that no AI system used by their organizations creates or distributes content that imitates identifiable individuals without consent. Law firms must update client intake questionnaires and compliance reviews to identify whether clients use synthetic media and under what terms.
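To make that obligation concrete, here is a minimal sketch of what such a consent gate might look like in practice. Every name in it (ConsentRecord, CONSENT_LEDGER, generate_voice) is a hypothetical illustration, not a real product API, and the check itself is only one assumption about how a compliance team might operationalize the statute's consent requirement.

```python
# A minimal sketch of a consent gate for synthetic voice generation.
# All names here are hypothetical illustrations, not any real system.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ConsentRecord:
    subject: str             # identifiable individual whose voice is licensed
    scope: set[str]          # permitted uses, e.g. {"marketing", "dubbing"}
    expires: Optional[date]  # None means the license has no expiration

# Hypothetical ledger populated from signed licensing agreements.
CONSENT_LEDGER: dict[str, ConsentRecord] = {}

def consent_covers(subject: str, use: str, on: date) -> bool:
    """Return True only if a current consent record covers this use."""
    record = CONSENT_LEDGER.get(subject)
    if record is None or use not in record.scope:
        return False
    return record.expires is None or on <= record.expires

def generate_voice(subject: str, use: str, text: str) -> str:
    # Refuse to synthesize an identifiable voice without documented
    # consent, the core obligation the ELVIS Act makes explicit.
    if not consent_covers(subject, use, date.today()):
        raise PermissionError(f"No valid consent for {subject} / {use}")
    return f"<audio imitating {subject}: {text!r}>"  # stand-in for a real TTS call
```

The design choice worth noting is that the gate fails closed: absent a documented license, synthesis is refused outright rather than logged after the fact.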
For litigators, the statute introduces new potential claims under state law. Unauthorized AI-generated likenesses can give rise to civil actions for commercial exploitation, emotional distress, or consumer deception. Criminal liability is also possible. Observers note that willful violations may be prosecuted as a Class A misdemeanor when a cloned likeness is used for commercial gain. These provisions create a new class of hybrid tort and technology disputes that will require coordinated expertise across IP, privacy, and AI governance practices.
How the Law Actually Works
The law defines “voice” as a sound that is “readily identifiable and attributable to a particular individual,” whether real or simulated. It grants individuals and their estates the right to control how that voice or likeness is used in commerce. The statute closes a gap between traditional right-of-publicity laws and modern AI capabilities, which can replicate tone, pitch, and phrasing from minimal data.
The Act’s reach is broad. It holds accountable not only the end user who publishes an imitation, but also the developer or distributor of a tool whose “primary purpose” is to produce such content. This clause effectively introduces a duty of care for AI vendors, placing them within the same risk category as content publishers when their products enable unauthorized likeness generation.
The Act also recognizes exceptions for news reporting, public affairs, political commentary, and parody. These carve-outs mirror long-standing First Amendment defenses while narrowing the commercial loopholes that previously allowed cloned likenesses to circulate without consent.
Jurisdictional Questions and Practical Challenges
One critical unresolved issue involves interstate enforcement. When a Tennessee resident’s voice is cloned by an entity operating in California or New York, questions of personal jurisdiction and choice of law become central to any litigation strategy. Companies with national footprints must assess whether Tennessee’s long-arm statute reaches their conduct, and whether other states will recognize Tennessee judgments for violations occurring across state lines.
The burden of proof presents another layer of complexity. Plaintiffs must demonstrate that a synthetic voice or likeness is identifiably theirs, a showing that may require expert testimony in audio forensics, machine learning provenance, or consumer perception studies. Defendants, meanwhile, must prove consent was obtained or that their use falls within a statutory exception. Discovery in these cases will likely extend to training data documentation, model architecture, and licensing agreements that predate the Act’s effective date.
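As a loose illustration of what an identifiability showing might involve, forensic experts sometimes compare speaker embeddings, fixed-length vectors that summarize vocal characteristics. The toy sketch below uses made-up vectors and a plain cosine similarity; real analyses rely on trained models and validated thresholds, and whether any score makes a voice "readily identifiable" is ultimately a legal question, not a technical one.

```python
# A toy illustration, not a forensic method: comparing hypothetical
# speaker embeddings to argue a synthetic clip imitates a plaintiff.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1]; values near 1 suggest the same speaker."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

plaintiff_voice = np.array([0.92, 0.10, 0.35, 0.88])  # made-up embedding
synthetic_clip  = np.array([0.90, 0.12, 0.33, 0.85])  # made-up embedding

score = cosine_similarity(plaintiff_voice, synthetic_clip)
print(f"similarity: {score:.3f}")  # where to draw the line is for courts, not code
```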
Remedies under the statute include injunctive relief, actual damages, and profits attributable to the violation. In cases of willful infringement, courts may award enhanced damages. Estate rights persist after death, lasting at least ten years under Tennessee's publicity framework and potentially longer where commercial use continues, extending the Act's protections across generations. These provisions create significant financial exposure for companies that fail to implement adequate consent and attribution protocols.
Risk Exposure and Compliance Demands
For the legal profession, the ELVIS Act signals a coming wave of statutory updates that will redefine how firms advise on AI. Privacy lawyers will need to treat voice and likeness data as personally identifiable information. IP lawyers will face disputes over whether synthetic likenesses constitute derivative works or unauthorized reproductions. Media lawyers will be asked to reconcile free expression rights with digital identity ownership.
The statute also creates new compliance burdens for technology clients. AI developers must document training datasets and certify that no identifiable voice or likeness was used without consent. Companies using generative tools for marketing, customer service, or creative work must ensure they are not inadvertently producing likeness-based content that could expose them to claims under Tennessee law. Tennessee's position as a music and entertainment hub made it an early mover, but the precedent will influence policy in other states.
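A hedged sketch of the kind of machine-readable manifest that could back such a certification appears below. The field names and JSON Lines layout are assumptions chosen for illustration; the statute prescribes no format.

```python
# A sketch of auditing a training-data manifest (one JSON object per
# sample) to support a consent certification. Schema is hypothetical.
import json
from pathlib import Path

REQUIRED_FIELDS = {"sample_id", "source", "contains_identifiable_voice", "consent_reference"}

def audit_manifest(path: Path) -> list[str]:
    """Return a list of problems; an empty list supports certification."""
    problems = []
    for line_no, line in enumerate(path.read_text().splitlines(), start=1):
        if not line.strip():
            continue
        entry = json.loads(line)
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            problems.append(f"line {line_no}: missing fields {sorted(missing)}")
            continue
        # An identifiable voice with no consent reference is exactly the
        # gap the Act exposes to liability.
        if entry["contains_identifiable_voice"] and not entry["consent_reference"]:
            problems.append(f"line {line_no}: identifiable voice without consent")
    return problems

if __name__ == "__main__":
    issues = audit_manifest(Path("training_manifest.jsonl"))
    print("certifiable" if not issues else "\n".join(issues))
```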
Comparison to Existing Frameworks
Tennessee’s approach builds on California’s well-established publicity rights framework while extending protection to synthetic representations. California Civil Code Section 3344 has long protected name, voice, signature, photograph, and likeness, but courts have struggled to apply these provisions to AI-generated content created without direct recording or capture of the individual. Tennessee’s explicit inclusion of simulated voices removes this ambiguity, providing clearer grounds for enforcement.
Unlike California’s statute, which requires proof of knowing use, Tennessee’s Act reaches tool developers whose products enable unauthorized imitation, even if the developer does not directly publish the infringing content. This expanded liability model reflects a legislative judgment that technology intermediaries bear responsibility for foreseeable harms arising from their platforms.
The Broader Policy Shift
Federal law does not yet provide equivalent protection. Congress has held hearings on AI impersonation and deceptive media, but there is no national right to control one's voice or digital likeness. The Federal Trade Commission has begun examining AI-related deception, but its authority extends to consumer protection, not personality rights. The ELVIS Act therefore operates as a model statute, extending the long-standing treatment of identity as a property right to its synthetic reproductions.
For attorneys drafting contracts or policies, the Act provides a reference point for defining consent, attribution, and liability in the use of AI-generated likenesses. Future state or federal bills may adopt its structure, using Tennessee’s approach as a blueprint for balancing privacy, creativity, and technological innovation. This creates a short-term challenge for firms that operate nationally, as a patchwork of state-specific rules will likely emerge before a federal framework takes shape.
Practical Guidance for Counsel
Lawyers advising clients in media, entertainment, or AI development should start with three steps. First, review existing contracts for likeness and publicity clauses to confirm they extend to synthetic or AI-generated representations. Second, audit the use of voice and image datasets in product development and marketing. Third, implement clear disclosure and consent policies when AI-generated likenesses appear in commercial content. These measures reduce exposure while preparing for similar legislation in other jurisdictions.
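For the third step, one lightweight approach is to stamp every published piece of AI-generated likeness content with a disclosure record tying it to its consent basis. The schema in this sketch is a hypothetical example, not a format required by the Act.

```python
# A minimal sketch of attaching disclosure and consent metadata to
# AI-generated likeness content before publication. Schema is invented.
import json
from datetime import datetime, timezone

def stamp_disclosure(content_id: str, subject: str, consent_ref: str) -> str:
    """Produce a publication record tying content to its consent basis."""
    record = {
        "content_id": content_id,
        "ai_generated_likeness": True,     # disclosure flag for reviewers
        "subject": subject,                # whose likeness appears
        "consent_reference": consent_ref,  # e.g. a signed license identifier
        "stamped_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Example: a marketing clip using a licensed synthetic voice.
print(stamp_disclosure("clip-0042", "Jane Doe", "LIC-2024-117"))
```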
In litigation, counsel will need to determine what evidence proves a voice or image is identifiable to a specific person. Expert testimony in sound analysis, training data provenance, or audience perception could become standard. Discovery may extend to model training logs or vendor documentation. These cases will test the capacity of courts to apply analog doctrines of identity and authorship to digital environments.
What Comes Next
The Act’s real test will come through enforcement. Early cases will determine how courts interpret “primary purpose” when assessing tool liability, how broadly they construe the news and commentary exceptions, and whether Tennessee’s statute survives dormant Commerce Clause challenges from out-of-state defendants. Plaintiffs’ firms are likely to file test cases against high-profile AI platforms, seeking both injunctive relief and monetary damages.
As other states consider similar legislation, Tennessee’s framework will serve as both template and cautionary tale. Legislatures will watch how courts balance technological innovation against individual rights, and whether the statute’s criminal provisions prove enforceable or merely symbolic. For now, the Act stands as the most comprehensive state-level response to AI-driven identity appropriation.
A Cultural and Legal Signal
The Act’s title acknowledges Tennessee’s most famous performer, but its effect is structural, not symbolic. As of October 2025, no reported enforcement actions have tested the ELVIS Act, so questions about how courts will define “primary purpose,” a “readily identifiable” voice, or the scope of its First Amendment exceptions remain open. The first case to reach a Tennessee court will determine whether the statute operates as a deterrent or a declaration. Until then, the Act remains an untested but influential model.
At the federal level, preemption questions loom. The ELVIS Act fills a regulatory vacuum that Congress has yet to address, and claims under it may eventually collide with federal copyright preemption or Section 230 defenses when platforms and tool developers are sued under its provisions.
For lawyers, the statute’s real significance lies in its precedent. It redefines identity as a protected legal asset and places consent, authorship, and accountability at the core of AI compliance. How far its influence reaches will depend on how quickly enforcement follows, and on how other states, and eventually Congress, choose to harmonize the law.
Sources
- ArentFox Schiff: Elvis Alive: Tennessee First to Implement Rights of Publicity Protections Against AI
- Armstrong Teasdale: Artificial Intelligence and Copyrights – Tennessee’s Elvis Act Becomes Law
- Benesch Law: Tennessee Governor Signs First-of-its-Kind Bill Addressing AI Misappropriation of Voices, Images, and Songs
- Davis Wright Tremaine: Tennessee’s Elvis Act and AI Voice Replicas
- Holland & Knight: First-of-its-Kind AI Law Addresses Deepfakes and Voice Clones
- Saul Ewing: Elvis Act – Tennessee Law Addresses AI’s Impact on Music Industry
- Vanderbilt Law School: Why Tennessee’s Elvis Act Is the King of Artificial Intelligence Protections
- Wikipedia: ELVIS Act
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All sources cited are publicly available through law firm publications and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.
See also: Truth on Trial: Courts Scramble to Authenticate Evidence in the Age of Deepfakes
