Deepfake Laws 2026: 11 Key Federal And State Rules
Discover key U.S. laws and proposals shielding citizens from misinformation, deepfakes, and unauthorized digital replicas in media and elections.

Laws Combating Fake News and Deepfakes
Advancements in artificial intelligence have enabled the creation of highly convincing false media, known as deepfakes, which pose risks to personal reputation, elections, and public trust. Legislators at federal and state levels are responding with targeted laws to regulate these technologies while balancing free speech protections.
Understanding Deepfakes and Their Societal Impact
Deepfakes use AI to alter images, videos, or audio, making individuals appear to say or do things they never did. This technology fuels misinformation, scams, and reputational harm. For instance, bad actors exploit voice manipulation to access financial accounts, prompting calls for regulatory action. In elections, deepfakes can sway voters by fabricating candidate statements, undermining democracy. Businesses face liabilities for distributing such content unknowingly, necessitating robust compliance strategies.
The rise of digital replicas—AI-generated versions of a person’s likeness or voice—amplifies these concerns. Without legal boundaries, unauthorized replicas could proliferate in advertising, entertainment, and fraud, eroding individual control over personal identity.
Federal Initiatives Targeting Digital Misinformation
Congress is advancing bills to establish nationwide standards. The NO FAKES Act of 2025 (S.1367 and H.R.2794) creates a federal right of publicity for digital replicas, allowing individuals to sue for unauthorized use of their voice or likeness. It imposes liability on creators, distributors, and online services, with penalties up to $750,000 per violation for non-compliant platforms.
Key provisions include:
- A notice-and-takedown system similar to DMCA, shielding compliant platforms from liability if they remove infringing content promptly.
- Exemptions for news, documentaries, satire, and First Amendment-protected commentary, ensuring no broad censorship.
- Preemption of conflicting state laws, except those on elections or explicit content, to standardize protections.
Another proposal, the Preventing Deep Fake Scams Act (H.R.1734), addresses financial fraud by mandating a task force to combat voice and audio deepfakes targeting consumers. The TAKE IT DOWN Act, signed in 2025 with platform takedown obligations taking effect by May 2026, criminalizes publishing nonconsensual intimate imagery, including AI-generated digital forgeries, adding criminal teeth to civil remedies.

State-Level Responses to AI-Generated Deception
States are filling gaps with tailored statutes. California’s AB 2655 and AB 2839 target election deepfakes, requiring platforms to remove or label deceptive content near voting periods. A federal court initially struck down AB 2655 as conflicting with Section 230, but the state is appealing to reinstate the measure.
New York updated its laws effective 2025-2026: contracts authorizing digital replicas of performers are unenforceable unless they spell out the intended uses and the performer had legal or union representation, and ads featuring synthetic performers must carry disclosures, with fines up to $5,000. Tennessee’s ELVIS Act protects performers’ voices and likenesses, including postmortem, and extends liability to voice-cloning tools.
Eleven states now restrict AI deepfakes in political ads, varying in scope:
| State | Key Requirement | Applies To |
|---|---|---|
| California | Remove/label deceptive content | Platforms, 120 days pre-election |
| Texas | Labeling for creators | Political ads only |
| Wisconsin | Disclosure mandates | Ad producers |
| Minnesota | Broad distribution liability | Deepfakes generally |
These laws often exempt media if unaware of fakes but raise compliance burdens for broadcasters under equal time rules.
Right of Publicity: Protecting Personal Likeness
Traditional right of publicity laws prevent commercial misuse of one’s image or voice. Deepfakes extend this to digital realms. The NO FAKES Act modernizes it federally, covering postmortem rights and online distribution. States like New York void vague contract clauses allowing replicas without safeguards.
Challenges include defining “unauthorized” use amid AI tools’ accessibility. Courts must distinguish harmful fakes from protected parody, as seen in ongoing Section 230 disputes.
Obligations for Platforms and Businesses
Online services—search engines, apps, cloud providers—must implement takedown processes under NO FAKES, facing steep fines otherwise. Good faith efforts grant immunity, but ambiguities persist on monitoring duties.
Businesses should:
- Develop AI policies for synthetic media handling.
- Train teams on deepfake detection.
- Audit vendor contracts for replica clauses.
- Monitor state-specific disclosure rules.
Broadcasters risk liability for airing unlabeled political deepfakes, potentially clashing with FCC no-censorship mandates.
Balancing Regulation with Free Speech
Laws carve out exceptions for journalism, criticism, and satire to uphold the First Amendment. NO FAKES avoids election-specific rules, reducing overreach challenges unlike California’s AB 2655. Still, critics argue platforms’ immunities under Section 230 shield bad actors.
Enforcement relies on user reports, avoiding proactive censorship. Penalties deter malice without stifling innovation.
Practical Strategies for Compliance in 2026
As new laws take effect in 2026, entities must adapt: implement watermarking or provenance metadata for AI content, partner with detection vendors, and document takedown responses. For elections, platforms should prepare reporting mechanisms. Individuals can register agents with the Copyright Office for faster remedies.
Global alignment lags, but U.S. trends influence international standards, emphasizing disclosure in AI media.
Frequently Asked Questions
What is the NO FAKES Act?
The NO FAKES Act establishes federal protections against unauthorized digital replicas of a person’s likeness or voice, with takedown safe harbors for platforms.
Do state deepfake laws apply to broadcasters?
Some do, requiring labels or risking liability unless unaware, but FCC rules limit refusals of candidate ads.
How do deepfakes impact elections?
They create deceptive videos of candidates, prompting laws like California’s AB 2655 for removal within election windows.
Are there penalties for non-compliant platforms?
Yes, up to $750,000 per replica under NO FAKES, reduced for good faith takedowns.
What defenses exist against deepfake claims?
News, parody, documentaries, and relevant ads are exempt if not misleading.
References
- Proposed Legislation Reflects Growing Concern Over “Deep Fakes” — O’Melveny & Myers LLP. 2025-approx. https://www.omm.com/insights/alerts-publications/proposed-legislation-reflects-growing-concern-over-deep-fakes-what-companies-need-to-know/
- Countdown to Data Privacy Day 2026: Deepfakes, Digital Replicas — JD Supra. 2026-approx. https://www.jdsupra.com/legalnews/countdown-to-data-privacy-day-2026-2331090/
- California To Court: Reinstate ‘Deep Fake’ Political Ad Ban — MediaPost. 2026-01-14. https://www.mediapost.com/publications/article/411995/california-asks-court-to-reinstate-ban-on-deep-fa.html
- 11 States Now Have Laws Limiting Artificial Intelligence Deep Fakes — Broadcast Law Blog. 2024-04-approx. https://www.broadcastlawblog.com/2024/04/articles/11-states-now-have-laws-limiting-artificial-intelligence-deep-fakes-and-synthetic-media-in-political-advertising-looking-at-the-issues/
- Text – H.R.1734 – Preventing Deep Fake Scams Act — Congress.gov. 2025-2026. https://www.congress.gov/bill/119th-congress/house-bill/1734/text
- S.1367 – NO FAKES Act of 2025 — Congress.gov. 2025-2026. https://www.congress.gov/bill/119th-congress/senate-bill/1367