NO FAKES Act Introduced to Protect Us from Deepfakes: What Do We Know So Far?

Seems like we won't hear Scarlett Johansson's voice anywhere but her socials and movies anymore.

Photo by Rosa Rafael / Unsplash

The NO FAKES Act is a long-awaited response to the use of a person's likeness, such as their voice or image, in audio (hey, OpenAI with its Sky voice assistant!), photo, or video content without that person's consent. The Act was introduced at the end of July by a bipartisan group of senators: Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis.

What is The NO FAKES Act?

This bill follows an earlier draft presented in October and seeks to enhance legal protections against the unauthorised use of people's voices and images by AI. The NO FAKES Act is similar to the NO AI Fraud Act, proposed earlier this year, and its goal is to strengthen the right of publicity, ensuring that this right passes to a person's heirs or estate executor after their death.

The timing is quite fitting, considering recent cases like the controversy over OpenAI's use of a voice resembling Scarlett Johansson's without her consent. The issue of deepfake pornography, which has affected celebrities and students alike, has also played its part, prompting action from figures such as Alexandria Ocasio-Cortez, whose bill addressing the problem recently passed the Senate unanimously.

Read also: Taylor Swift AI Images Prompt US Bill to Tackle Deepfakes

AI's impact on the music industry has also been contentious, with AI appearing in both authorised projects (like The Beatles' 'Now and Then') and unauthorised voice clones. Notable cases of the latter include the notorious 'Fake Drake' track, a banger made by an anonymous TikToker with AI clones of Drake's and the Weeknd's voices that was later removed from all platforms, and Drake himself employing Tupac's voice without permission in a diss track. A cease and desist from the Tupac Shakur estate led to that track's removal. Major record labels have also taken legal action against AI music generators for unauthorised use of their artists' songs.

Note that the Act is currently a proposed piece of legislation and not yet law. If enacted, it would apply uniformly across all US states, creating a federal standard for these rights.

How will the NO FAKES Act work?

First and foremost, the Act proposes to create a federal property right for individuals over their voice and likeness. This right is not assignable during the individual's lifetime but can be licensed for a period of up to 10 years (or 5 years for minors). It expires 70 years after the individual's death, allowing for post-mortem transfer of rights.

A mechanism will be established for individuals to report unauthorised replicas, requiring platforms to take down such content promptly upon notice. Online service providers will not be held liable if they act quickly to remove unauthorised content.

However, the Act includes exceptions for certain types of content, such as news reports, documentaries, and works of parody or satire, which are protected under the First Amendment. This aims to balance the rights of individuals against the public interest in free expression.

What happens if the Act is violated?

Individuals or companies that create, publish, or distribute unauthorised digital replicas of a person's likeness or voice without consent can be held liable for damages. This includes not just the creators but also those who share or host such content. The Act proposes statutory damages starting at $5,000 for each unauthorised replica created or distributed.

How is the NO FAKES Act different from the ELVIS Act?

The ELVIS Act, or Ensuring Likeness Voice and Image Security Act, is a law passed in Tennessee in March 2024 that provides protections against the unauthorised use of an individual's voice, image, and likeness, particularly in the context of artificial intelligence and deepfakes. So... sounds pretty similar to the Act we're talking about here.

Both Acts aim to protect individuals' rights over their likenesses and voices, particularly in the context of artificial intelligence and deepfakes. Still, they differ in a few ways. Here's how:

  1. Scope of protection:

NO FAKES Act proposes a broad framework that establishes a federal property right over an individual's voice and likeness. It allows individuals to take action against unauthorised use regardless of whether their identity has commercial value. The protections extend for 70 years after death, akin to copyright duration, allowing for post-mortem rights transfer.

ELVIS Act specifically amends Tennessee's existing Personal Rights Protection Act to include voice as a protected characteristic. That older law focused primarily on unauthorised uses in advertising; the ELVIS Act expands protections to cover all unauthorised uses of an individual's voice and likeness. The ELVIS Act allows for civil and criminal actions for violations but does not extend the same post-mortem rights as the NO FAKES Act.

  2. Legal framework:

NO FAKES Act introduces a new federal right that complements existing state-level publicity rights and is positioned as part of U.S. intellectual property law. It includes a notice and takedown process for unauthorised replicas and establishes statutory damages starting at $5,000 for each violation.

ELVIS Act enhances the existing state law by adding voice to the list of protected rights and includes specific civil liability for unauthorised use. It categorises violations into three distinct causes of action and imposes penalties for misuse, including criminal charges.

So, the new NO FAKES Act essentially expands the right of publicity, which has previously been codified exclusively in state statutes and common law.

According to Jennifer A. Kenedy and Jorden Rutledge from Locke Lord LLP, "The most notable difference between the proposed federal NO FAKES Act and current right of publicity jurisprudence is that the Act allows virtually anyone to bring a cause of action. In the conventional understanding of the right to publicity, individuals could take legal action only if their 'persona,' which refers to the commercially valuable public image of an individual, was utilized."

They also note that "monetary damages in traditional right of publicity cases are measured by the monetary loss to the plaintiff or gain to the defendant instead of a statutory 'per violation' measurement of damages. Prior common law only protected those who had identities with commercial value. Under the NO FAKES Act, there is no requirement the offender use the “commercial value” of the victim’s identity or that the victim’s “identity” or “persona” even have commercial value. It is simply unlawful for someone to produce a digital replica of an individual without their consent, regardless of the commercial value of the identity or the use."
