
My Voice Was Cloned: Seeking Justice in the AI Era

Imagine hearing your voice in a commercial you never recorded, saying words you’ve never said. As the line between innovation and infringement blurs, artists wonder how to protect their voices and whether the law can catch up.

Photo by Chad Stembridge / Unsplash

Unauthorized voice cloning is an AI-powered Pandora's box that's causing more chaos than an overly energetic air guitarist at a karaoke bar. Among other things, it was quickly hijacked by scammers and has turned into a nightmare for artists and the entertainment industry.

According to Michael Hasse, a cybersecurity and technology consultant, voice cloning is pretty simple: "Individuals have a relatively limited range of pitches in normal speech, and their use of language will have a typical rhythm. Before LLMs, it took a lot of samples to get a good reproduction, but with modern systems, it can be done with a single sample of a few seconds as the LLM can match that against the known 'library' and fill in the blanks quite accurately."

In other words, we're all just a bunch of predictable meat sacks, and AI has figured out how to mimic us with terrifying accuracy.
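To make Hasse's point concrete, here's a minimal sketch of what few-shot cloning looks like in practice, using the open-source Coqui TTS library and its XTTS v2 model, one of several freely available tools that can condition on a reference clip of just a few seconds. The file paths and text below are placeholder assumptions; this is an illustration of how low the barrier is, not a how-to, and you should only clone voices you have permission to use.

```python
# Minimal few-shot voice-cloning sketch using the open-source Coqui TTS
# library (pip install TTS). XTTS v2 can condition on a reference clip of
# just a few seconds. File paths and text below are placeholders.
from TTS.api import TTS

# Load the multilingual XTTS v2 voice-cloning model (downloads on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize arbitrary new speech in the voice of the reference recording.
tts.tts_to_file(
    text="Words the speaker never actually said.",
    speaker_wav="reference_sample.wav",  # a few seconds of consented audio
    language="en",
    file_path="cloned_output.wav",
)
```

That's the entire pipeline: a few lines of code and one short audio clip, which is exactly why a stray voicemail or TikTok video is all a bad actor needs.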

Sure, voice cloning has been used for some good causes, like helping introverts start podcasts or assisting those who have lost their voices for various reasons. But the dark side of this technology is starting to rear its ugly head.

Voice actors and musicians are finding themselves in the crosshairs of unauthorized cloning, with their voices being used in all sorts of unsavory contexts. Bev Standing, a voice actress, sued TikTok for allegedly using her voice without consent or compensation. Apparently, her sweet tones were used in videos featuring "foul and offensive language," which is a bit like finding out your angelic voice was used in a Quentin Tarantino film.

Scarlett Johansson accused OpenAI of imitating her voice for its “Sky” assistant. (This was not the first time she went after tech bros for using her voice.)

And it’s not just living artists feeling the pinch. The estate of George Carlin, the legendary comedian, sued Dudesy Media Company for producing an AI-generated comedy special mimicking Carlin’s voice and style. The unauthorized special, titled "George Carlin: I’m Glad I’m Dead," racked up nearly 500,000 views on YouTube, proving once again that the internet loves a good dumpster fire.

Even in the world of video games, AI voice cloning is causing a stir. Skyrim voice actors found their performances replicated in mods created for less-than-savory purposes. It’s one thing to slay dragons, but dealing with AI clones in dubious fan mods? That’s a quest no one signed up for.

As the lines between innovation and infringement become blurrier than a Monet painting, artists are left wondering how to protect their voices and whom to ask for help. In this article, we attempt to figure that out.

Content platforms: enablers or enforcers?

Entertainment industry leaders like YouTube, Spotify, and TikTok try to play sheriff (as they should), but they often end up as bumbling deputies, with policies about as useful as a chocolate teapot at five o’clock tea.

At Spotify, the lines are about as clear as mud. According to Spotify's ruler, Daniel Ek, some AI-generated content gets the green light, but not all of it. Music that impersonates a real artist? Hell no. Auto-Tune and Studio Sound? Totally fine. AI-generated tunes that were "inspired" by an artist without going full copycat? Also fine. And that's exactly where the clarity runs out.

TikTok is always eager to stay ahead. They're slapping an "AI-generated" label on watermarked content from third parties. It's a baby step, but hey, at least they're crawling in the right direction. But let's not forget that its effectiveness depends on user awareness, and we are talking about the platform where the most viral content includes people eating Tide Pods.

YouTube, the king of the video content hill, demands that creators label realistic-looking AI-generated content. It’s a nice gesture, but enforcing it is another story. They’re essentially telling creators, "Please, be honest," but without an army of moderators and top-tier detection algorithms, it's a bit like asking cats to police a dog park. (And don’t get me started on Google’s AI advancements, ‘cause they suck.)

But are these policies worth the digital paper they're written on? Just ask Drake. The rap king found himself in the AI hot seat not once but twice. First, an AI-generated track using his and The Weeknd's voices, "Heart on My Sleeve," blew up last year, racking up over 8.5 million views on TikTok before Universal Music Group threw a hissy fit and demanded its removal from all platforms.

This year, Drake was back at it (this time not as the victim but as the offender) with an AI-generated Tupac track. The track vanished from his Instagram and X accounts after Tupac's estate threatened to sue him. But just like a bad penny, both songs keep turning up on YouTube and TikTok, no matter how hard anyone tries to flush them. And that’s all you need to know about platforms and their rules.

There are even cases where social media giants hurt, rather than help, victims of voice cloning. Take Erica Lindbeck, the voice of Futaba in Persona 5. Lindbeck found herself in a nightmare when an AI-generated video popped up replicating her voice as Futaba belting out a Bo Burnham tune. She asked fans to report the video so it would be removed and instead got criticized. The harassment got so bad that Lindbeck had to delete her Twitter/X account.

It’s not just content-hosting platforms that are looking into setting up ground rules; AI companies are, too. Here’s the take from lawyer Georgia Renard, an associate at XVII Degrees in Sydney, Australia: “Most AI companies have published policies, guidelines, charters of use, and codes of conduct that set out best-practice usage of their services. These have proven to be largely ineffective. As an example, OpenAI – the company behind ChatGPT and the as-yet-unreleased Voice Engine – has four universal usage policies that govern the use of its services. These are to:

1. Comply with applicable laws.

2. Not use [OpenAI’s] services to harm yourself or others.

3. Not repurpose or distribute output from [OpenAI’s] services to harm others.

4. Respect [OpenAI’s] safeguards and safety mitigations.

"While these policies are all well and good to have, these sorts of best-practice usage policies are evidently not doing enough to prevent misuse. As far as responding to misuse, the most that OpenAI and comparable AI companies seem to be doing to address use that violates their policies is banning accounts. Banning accounts is, however, an inherently reactive step and does nothing to redress harm already caused. It also does nothing to prevent people with banned accounts from creating new accounts with different details.

"It is interesting to note that OpenAI has not released its AI voice cloning service 'Voice Engine' on the open market yet, deeming it 'too risky' for general release. This is not a strong vote of confidence in the capabilities of AI companies to prevent and regulate uses of their services that violate the law.”

And not only that: as the Scarlett Johansson case mentioned earlier in this article demonstrates, AI companies can’t even comply with their own rules. So what's the solution? If only I knew. But one thing's for sure: both content platforms and AI companies need to step up and do more.

So, is there anything you can do if someone clones your voice? Should Kanye West and Joe Biden silently suffer through the countless AI-generated songs using their voices that plague YouTube?

We asked legal experts where things stand. The main question to start with is, “Who owns the voice?”
