AI is creeping into everything these days, and documentaries are no exception. But Netflix's recent hit "What Jennifer Did" sparked an outrage the streaming service's management wasn't expecting. Viewers noticed something fishy. The audio was too crisp. The photos looked oddly enhanced. Social media exploded with accusations. Netflix had used AI, they said. In a true crime doc. About real people and real victims.
For a while, Netflix stayed quiet. Then co-CEO Greg Peters spoke up. During an earnings call, he said AI offers "new tools to creators to allow them to tell their stories in more compelling ways." But is that the truth, or just a convenient excuse to be lazy and get away with a somewhat inaccurate representation of the facts?
"What Jennifer Did" isn't alone. Adobe is selling fake AI images of the Israel-Hamas war. Real conflict, fake photos. Another Netflix hit, the movie "Under Paris" used AI for underwater shots (something that you don’t really need AI for).
Chris Joseph, creator of the Florida Man Murders podcast, doesn't sugarcoat the issue: "There are innumerable ways for a filmmaker or documentarian to re-create or create the proper scenes needed to make a true crime doc work, or give it that extra kick. You just need to be willing to put in the work. Get creative. Get your hands dirty. That's what makes creating something so exciting. A filmmaker can recreate, say, an environment, with their iPhone, and cheap lighting. See, this is what AI does – it takes creativity, the real work, out of the hands of the creator. It strangles imagination. That's not art."
But as AI gets better, the line between real and fake blurs. And it’s getting more challenging to spot the fake. Documentaries and news are supposed to show the truth. Now we're left wondering: What's real? What's AI? And how is it even allowed?
Read also: How to Make Sure a Voice Is AI-Generated? It's Not Simple But There's a Way
The dark side of using AI in nonfiction
The potential benefits of using AI? They're tempting. It could speed up research. Visualize complex ideas. Fill gaps where footage is missing. And even resurrect dead actors. Sounds great, right?
But hold up. There's a dark side. AI can manipulate. Distort. Fabricate. In a world where truth is already hard to come by, do we really need more layers to uncover it?
According to a recent study, at least 10% of all research may already be co-authored by AI. Many writers, myself included, are guilty of wanting to work faster and sometimes relying too much on whatever AI spits out at us. Especially when it's advertised as a research assistant with real-time (or near-real-time) access to all the information in existence. Here's a quick reliability test, though. I asked AI what color shirt Manson wore during one of his trials. It confidently answered: a white long-sleeved shirt and a buckskin jacket. With links and everything. Sounds plausible. But click through the very links it cites and you'll see that Manson wore a blue jean shirt. AI can sound convincing while being dead wrong. And when you're working on nonfiction, over-relying on it can be dangerous.
Chris Joseph of the Florida Man Murders podcast has a thing or two to say about it: "Manipulating images or audio in a supposed nonfiction work – where truth and transparency are paramount – is despicable. By definition it ceases to be nonfiction. It's also just a matter of ethics. One of the most egregious examples of this was with the 2021 Anthony Bourdain documentary Roadrunner, released three years after his death, where the filmmaker admitted to creating an AI model to recreate Bourdain's voice for certain scenes. The filmmaker got a lot of flack for it, and deservedly so."
Harsh? Maybe. But he's got a point. The line between recreating and fabricating is thin. And once you cross it, can you ever go back?
The creator's dilemma
The pressure and expectations of the industry are crazy high. And for obvious reasons. Back in 1915, Hollywood churned out 2 movies per year; the number climbed to 37 in 1955, 211 in 2000, and 1,480 in 2020. So AI can potentially help creators keep up. AI is writing scripts now. Runway's AI can create video faster than you can say "cut." And LALAL.AI lets anyone apply a voice changer to any recording, so you can have Taylor Swift's voice narrate a documentary free of charge.
Historically, every aspect of movie-making required painstaking manual labor. From shooting scenes to editing, the process was slow and labor-intensive. Today, AI can analyze footage, detect patterns, and edit clips automatically. It can even generate video summaries, making post-production faster and more efficient. Filmmakers can now produce more content in less time, keeping up with the relentless demand for new material.
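To show how mechanical some of this automated editing already is, here's a minimal sketch of cut detection: it finds scene changes in a video file and splits it into clips. It assumes the open-source PySceneDetect library and a hypothetical local file called "footage.mp4", illustrative stand-ins, not the tooling any particular studio uses.

```python
# A minimal sketch of automated scene cutting, assuming the open-source
# PySceneDetect library (pip install scenedetect[opencv]) plus ffmpeg.
# "footage.mp4" is a hypothetical local file, not real production footage.
from scenedetect import detect, ContentDetector, split_video_ffmpeg

# Flag a cut wherever the frame-to-frame color/intensity change
# exceeds the detector's threshold (27.0 is the library default).
scenes = detect("footage.mp4", ContentDetector(threshold=27.0))

for start, end in scenes:
    print(f"Scene: {start.get_timecode()} -> {end.get_timecode()}")

# Split the source video into one clip per detected scene.
split_video_ffmpeg("footage.mp4", scenes)
```

That's pattern detection, not judgment. The tool finds the cuts; someone still has to decide what story they tell.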
However, this shift comes at a significant human cost: by 2026, an estimated 118,500 entertainment jobs could be cut due to AI. Jobs in visual effects and post-production are particularly at risk.
What's a filmmaker to do? Stick to tradition and get left in the dust? Or just go with the flow and use all of those helpful tools? But when does enhancing become fabricating? When does efficiency become laziness? I have so many questions and I'm not sure what the answers are.
Lamont Pete, an executive producer of unscripted television and CEO of W2D, says it's about balance: "AI can be a useful tool for enhancing storytelling, like visualizing difficult concepts or recreating historical environments, but it should not be used to change the facts. Transparency is key—let the audience know when AI is involved. Avoiding AI altogether isn't necessary, but its use must be carefully managed to maintain credibility."
Read also: My Voice Was Cloned: Seeking Justice in the AI Era
The viewer's perspective and AI's impact on public trust
People already struggle to trust the media. Throw AI into the mix, and you’re asking for trouble. Consumers are not dumb. They know when something feels off. Especially when they expect real stories, crafted by real people. The backlash against AI-generated content is growing.
And the case of "What Jennifer Did" is just one example. One viewer's reaction? "If they're using AI, it's not ethical. Pretty cut and dry."
It's not just Netflix. Sports Illustrated recently got caught publishing articles under fake AI-generated writers. The result? The publisher's CEO got fired. And how about the German magazine Die Aktuelle publishing an AI-generated "interview" with Michael Schumacher, who hasn't given an interview since his skiing accident in 2013?
The history YouTube channel Kings and Generals used AI to generate scripts, which led to inaccuracies. Its audience was furious.
Viewers want the truth. They expect factual accuracy, and if they start questioning it, the whole genre suffers.
So what's the solution?
Chris Joseph believes creators should always disclose their use of AI in any work, but especially in nonfiction: "Any creator should be willing to have an open conversation about how the sausage gets made. And if a creator isn't willing to be transparent about it, it's probably because they know, deep down, that what they created is pure hack. If filmmakers don't disclose their use of AI, they risk alienating their audience. Take the Roadrunner documentary about Anthony Bourdain. When people found out Bourdain's voice was recreated with AI posthumously, they were pissed. It felt like a betrayal. This is a good thing. Because it's downright problematic at every level. Ultimately, AI will be the end of art and truth as we know it – two vital things that make societies function. It's anti-art, anti-truth, anti-beauty. It's anti-human."
Lamont Pete agrees: "Transparency is crucial. At a minimum, use of AI should be acknowledged in the credits. Additionally, detailed disclosures can be provided on official websites or in accompanying materials, explaining how AI was used and ensuring that it did not alter the factual content. Without proper guidelines or disclosure, using AI can severely damage public trust. Viewers expect factual accuracy from nonfiction media, and any manipulation can lead to skepticism. This could erode trust not just in individual works but across the entire genre of factual media."
The future of AI in nonfiction
As AI gets smarter, its role in documentaries is growing. And we should probably make our peace with it. Right now, it's used for filling gaps, recreating scenes, and enhancing visuals. But the future? It looks even more AI-heavy. Imagine documentaries where AI not only recreates past events but also predicts future scenarios. It sounds like sci-fi, but we're heading there fast.
So where's this going exactly? More AI? More fake footage? More "enhanced" reality?
What's next? AI-generated interviews with dead people? Oh wait, that already happened.
So when does AI cross the line in nonfiction? It happens when it starts to alter the truth. Documentaries are supposed to inform, not mislead. If AI begins to create its own version of events, the integrity of nonfiction is at risk.
But I try to look for the bright side too. AI has the potential to make documentaries more engaging. Interactive features, personalized content, and immersive experiences could be the norm. But this only works if the audience trusts the content. These days many viewers are smart, skeptical, and demand transparency. AI can enhance content, but viewers need to know when it’s being used. If audiences feel deceived, they’ll turn away.
One thing's certain: AI in nonfiction isn't going away. It's only getting stronger, smarter, more pervasive. The question isn't if it will change documentaries. It's how much. And whether we'll be ready when it does.