In 2024, elections are set to take place in at least 64 countries, representing about 49% of the global population. That's a lot of people to fool. And this time, AI technologies, spreading like wildfire, can potentially influence voters in ways we've never seen before.
Remember that robocall of "Biden" telling New Hampshire Democrats to stay home on primary day? Then there were the Trump campaign's reposts of viral AI-generated images featuring Taylor Swift. And then there was Elon Musk sharing a deepfake ad featuring Kamala Harris, sending some people into a frenzy of viciously arguing that such behavior is not free speech.
Edward Tian, CEO of GPTZero, a tool that so far seems able to reliably detect AI-generated text, sees a clear trend: "There has absolutely been an increase in AI misuse as it relates to political misinformation this year, and that’s primarily because AI technology can pretty easily be used by anyone these days." With OpenAI just closing yet another funding round and raising $6.6 billion, it's pretty obvious that AI tech is here to stay and will only get more advanced.
"The scale of the issue is growing," Tian warns. "Generative AI is almost entirely unregulated, and there are countless platforms available for any person to use. Because there is a major problem with media literacy today, it’s often not very hard for false AI-generated imagery or profiles to be believed without question."
With AI technologies advancing, we're all left wondering: are these instances likely to sway voters in a "significant" way?
Cody Buntain, an assistant professor at the University of Maryland College of Information, offers some perspective: “Probably not. The New Hampshire primary story seems more about de-mobilization (getting voters to stay home), whereas the Trump instances are more about trying to gain new voters. A major risk of deepfakes and AI-generated content is more about uncertainty, so the de-mobilization example is more problematic in my mind.”
So, is AI-powered misinformation harmless, or a real threat to elections? Well, it's too early to jump to conclusions, but one thing's clear: as AI gets smarter, so do the lies. And at a time when truth is already on shaky ground, being suppressed or heavily edited, that's something worth contemplating.
AI global ripples: Impact on elections worldwide
Think AI election interference is just an American problem? It’s actually global.
Let's start with a reality check. A report from the Alan Turing Institute found that of 112 national elections either held since January 2023 or upcoming in 2024, only 19 showed signs of AI interference. So far, there's no proof that AI significantly changed any election outcome. But that doesn't mean we can relax.
Ryan Waite, VP of Public Affairs at Think Big, whose firm serves a diverse range of clients, including national campaigns and party committees, paints a worrying picture: “In recent elections, such as those in Slovakia and Nigeria, AI-generated deepfakes were used to create fake audio recordings of political figures making controversial statements, with the intent to influence voter perceptions just before the elections. For example, in Slovakia, an AI-generated recording falsely depicted a candidate planning to raise taxes on beer and rig the election. Similarly, in Nigeria, deepfake audio implicated a presidential candidate in election fraud. These incidents underscore how deepfakes can be timed to maximize disruption, especially in close races where even a small shift in voter sentiment can change the outcome.
“In the upcoming U.S. elections, similar tactics could be employed, particularly in battleground states or during critical moments in the campaign. The danger lies in the ability of these deepfakes to spread rapidly on social media before they can be debunked, potentially swaying voters who encounter them. Moreover, the existence of deepfakes contributes to what scholars call the "liar's dividend," where politicians or their supporters can dismiss legitimate evidence as fake, further eroding public trust in the authenticity of information.”
Dr. Ashlee Frandell, an assistant professor at UNLV whose research focuses on policy communication and the use of technology in government, particularly artificial intelligence (AI), offers more examples:
"In Indonesia's recent elections, a presidential candidate used AI-generated content to rebrand their image, presenting themselves in a more favorable light. A similar scenario could unfold in the upcoming U.S .elections, where AI tools might be used by candidates to enhance their public image or discredit their opponents. Additionally, in Brazil, deepfakes were used to spread false narratives during elections, showing how these tactics can be employed to influence voters and potentially alter election outcomes."
Speaking of Brazil, they're not messing around. Daniel Trielli, Assistant Professor of Media and Democracy at the University of Maryland, explains:
"In Brazil, for instance, using deepfakes and generative AI in political campaigns is strictly forbidden. But that reflects how the electoral process is closely regulated at the federal level in Brazil, and they have a different interpretation of freedom of speech. In the U.S., there's no real expectation of governmental content regulation."
So what might this mean for the upcoming U.S. elections? Waite warns that we could see similar tactics, especially in those nail-biter battleground states. The U.S. might not go full Brazil with an outright ban on AI in campaigns. But as these digital dirty tricks spread globally, you can bet American lawmakers are taking notes. The 2024 election could be a whole new game, with everyone involved still figuring out the rules.
The mechanics and consequences of digital deception
Deepfakes mess with our whole idea of what's real. Right now, in many cases, we can still figure out what's real and what's not. But will it stay that way, given how fast the tech is advancing?
Edward Tian, CEO of GPTZero, explains why catching these fakes, whether text or video, is a headache even for purpose-built detection software: “Most AI-detection tools only work for a small number of AI language models, which is a large contributor to inaccuracy in detection. So, we wanted to be able to detect as much as possible – so we created a tool that detects ChatGPT, GPT-4, GPT-3, GPT-2, LLaMA, Gemini, Claude and other AI services based on those models. This allows us to be a lot more accurate. There is definitely a need for tools that can more accurately identify the fakeness of videos and images. I think a reason why this hasn't been as important as AI text detection (other than AI-generated photos/videos only becoming a problem more recently) is because it is often easier for the average eye to spot AI generation in these formats. While people certainly still fall victim to believing these generations, it is still often a lot easier to recognize when things look "off" in a photo or video, versus with text. However, as this technology becomes more sophisticated, those easily identifiable factors will likely become less easily identifiable, so detection software is going to be super important.”
So the big problem? There are tons of different AI models out there, and most detection tools only catch a few of them. Tian's company tries to cast a wider net, but it's a constant game of catch-up.
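To make that "wider net" idea concrete, here's a minimal sketch of a max-score detector ensemble. To be clear, this is a toy illustration, not GPTZero's actual method: the per-family scorers are hypothetical stand-ins built on one crude feature ("burstiness," the variance of sentence lengths), whereas a real tool would use trained classifiers per model family.

```python
from typing import Callable, Dict

def burstiness(text: str) -> float:
    """Variance of sentence lengths, in words. Human prose tends to be
    'burstier' (more varied) than typical model output."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)

# Hypothetical per-family scorers; real ones would be trained classifiers.
# Each maps text to a 0..1 "looks AI-generated" score.
DETECTORS: Dict[str, Callable[[str], float]] = {
    "gpt-family":    lambda t: 1.0 / (1.0 + burstiness(t)),
    "llama-family":  lambda t: 1.0 / (1.0 + 0.8 * burstiness(t)),
    "claude-family": lambda t: 1.0 / (1.0 + 1.2 * burstiness(t)),
}

def likely_ai_generated(text: str, threshold: float = 0.5) -> bool:
    """Max-score ensemble: flag the text if ANY family detector is confident."""
    return max(score(text) for score in DETECTORS.values()) >= threshold

print(likely_ai_generated("It is a sentence. It is another sentence. It is a third one."))
```

The structure also shows why the catch-up game never ends: covering a new model family means building and adding yet another scorer to the ensemble.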
So what happens when voters can't tell what's real anymore? Dr. Ashlee Frandell from UNLV paints a picture: "If voters are made aware of media manipulation by candidates or other stakeholders, this could confuse and disillusion them, leading to lower voter turnout. Anything that undermines the legitimacy of election results could affect turnout and even the validity of the elections."
Frandell backs this up with some hard numbers: “A Gallup survey shows that in 2024, only 23% of Americans trust the federal government. That's down from 35% in 2022.”
The fear is that AI-powered lies could make this trust problem even worse. When people don't know what to believe, they might just give up on voting altogether.
Guarding "democracy" or making it worse: Tech giants and government action
Of course, the government and tech giants are the first to come to the rescue. But are they actually helping? Let's start with the social media big shots.
Theresa Payton, Former First CIO of the White House, gives us the rundown: “Social media platforms like Google and Meta have implemented some features such as AI detection tools to spot deepfakes and misinformation and are even starting to tag suspicious content with warnings. These platforms are also beginning to experiment with new technologies to spot misinformation faster. However, there is a dire need for more transparency. Social media platforms need to step it up by giving their users more concrete tools to fact-check and verify their content and provide clearer guidelines for content as a whole.
“Agencies such as the Cybersecurity and Infrastructure Security Agency are regularly working to protect the elections by combating AI risks like deepfakes and misinformation. The government is making an effort to educate the public, media, and officials about the dangers of AI misinformation and how to catch it before it spreads. They are also looking at ways to regulate AI usage in political campaigns to ensure specific platforms are taking responsibility for content shared on their sites.”
Sounds promising, right? AI tools to spot fakes, warning labels on sketchy posts. But Payton's not impressed (and neither am I). She says we need more transparency and better tools for users to fact-check things themselves.
Dr. Ashlee Frandell from UNLV adds some specifics: "Meta has introduced AI tools to detect and watermark manipulated media. The Federal Communications Commission (FCC) has taken steps to ban robocalls. The U.S. government's efforts to create a broader regulatory framework are still in development."
It's a start, but is it enough?
Mike Nellis, founder of Quiller.ai and former adviser to Kamala Harris, and, according to his LinkedIn bio, organizer of White Dudes for Harris, doesn't think so: "The U.S. government isn't taking much action at the moment. There have been some positive moves—California recently passed a set of regulations addressing AI and AI-generated content, which were a step in the right direction. The FCC and FEC have also released some helpful guidelines, particularly cracking down on robocalls after the fake Joe Biden calls in New Hampshire.
"However, much like how Congress was caught off guard by the rise of social media and the internet, AI is presenting a similar challenge. We need federal leadership—whether from the executive branch or Congress itself—to hold AI companies accountable for the technology they're developing and ensure it's safe. Without action, we could see mass job loss due to AI and robotics, a world where it's impossible to tell what's true and what's fake, and even threats to personal safety, like the ease of creating AI-generated revenge porn.
"Right now, it's the wild west out there, and hardly anything regulates the internet, let alone AI. If Congress doesn't step in, these problems will only get worse."
And as I write this piece, one of those positive moves has already been halted: a judge in California blocked the state's new law restricting election-related AI memes, ruling that it "does not pass constitutional scrutiny."
So what's a voter to do while the government and tech companies play catch-up? Theresa Payton, former White House CIO, offers some advice:
"During election seasons, voters should stay wary of photo and video content they come across. Always ensure you are going to credible sources and outlets for your information. Remember to keep in mind that if something seems shocking or timed in an 'all too perfect' manner, it should raise a big red flag. A great way to check if something is trustworthy is by going to other credible sources and cross-checking the content. Another method to fact-check your content is using reverse image or video tools to find out if anything has been altered."
Solid tips, but they put a lot of work on the average voter. And let's be real – how many people are going to run reverse image searches on every political meme they see?
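For the curious, here's roughly what a DIY version of that image check can look like under the hood: perceptual hashing, which fingerprints what an image looks like rather than its exact bytes. This sketch uses the open-source Pillow and imagehash Python packages; the file names are hypothetical placeholders, and the threshold is a judgment call, not a standard.

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Fingerprint a trusted original and a suspicious repost (hypothetical files).
original = imagehash.phash(Image.open("official_photo.jpg"))
suspect = imagehash.phash(Image.open("viral_repost.jpg"))

# Subtracting two hashes gives their Hamming distance: 0 means visually
# identical; small values usually mean harmless recompression or resizing;
# large values suggest real visual changes (crops, splices, generation).
distance = original - suspect
print(f"Hash distance: {distance}")
if distance > 10:  # rough cutoff chosen purely for illustration
    print("The images differ substantially; worth a closer look.")
```

Reverse image search engines work on a broadly similar principle, matching a fingerprint of your upload against billions of indexed images, just at a vastly larger scale.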
The bottom line? Are tech giants and the government guarding "democracy," or are they inadvertently making things worse? The answer, as with everything else in this article (sorry), isn't clear. What is clear is that without greater transparency and stronger action, we're left to figure things out largely on our own. But maybe it IS for the best?
The future of election integrity, with or without AI
Speaking of figuring things out ourselves… Let's get to the bottom of the problem here.
It's not about AI-generated memes. So what if an eccentric billionaire posts them or a presidential candidate shares them? Everybody knows Taylor Swift supports the Democratic Party; she's been adamant about it for years. If Trump reposts someone else's meme about her, it doesn't mean he wants to deceive anyone. He's looking for likes, shares, and virality. Memes and outrageous content – that's the language younger voters understand. It's just how social media marketing works these days.
So, instead of this pointless AI witch hunt, it might be more helpful to remind ourselves how elections actually work. In the US, the popular vote matters far less than the Electoral College vote. We're talking about 538 electors representing the states – people mostly unknown to the general public (except, perhaps, to the few who follow politics closely). These electors ultimately decide who wins.
So even if major AI interference or ballot fraud (as alleged in Dinesh D'Souza's documentary "2000 Mules" – a film much of the corporate media keeps trying hard to discredit, particularly ahead of the upcoming US elections, though in my view it documents its claims better than its critics do) convinced people that a particular candidate had won the popular vote, it ultimately wouldn't matter. The fate of the election is in the hands of those Electoral College voters, the so-called "faceless electors." Their votes mean more than the public's.
When the 538 electors met on December 14, 2020, Biden's Electoral College margin turned out wider than his popular vote lead: he received 306 electoral votes, or 56.9% of the total, against a 51.3% share of the popular vote. This was nearly identical to Trump's 2016 win, when he defeated Clinton 304–227 despite receiving 2.8 million fewer popular votes.
And let's not forget the 2000 election. Americans learned Gore had won the popular vote by 543,895 votes. But it's winning the Electoral College that counts: Florida's 25 electoral votes went to Bush, and that decided the presidency. A candidate needs a majority of electoral votes (270 or more) to become President, and that's all there is to it.
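None of this requires trusting anyone's math; the arithmetic is small enough to check yourself:

```python
# Sanity-checking the Electoral College numbers cited above.
TOTAL_ELECTORS = 538
majority_needed = TOTAL_ELECTORS // 2 + 1
print(majority_needed)  # 270

biden_2020 = 306
print(f"{biden_2020 / TOTAL_ELECTORS:.1%}")  # 56.9%

# 2016: seven "faithless" electors defected from their pledged candidates,
# which is why the official tally (Trump 304, Clinton 227) sums to 531
# rather than 538.
print(304 + 227)  # 531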
Maybe instead of focusing on AI and fueling hatred towards people who like viral AI memes, we should all be focusing on how elections are not quite as democratic as we're taught to believe?
And while AI in politics is somewhat of a concern, the bigger question is whether the current election system in some democratic countries (the US included) truly represents the will of the people. Because whether it's Trump or Harris or anyone else, under the current system your vote doesn't really count (unless you're one of those "faceless electors").