
AI Generators Can't Draw Kamala Harris. Intentionally, or...?



AI image generators have come under increasing scrutiny for their inability to produce convincing representations of Vice President Kamala Harris, as first reported by Wired. A recent wave of bizarre and inaccurate depictions has drawn public attention, particularly after Elon Musk shared an image, generated with his xAI venture's Grok tool, that purported to show Harris as a "communist dictator." The image, featuring a woman in a red suit who hardly looked like Harris, sparked online backlash over its lack of resemblance to the vice president, with users mocking the failed attempt. NPR likewise noted that an image generator on the platform X struggled to reproduce Harris's facial features, and that its renditions of the vice president look unnatural (perhaps even more unnatural than AI usually manages).

Social media users were quick to comment on the botched effort. One X user quipped that the Grok-generated figure looked more like actress Eva Longoria, or even Michelle Obama, than Harris. The criticism didn't stop there: many pointed out that Grok consistently struggles to capture Harris's likeness, a shortcoming not seen with other public figures such as Donald Trump.

This would be understandable with Midjourney and OpenAI's ChatGPT, which have strict content guidelines, but open-source models also seem to struggle with Harris. The nonprofit Center for Countering Digital Hate reviewed the policies of well-known AI image generators, including ChatGPT Plus, Midjourney, Microsoft's Image Creator and Stability AI's DreamStudio, and found that all of them ban "misleading" content, most also restrict images that could undermine "election integrity", and ChatGPT prohibits generating images of political figures outright.

In a test by Wired, Grok successfully generated images of Trump but repeatedly fumbled attempts to depict Harris. It often rendered her with inconsistent features, hairstyles and skin tones, sometimes producing outright confusing results (yes, like those looking suspiciously like Michelle Obama or Hollywood actresses). Why is that, though?

Even though some argue that these incorrect depictions might be intentional, the reason may be more prosaic. Some Redditors summed it up as "garbage in, garbage out": the AI simply lacks high-quality training data. Some experts agree, suggesting that models have far fewer well-labelled images of Harris in their datasets than of other prominent figures. A comparison of photo databases supports this theory: Getty Images, for example, holds over 561,000 images of Trump but only around 63,000 of Harris, as Wired reports.

Additionally, experts point to well-documented issues with facial recognition systems, particularly when identifying darker skin tones. This problem, first documented in a 2018 study, may contribute to the AI's inability to properly recognise and replicate Harris's face. Several tests have shown that image generators like DALL-E and Midjourney struggle to accurately depict people with dark skin tones. And the inaccuracies aren't limited to facial features or skin tones: a Brookings Institution analysis found that AI image generators tend to reproduce stereotypes and lack diversity in their outputs. Stable Diffusion, for example, was found to generate images depicting women of colour primarily as fast-food workers or in other service jobs.

A Reddit user likewise noted that when they asked for images of software developers, the vast majority of outputs depicted white men, with few women or people of colour represented. This bias most likely stems from training data that does not adequately represent diverse populations: the algorithms learn from whatever data they are given and can amplify existing societal biases. A rough probe of this kind is easy to reproduce, as sketched below.

Credit: Brookings Institution
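For the curious, here's a minimal sketch of how such a probe could be reproduced with an open model. It assumes the Hugging Face diffusers and torch packages, a GPU, and the public stabilityai/stable-diffusion-2-1 checkpoint; the prompt and seed count are arbitrary illustrative choices, not anyone's published methodology.

```python
# A rough bias probe: generate a batch of images for a neutral prompt and
# hand-label the demographics of the results. Assumes the Hugging Face
# `diffusers` and `torch` packages and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # public checkpoint; swap in any SD model
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a portrait photo of a software developer"

# Fixed seeds make the run repeatable. Demographic patterns only become
# visible across many samples, so generate more than a handful in practice.
for seed in range(8):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"developer_{seed:02d}.png")
```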

Sure, AI companies are working to address these issues by implementing techniques to increase diversity in image outputs, but those efforts haven't always succeeded. Google's Gemini tool, for example, was criticised for generating racially diverse but historically inaccurate depictions of World War II-era German soldiers.

Credit: Google / the Verge

