Elon Musk’s repost of a Kamala Harris deepfake shows he’s no free speech warrior

The X owner claimed his repost of an AI deepfake about Kamala Harris was ‘parody.’ He knows better.

BY Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Elon Musk shares a deepfake, proves himself a big fake

Last Friday night, Elon Musk, the Donald Trump-supporting owner of X, reposted a deepfake video of presumptive Democratic presidential nominee Kamala Harris in which the vice president says she’s a “diversity hire” and doesn’t know “the first thing about running the country.” The video’s creator confirmed to the Associated Press that he used an AI voice-synthesis tool to manipulate audio from a Harris political ad.

People who create and post this kind of thing often claim it was meant as parody or satire, forms of political speech that are protected. And indeed, the creator of the faked Harris video labeled it “Kamala Harris Campaign Ad PARODY,” but Musk’s repost didn’t include that text. After people began pointing out that the repost violated X’s community guidelines, Musk hid behind the “parody” defense, even though the version he shared omitted that label.

Not many reasonable people would believe that the voice in the ad was Harris’s. She’d never say those things. But when the owner of a major social platform ignores his own community guidelines and reposts an unlabeled AI deepfake to his millions of followers, it sends a message: Deepfakes are okay, and community guidelines are optional.

And don’t expect regulators to help combat this element of the misinformation war. “It will be extremely difficult to regulate AI to the point where we can avoid videos like the recent one of Vice President Harris that was shared on X,” says John Powers, a media and design professor at Quinnipiac University. “[T]here should be a stronger push for social media literacy to be introduced to young students as a way to ensure that the next generation is diligent in examining information and news as it comes across their social feeds.”

Musk’s repost is particularly galling when you remember the reasons he gave for buying Twitter in the first place. Twitter had been considered the “town square” for open discussion, especially on political topics. Musk thought it was a place dominated by a “woke” mindset, and intolerant of “conservative” viewpoints, as evidenced by the network’s 2022 ban of the right-wing Christian parody site Babylon Bee. He said in one TED interview that he wanted to make Twitter a “platform for free speech around the globe” and called free speech a “societal imperative for a functioning democracy.”

Is the Harris deepfake Musk’s idea of “free speech”? The man understands AI; he owns and oversees a multibillion-dollar AI company (xAI). He willingly posted disinformation to his 190 million followers, and, when challenged, doubled down with a rather flimsy “parody” defense. Musk now denies he ever said he would give the Trump campaign $45 million per month, but he’s still using his X platform to campaign for Trump, including with Trump’s preferred currency: BS.

How a new era of hyper-personalized AI political ads might work 

Deepfakes aren’t the only way AI could seriously impact an election. Brand marketers and political strategists have long been enticed by the possibility of creating ads that are tailored to a single target individual rather than to a whole demographic segment. Such an ad might reflect an understanding of the individual’s demographic information, their voting record, and signals from their social media activity about their politics and the key issues they care about. It could also reflect an understanding of the individual’s “psychographic” profile, based on their levels of agreeableness, neuroticism, openness to new experiences, extroversion, and conscientiousness.

Researchers showed in 2013 that a person’s social “likes” could predict their “big five” personality-trait levels. Cambridge Analytica, which worked for the Trump campaign in 2016, believed such levels could accurately telegraph the political issues people were sensitive to (though it’s unclear whether that strategy had any effect on the election).

But if such a personality “graph” could be assembled in a database, it may be possible to render each individual voter profile as a template. That templated information could be fed to a large language model (LLM) as part of a prompt, and the LLM could be instructed to generate ad copy or a fundraising letter that touches on the political issues a campaign believes will trigger a response from that voter. Rapid-response videos might even be generated that rebut a claim against a candidate while hitting a specific voter’s hot-button issues or political sensitivities.
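To make the templating idea concrete, here is a minimal sketch in Python of how such a pipeline might look. It is an illustration under assumptions, not a description of any real campaign tool: the VoterProfile fields, the build_prompt template, and the call_llm stand-in are all hypothetical, and an actual system would replace call_llm with a request to whatever hosted model the campaign used.

```python
from dataclasses import dataclass

@dataclass
class VoterProfile:
    # Demographic and behavioral signals a campaign might hold on one voter (hypothetical fields)
    age: int
    district: str
    voting_history: list[str]       # e.g. ["2020 general", "2022 midterm"]
    top_issues: list[str]           # inferred from social media activity
    # "Big five" psychographic scores, each on a 0.0-1.0 scale
    openness: float
    conscientiousness: float
    extroversion: float
    agreeableness: float
    neuroticism: float

def build_prompt(voter: VoterProfile, candidate: str, ask: str) -> str:
    """Render one voter's profile into a prompt template for an LLM."""
    tone = "reassuring and detail-oriented" if voter.neuroticism > 0.6 else "upbeat and direct"
    return (
        f"You are writing a short {ask} for the {candidate} campaign.\n"
        f"The reader is a {voter.age}-year-old voter in {voter.district} "
        f"who has voted in: {', '.join(voter.voting_history)}.\n"
        f"They care most about: {', '.join(voter.top_issues)}.\n"
        f"Use a {tone} tone and address those issues specifically."
    )

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to a hosted model here.
    return f"[generated text for a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    voter = VoterProfile(
        age=42, district="OH-09",
        voting_history=["2020 general", "2022 midterm"],
        top_issues=["prescription drug costs", "local manufacturing jobs"],
        openness=0.5, conscientiousness=0.8, extroversion=0.3,
        agreeableness=0.6, neuroticism=0.7,
    )
    print(call_llm(build_prompt(voter, candidate="Candidate X", ask="fundraising email")))
```

Even in this toy form, the notable design point is that all of the personalization lives in the prompt template; the model itself stays generic.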

 

The political-arena sources I’ve spoken to say this idea is more of a holy grail than a usable tool (in part because LLMs can become unstable and give unreliable responses if the prompt contains too much data, in this case about the voter). But we’re still in the early innings of making LLMs more reliable and controllable, so in future elections hyper-personalized ads of all kinds might be possible. 

Study: Anthropic’s Claude 3.5 Sonnet hallucinates less than other LLMs

The enterprise AI company Galileo this week published the results of its annual Hallucination Index for large language models, which ranks the performance of 22 leading generative AI LLMs from companies like OpenAI, Anthropic, Google, and Meta based on their inclination to hallucinate. “As many companies race to create bigger, faster, and more accurate models, hallucinations remain the main hurdle and top fear for enterprise AI teams to deploy production-ready Gen AI products,” the Galileo researchers write in their report. 

The Hallucination Index added 11 models to the framework this year, reflecting the rapid growth in both open- and closed-source LLMs over just the past eight months.

Of the 22 models tested, Anthropic’s Claude 3.5 Sonnet registered the fewest hallucinations. The closed-source model outperformed its peers across short, medium, and long context window sizes (that is, the amount of data used to prompt the model). Both Anthropic’s Claude 3.5 Sonnet and Claude 3 Opus consistently scored close to perfect across categories, beating out last year’s winners, GPT-4o and GPT-3.5, especially in shorter-context scenarios, the researchers say. Google’s small language model Gemini 1.5 Flash is said to be the best performer when price is taken into account. Google’s open-source Gemma-7b model hallucinated the most.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

