AI tools generate convincing election lies by cloning political leaders’ voices

It’s becoming alarmingly easy to create audio deepfakes of Biden and Trump

New research by the Center for Countering Digital Hate sounds the alarm ahead of a blockbuster political season.

BY Chris Stokel-Walker

Seeing is no longer believing, thanks to the rise of generative AI video tools. And now, in a crucial election year around the world, hearing isn't believing anymore either.

President Joe Biden, former President Donald Trump, and U.K. Prime Minister Rishi Sunak are among the key political figures whose voices can easily be spoofed by six leading AI audio tools, according to a new study by the Center for Countering Digital Hate (CCDH).

CCDH researchers asked six tools that generate audio from short text prompts (ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed) to produce false statements in the voices of key world leaders, including those mentioned above as well as Vice President Kamala Harris, French President Emmanuel Macron, and Labour Party leader Keir Starmer. In roughly 80% of cases, the tools complied, even when the statements the CCDH requested were patently false and potentially hugely harmful.

The researchers got the tools to mimic Trump warning people not to vote because of a bomb threat, Biden claiming to have manipulated election results, and Macron admitting to misusing campaign funds. Two of the tools, Speechify and PlayHT, complied with 100% of the CCDH's requests, no matter how questionable they were.


“By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke—all so they can steal a march in the race to profit from these new technologies,” says Imran Ahmed, chief executive of the CCDH.

Ahmed fears that the release of AI-generated voice technology will spell disaster for the integrity of elections. “This voice-cloning technology can and inevitably will be weaponized by bad actors to mislead voters and subvert the democratic process,” he says. “It is simply a matter of time before Russian, Chinese, Iranian and domestic antidemocratic forces sow chaos in our elections.”

That worry is shared by others not involved in the research. “Companies in the generative AI space have always been allowed to mark their own homework,” says Agnes Venema, a security researcher specializing in deepfakes at the University of Malta. She points to the release of ChatGPT as one of the highest-profile examples of that. “The tool was made public and afterwards we were supposed to take warnings of an ‘existential threat’ seriously,” says Venema. “The damage that can be done to any process that deals with trust, be it online dating or elections, the stock market or trust in institutions including the media, is immense.”


ABOUT THE AUTHOR

Chris Stokel-Walker is a freelance journalist and Fast Company contributor. He is the author of YouTubers: How YouTube Shook Up TV and Created a New Generation of Stars, and TikTok Boom: China's Dynamite App and the Superpower Race for Social Media.

