Can tech companies protect elections from AI-powered manipulation?
The widespread availability of AI-powered tools that can mass-produce disinformation poses a serious threat to democracy around the world.
This year’s elections are the first in which generative AI tools let individuals produce misinformation at scale as easily as large organizations can. Even as tech companies roll out ways to prevent and detect false content, there are serious concerns that those measures won’t be enough.
Why we care. The impact of AI-generated disinformation is unlikely to be limited to political marketing. Consumers, especially younger ones, are increasingly distrustful of advertising: a recent U.K. study found only 13% of the population trusted ad executives, with the rise of “fake news” and anti-vaccination campaigns among the driving forces. The spread of political misinformation will erode trust on other topics as well.
As the Iowa caucuses kicked off the U.S. primary season, OpenAI became the latest company to take steps to protect voters. The company said people aren’t allowed to use its tools for political campaigning and lobbying, or to create chatbots that impersonate candidates, other real people or local governments.
Restricting election-related queries
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies and improving transparency,” the company said in a blog post.
Last month Google said it would restrict the election-related questions its AI chatbot will answer, while Facebook parent Meta barred campaigns from using its AI advertising tools last year.
OpenAI also said it plans to incorporate digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) — cryptographically signed metadata recording how an image was created and edited — into images generated by DALL-E. Microsoft, Amazon, Adobe and Getty are also working with C2PA on this issue.
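For readers curious what checking those credentials looks like in practice, below is a minimal sketch that shells out to c2patool, the open-source command-line utility published by the C2PA community, to read any Content Credentials embedded in an image. It assumes c2patool is installed and on your PATH; the manifest field names shown reflect current c2patool JSON output and may vary by version.

```python
import json
import subprocess
import sys
from typing import Optional


def read_c2pa_manifest(image_path: str) -> Optional[dict]:
    """Read the C2PA manifest store from a file via the c2patool CLI.

    Assumes the open-source `c2patool` binary is installed; it prints
    the embedded manifest store as JSON, or exits nonzero if the file
    has no credentials or can't be read.
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no Content Credentials found
    return json.loads(result.stdout)


if __name__ == "__main__":
    store = read_c2pa_manifest(sys.argv[1])
    if store is None:
        print("No C2PA Content Credentials found.")
    else:
        # The active manifest records the tool that produced the asset;
        # the exact JSON layout can differ between c2patool versions.
        active = store.get("active_manifest", "")
        claim = store.get("manifests", {}).get(active, {})
        print("Generated by:", claim.get("claim_generator", "unknown"))
```

A signed manifest only proves where an image came from, not whether its contents are true, and credentials can simply be stripped, which is why provenance is one layer of defense rather than a complete answer.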
While these are good actions, the problem is they only apply to those companies’ products, said Chris Penn, co-founder and Chief Data Scientist at TrustInsights.ai.
“You’re going to see generative AI used and abused in frankly unbelievable ways,” said Penn. “We have a whole bunch of state and non-state actors who would love nothing more than to completely screw over the US election process and the tools that you use to build these generative deceptions are freely available. They cannot be regulated because they’re open source.”
However, companies are also taking actions that, given the popularity of their platforms, could mitigate some of the misinformation.
More attribution
OpenAI is banning apps that could discourage voting — by saying a vote was meaningless, for example. It is also directing questions about how and where to vote to CanIVote.org, operated by the National Association of Secretaries of State. The company said it is increasingly providing links and attribution to news reporting to help voters assess the reliability of the generated text.
Google is requiring political ads using significant AI content to have warning labels like:
- “This image does not depict real events.”
- “This video content was synthetically generated.”
- “This audio was computer generated.”
The real solution is educating the public about what genAI can do, said Paul Roetzer, CEO of The Marketing AI Institute.
“Society has to level up understanding what these tools are capable of,” said Roetzer. “The average citizen has no idea that you can build an image or create a video that looks real. They are going to believe the stuff they see online, and you can’t believe anything you see online.”
Unfortunately, it may be too late for that to help with the 2024 elections. It’s important to note that this is a worldwide issue: High-stakes elections are being held in more than 50 nations this year, including the U.K., India, Mexico, Pakistan and Indonesia.