Google To Start Limiting Election-Related Queries In 2024
Google said on Tuesday it will restrict the types of election-related queries to which its chatbot Bard and Search Generative Experience (SGE) can respond.
The global restrictions will take effect by early 2024, in time for elections in India, the United States, and South Africa. Google said it would expand its use of AI to serve voters and campaigns in those elections.
Some of those approaches include policies already in place that are intended to safeguard against manipulated media. During a recent online class hosted by the Mobile Marketing Association, Rex Briggs, co-author of the book “The AI Conundrum,” demonstrated how easily someone can manipulate photographs and dialogue in media.
During the training, Briggs took a scene from the Lionsgate movie “Fall,” whose dialogue was trending toward an “R” rating. He showed how the creators changed the dialogue to reclassify the content and appeal to a broader audience, at one-tenth of the cost and in less than a week.
Google has long-standing policies that inform how the company approaches areas including manipulated media, hate and harassment, incitement to violence, and false claims that could undermine democratic processes.
“For over a decade, we’ve leveraged machine learning classifiers and AI to identify and remove content that violates these policies,” Jasper wrote. “Now, with the recent advances in our Large Language Models (LLMs), we’re experimenting with building faster and more adaptable enforcement systems. Early results indicate that this will enable us to remain nimble and take action even more quickly when new threats emerge.”
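Jasper’s post does not describe how those classifiers are built, so the sketch below is only a generic illustration of the technique she names: a small supervised text classifier, written in Python with scikit-learn, that scores content against a policy. Every example and label in it is invented for the demo and has no connection to Google’s actual systems.

```python
# A loose, hypothetical illustration of the approach described above:
# a supervised text classifier that scores content against a policy.
# This is NOT Google's enforcement system; the examples and labels
# below are invented purely for the demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples (1 = violates policy, 0 = acceptable).
texts = [
    "The election results were faked, do not trust the count",
    "Polls close at 8pm local time on election day",
    "Go harass the candidate outside their home",
    "Here is the candidate's official policy platform",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common baseline for
# text classification, standing in for whatever models Google uses.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new content; a real pipeline would route high-scoring items
# to removal or human review rather than printing a probability.
new_post = "The official results were faked by the government"
prob = model.predict_proba([new_post])[0][1]
print(f"Estimated probability of policy violation: {prob:.2f}")
```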
Jasper also said Google is helping people identify AI-generated content through several tools and policies. Google requires advertisers running election-related content to disclose when their ads include realistic synthetic content.
In the coming months, YouTube will require creators to disclose when they have created realistic altered or synthetic content, and will display a label indicating to viewers when the content they’re watching is synthetic.
An “About this result” feature in Search Generative Experience (SGE) and a double-check feature in Bard help people evaluate whether content across the web can substantiate Bard’s English-language responses. “About this image” in Search helps people assess the credibility and context of images found online. And SynthID, a digital-watermarking tool in beta from Google DeepMind, embeds an imperceptible digital watermark into AI-generated images and audio.
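Google has not published how SynthID embeds its watermark, so the sketch below illustrates only the general idea of an imperceptible digital watermark, using naive least-significant-bit (LSB) embedding in NumPy. This is not SynthID’s method; unlike this toy, SynthID is designed to remain detectable after common edits such as compression.

```python
# Toy illustration only: naive least-significant-bit (LSB) watermarking.
# SynthID's actual technique is proprietary and far more robust.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Write watermark bits into the least-significant bit of pixels."""
    marked = pixels.copy()
    flat = marked.ravel()  # view into the copy, so writes stick
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then set it
    return marked

def read_watermark(pixels: np.ndarray, n_bits: int) -> list:
    """Recover the first n_bits watermark bits from the pixel LSBs."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

# Demo on a random 8x8 grayscale "image"; changing each pixel's LSB
# alters its value by at most 1, which is visually imperceptible.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
watermark = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary marker bits

marked = embed_watermark(image, watermark)
assert read_watermark(marked, len(watermark)) == watermark
print("Watermark recovered from pixel LSBs.")
```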
Google shared the update shortly after AI Forensics released a report finding that Microsoft Copilot, previously the Bing AI chatbot, gave inaccurate answers to one out of every three basic questions about electoral candidates, polls, scandals, and voting in a pair of recent election cycles in Germany and Switzerland. In some cases, the chatbot also misquoted its sources.
The report, “Generative AI and elections: Are chatbots a reliable source of information for voters?”, was published by the European non-profit, which investigates influential and opaque algorithms, together with other organizations in Germany and Switzerland. It questions whether chatbots can provide accurate information about elections.
The chatbot’s inaccuracy was persistent, according to the report, which states that the answers did not improve over time.
Inaccuracies included giving the wrong date for elections, reporting outdated or mistaken polling numbers, listing candidates who had withdrawn from the race as leading contenders, and, in a few cases, inventing controversies about candidates.