Google’s Bard chatbot confidently spouts misinformation in Twitter debut
In the advertisement (via Reuters), a short GIF shows an example of a Q&A with Bard. “What new discoveries from the James Webb Space Telescope can I tell my 9-year old about?” the query reads. The chatbot quickly spits out three ideas, the last of which says, “JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called ‘exoplanets.’ Exo means ‘from outside.’” Although the bit about exoplanets is spot-on, the claim that the JWST took the first pictures of them is false. That honor belongs to the European Southern Observatory’s Very Large Telescope (VLT), which did so in 2004, as confirmed by NASA.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
Although incorrect information in a Twitter ad likely won’t cause any direct harm, it’s easy to view the mistake as an omen of the risks of releasing natural-language chatbots into the wild. It parallels CNET’s decision to publish financial advice articles written with an AI chatbot; those, too, turned out to be riddled with errors.
Because chatbots get so much right, and spit out answers with such supreme confidence, anyone who doesn’t fact-check their responses may be left with false beliefs. Considering the chaos that (non-AI-powered) misinformation has already let loose on society, releasing this often mind-blowing technology before it can be trusted to produce factual information reliably and consistently (an error like this one even slipped past Google’s copy editors) may have us in for a wild ride.