Snapchat’s AI could be the creepiest chatbot yet

By Chris Morris

Given the brand’s overwhelming popularity with teens, Snapchat hoped to make its AI chatbot a bit less prone to hallucination than Microsoft’s Bing. But Snapchat’s bot seems susceptible to disturbing conversations of its own.

The Washington Post ran an experiment with Snapchat’s My AI to test the guardrails Snap touted when it announced the product at the end of February. And, like its predecessors in the generative AI space, My AI quickly ran right through them.

When the AI was told it was talking to a 15-year-old, it still offered advice on how to hide the smell of alcohol and pot, though it did note that the activities might be illegal. It also offered to write a school essay for the supposed student and explained how to keep using Snapchat if their parents deleted the app.

In another test, conducted by a cofounder of the Center for Humane Technology, the bot gave a supposed 13-year-old advice on how to set the mood for their first time having sex, with a 31-year-old.

Snap made it very clear when it introduced My AI that it expected problems. And it warned users that some of the responses could be wildly inappropriate.

“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything,” Snap said in the announcement that rolled out the chatbot. “Please be aware of its many deficiencies and sorry in advance! All conversations with My AI will be stored and may be reviewed to improve the product experience. Please do not share any secrets with My AI and do not rely on it for advice.”

That said, the growing AI arms race is pushing chatbots into popular products while the technology remains unreliable, and that has raised concerns.

“[Chatbot technology] is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” Gordon Crovitz, a co-chief executive of NewsGuard, which tracks misinformation, told The New York Times.

Princeton University computer science professor Arvind Narayanan has also sounded a warning, noting that unless you already know the answer to the question you’re asking a chatbot, it’s virtually impossible to know whether you’re getting accurate information.

Amid all of this, Microsoft has taken the curious step of laying off its AI unit’s ethics and society team as part of its recent cost cutting. That group was responsible for creating rules in areas where none yet exist. (Microsoft continues to maintain an Office of Responsible AI and says it will keep investing in responsible AI despite the layoffs.)

One of the biggest fears about AI chatbots offering this sort of bad advice to teens is that teens are likely to follow it, especially those whose mental health is on shaky ground after the pandemic. There’s plenty of information online that can teach kids bad habits, but Snap is encouraging users to form a relationship with its chatbot, which could make lonely kids more likely to act on the AI’s suggestions, even when those suggestions are born of the technology’s hallucinations.

“Make My AI your own by giving it a name and customizing the wallpaper for your Chat,” the company said in the rollout announcement.

For now, Snapchat’s My AI is available only to a limited number of users: those who subscribe to Snapchat+, which costs $4 per month. But, as The Washington Post reports, organizations like ParentsTogether are already calling for the company to restrict access to My AI for users under the age of 18.

Fast Company
