Microsoft grounds its AI chat bot after it learns racism
The young software clearly needs some life lessons.
Microsoft’s Tay AI is youthful beyond just its vaguely hip-sounding dialogue — it’s overly impressionable, too. The company has grounded its Twitter chat bot (that is, temporarily shut it down) after people taught it to repeat conspiracy theories, racist views and sexist remarks. We won’t echo them here, but they involved 9/11, GamerGate, Hitler, Jews, Trump and less-than-respectful portrayals of President Obama. Yeah, it was that bad. The account is still visible as we write this, but the offending tweets are gone; Tay has gone to “sleep” for now.
It’s not certain how Microsoft will teach Tay better manners, although it seems like word filters would be a good start. The company tells Business Insider that it’s making “adjustments” to curb the AI’s “inappropriate” remarks, so it’s clearly aware that something has to change in its machine learning algorithms. Frankly, though, this kind of incident isn’t a shock — if we’ve learned anything in recent years, it’s that leaving something completely open to input from the internet is guaranteed to invite abuse.
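To give a sense of what even a bare-bones word filter looks like, here’s a minimal sketch in Python. The blocklist terms and the fallback reply are hypothetical placeholders for illustration, not anything Microsoft has described about Tay’s internals.

```python
# Minimal sketch of a keyword-based reply filter (illustrative only;
# the blocklist and fallback are hypothetical, not Microsoft's approach).
BLOCKLIST = {"blocked_term_1", "blocked_term_2"}  # placeholder terms

def is_safe(reply: str) -> bool:
    """Return False if the candidate reply contains any blocked term."""
    words = reply.lower().split()
    return not any(term in words for term in BLOCKLIST)

def filter_reply(reply: str, fallback: str = "Let's talk about something else.") -> str:
    """Post the generated reply only if it passes the filter; otherwise use a fallback."""
    return reply if is_safe(reply) else fallback

if __name__ == "__main__":
    print(filter_reply("I love puppies"))            # passes the filter
    print(filter_reply("this is a blocked_term_1"))  # replaced by the fallback
```

A naive blocklist like this is easy to evade with misspellings, paraphrases and context-dependent abuse, which is why it could only ever be a first step rather than a fix for the underlying learning problem.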
Update: A Microsoft spokesperson has provided the statement that BI received. You can read the whole thing below.
“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”