
Meta unleashes BlenderBot 3 upon the internet, its most competent chat AI to date
Pinned August 26, 2022

Microsoft grounds its AI chat bot after it learns racism

The young software clearly needs some life lessons.

Jon Fingas

Microsoft’s Tay AI is youthful beyond just its vaguely hip-sounding dialogue — it’s overly impressionable, too. The company has grounded its Twitter chat bot (that is, temporarily shutting it down) after people taught it to repeat conspiracy theories, racist views and sexist remarks. We won’t echo them here, but they involved 9/11, GamerGate, Hitler, Jews, Trump and less-than-respectful portrayals of President Obama. Yeah, it was that bad. The account is visible as we write this, but the offending tweets are gone; Tay has gone to “sleep” for now.

It’s not certain how Microsoft will teach Tay better manners, although it seems like word filters would be a good start. The company tells Business Insider that it’s making “adjustments” to curb the AI’s “inappropriate” remarks, so it’s clearly aware that something has to change in its machine learning algorithms. Frankly, though, this kind of incident isn’t a shock — if we’ve learned anything in recent years, it’s that leaving something completely open to input from the internet is guaranteed to invite abuse.

Update: A Microsoft spokesperson has provided the statement that BI received. You can read the whole thing below.

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics.
