Did AI Teach YouTube Search Feature To Autocomplete Disturbing Child Abuse Terms?

by Laurie Sullivan @lauriesullivan, November 28, 2017

A week after YouTube published a blog post outlining its tougher approach to protecting kids, the company is investigating reports that the autocomplete feature in its search engine suggested disturbing child abuse terms.

The latest blow to Google’s online video service comes days after brands like Hewlett-Packard, Mars, Lidl and Adidas pulled their advertising from Google and YouTube, alleging that predatory comments were found near videos of children. Some suggest the terms could be giving YouTube’s AI a bad lesson in search.

“On Sunday our teams were alerted to this profoundly disturbing autocomplete result and we worked to quickly remove it as soon as we were made aware,” said a YouTube spokesperson. “We are investigating this matter to determine what was behind the appearance of this autocompletion.”

YouTube estimates that it has removed ads from 2 million videos and more than 50,000 channels that featured disturbing content aimed at kids. Some of that content exploited children in the videos.

Malicious acts that teach artificial intelligence to rank disturbing content higher in search results could become the next challenge for companies like Google, Microsoft and others whose web crawlers aggregate and index content for people searching for information across the internet.

Autocomplete algorithms learn from what people search for on the site. One report suggests users typed “how to have” and the autocomplete feature in YouTube’s search engine completed the phrase with “s*x with your kids.”
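For readers curious about the mechanics, here is a minimal Python sketch of a frequency-ranked autocomplete index. The class name and sample queries are illustrative, not YouTube’s actual system; it simply shows why a burst of coordinated searches for a phrase could push that phrase into the suggestions.

```python
from collections import Counter

class AutocompleteIndex:
    """Toy prefix index: suggestions are ranked purely by how often users
    have submitted each query, the signal this article says can be gamed."""

    def __init__(self, max_suggestions=5):
        self.max_suggestions = max_suggestions
        self.query_counts = Counter()

    def record_query(self, query):
        # Every submitted search nudges that exact phrase up the rankings.
        self.query_counts[query.lower().strip()] += 1

    def suggest(self, prefix):
        prefix = prefix.lower().strip()
        matches = [(q, n) for q, n in self.query_counts.items()
                   if q.startswith(prefix)]
        # The most frequently searched completions surface first, so heavy,
        # coordinated searching for an abusive phrase could push it to the top.
        matches.sort(key=lambda item: item[1], reverse=True)
        return [q for q, _ in matches[:self.max_suggestions]]


index = AutocompleteIndex()
for q in ["how to have a lucid dream"] * 40 + ["how to have clear skin"] * 25:
    index.record_query(q)
print(index.suggest("how to have"))
# ['how to have a lucid dream', 'how to have clear skin']
```

Real autocomplete systems layer filters and quality signals on top of raw query frequency, which is presumably what failed or was bypassed here.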

Admittedly, Search Marketing Daily could not replicate the results. Typing “how to have” into YouTube’s search box served up choices such as “a lucid dream,” “your first kiss,” “good handwriting” and “clear skin.”

Jonathan Kagan, senior director of search and biddable media at MARC USA Results:Digital, said people are uploading inappropriate content and mislabeling it, which was part of the issue in March, when questionable and hateful content appeared next to brand advertisements.

“It’s quite easy to mislabel things,” said Kagan. “You just label it whatever you want.”

Kagan suggests there are a few instabilities at work. Beyond mislabeling, he said, YouTube changed its monetization rules to require that a video have 10,000 views before it can carry ads, which helped the company find and remove mislabeled content. It also comes down to advertisers consistently reviewing their reports to catch such errors. Many do not, he said.

“All it takes is one brand to announce their findings and 500 to pull the plug,” Kagan said. “There are possible algorithm issues that help people proceed from one inappropriate video to the next because the algorithm keeps learning from that activity.”
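Kagan’s point about the algorithm learning from activity can be illustrated with a similarly hypothetical sketch: a toy co-watch counter, not YouTube’s recommender, with invented function names and video IDs. It only learns which video users played next, so clusters of inappropriate videos can reinforce one another.

```python
from collections import Counter, defaultdict

# Toy "watch next" table keyed on the previously watched video.
transitions = defaultdict(Counter)

def record_watch(previous_video, next_video):
    transitions[previous_video][next_video] += 1

def recommend_next(video, k=3):
    # The most commonly watched follow-ups float to the top, regardless of
    # whether the content is appropriate.
    return [v for v, _ in transitions[video].most_common(k)]

record_watch("video_a", "video_b")
record_watch("video_a", "video_b")
record_watch("video_a", "video_c")
print(recommend_next("video_a"))  # ['video_b', 'video_c']
```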

Kagan said it’s similar to what happened during the presidential election, when Donald Trump accused Google of being pro-Hillary Clinton because the autocomplete feature on google.com suggested less flattering completions when users typed “Donald Trump” into the search engine than when they typed “Hillary Clinton.”

MediaPost.com: Search Marketing Daily
