Machine-Learning Search Algorithms Can Learn From Human Bias
by Laurie Sullivan (@lauriesullivan), August 29, 2016
Computers that learn human languages through machine-learning algorithms will also learn human biases, according to research from computer scientists at Princeton. In fact, search engines clearly exhibit inherited human-like prejudices, behavior that advertisers should become more familiar with.
Advertisers target advertisements based on human behavior. This is not a new phenomenon. Researchers have also published findings on how search results learn from human bias, such as work on racial bias by a Harvard professor and a paper from Brazil's Universidade Federal de Minas Gerais. Now there's one more piece of evidence from researchers at Princeton University.
What's new is considering how marketers will target consumers based on the ability of search results and ad-targeting platforms to learn from human bias. Machine-learning algorithms, the same kind that power search results on Google, Bing and Yahoo, as well as voice search platforms like Siri, Cortana and Google Now, learn by example. In fact, these companies tout their respective platforms' accomplishments in their ability to learn.
Researchers from Princeton describe how they used a common language-learning algorithm, similar to those found in search platforms, to infer associations between English words. Using the computer model, the authors replicated the results of each implicit-bias study they tested.
For example, female names like Amy, Lisa and Sarah were more often associated with home and family, whereas male names such as John, Paul and Mike were more often associated with career.
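For readers curious how such associations are measured, the sketch below is a minimal, hypothetical illustration of the general idea behind word-embedding association tests: compare how close a name's vector sits to a set of "family" words versus a set of "career" words using cosine similarity. The tiny hand-made vectors and word lists are placeholders for illustration only, not the researchers' data or code; in the actual study, the vectors come from a model trained on large text corpora.

```python
# Illustrative sketch only: the word vectors below are small placeholders.
# In practice, vectors come from embeddings trained on large text corpora,
# and any bias emerges from the training data rather than being hand-coded.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association_score(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    A positive score means the word leans toward set A (here, family)."""
    sim_a = np.mean([cosine(word_vec, v) for v in attr_a])
    sim_b = np.mean([cosine(word_vec, v) for v in attr_b])
    return sim_a - sim_b

# Placeholder 3-dimensional vectors; a real embedding model would use
# hundreds of dimensions learned from text.
vectors = {
    "amy":    np.array([0.9, 0.1, 0.2]),
    "john":   np.array([0.1, 0.9, 0.2]),
    "home":   np.array([0.8, 0.2, 0.1]),
    "family": np.array([0.9, 0.2, 0.2]),
    "career": np.array([0.2, 0.8, 0.1]),
    "salary": np.array([0.1, 0.9, 0.3]),
}

family_words = [vectors["home"], vectors["family"]]
career_words = [vectors["career"], vectors["salary"]]

for name in ("amy", "john"):
    score = association_score(vectors[name], family_words, career_words)
    print(f"{name}: family-vs-career association = {score:+.3f}")
```

With embeddings learned from real web text rather than toy vectors, scores of this kind tend to mirror the patterns documented in implicit-association studies, which is what the Princeton team reports.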
“You can’t instruct the platform not to be biased,” according to Quartz, which revealed the research. Even if researchers could train the platform not to be biased, use by many people within a given society would change the results over time.
There are, however, much broader issues at stake. The researchers show for the first time that if AI is to exploit, via our language, the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. “In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful,” according to the research.
MediaPost.com: Search Marketing Daily