Google’s DeepMind: When AI Can Contemplate Competition Or Cooperation
by Laurie Sullivan, Staff Writer @lauriesullivan, February 15, 2017
It shouldn’t be difficult for Google to build an automated, programmatic ad platform on artificial intelligence and machine learning, one that draws in data from across a network of services to serve the perfect advertisement and entice consumers to pull the trigger on a purchase.
What’s more difficult is teaching those AI learning algorithms to become socially responsible. Yep, that’s correct. Along with the promise of serving the right ad based on tons of data, Alphabet’s DeepMind team of developers has been contemplating the idea of the social dilemma, which Wikipedia defines as a situation in which an individual profits from selfishness unless everyone chooses the selfish alternative, in which case the whole group loses.
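The textbook illustration of that definition is the two-player Prisoner’s Dilemma. A minimal worked example in Python, using the usual illustrative payoff values rather than anything from DeepMind’s research, makes the tension concrete: defecting always pays more for the individual, yet if both players defect, both end up worse off than if both had cooperated.

```python
# A social dilemma in miniature: the classic Prisoner's Dilemma payoff matrix.
# Payoffs are (row player, column player); "C" = cooperate, "D" = defect.
# The numbers are standard textbook values, not figures from DeepMind's paper.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do reasonably well
    ("C", "D"): (0, 5),  # the lone cooperator is exploited
    ("D", "C"): (5, 0),  # the lone defector profits most
    ("D", "D"): (1, 1),  # mutual defection: worse for both than cooperating
}

# Selfishness pays for the individual...
assert PAYOFFS[("D", "C")][0] > PAYOFFS[("C", "C")][0]
# ...but if everyone chooses the selfish alternative, everyone loses out.
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
```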
DeepMind researchers have been working with two games to test whether neural-network-based agents are more likely to compete or to cooperate. They outline the concept and research in a paper titled Multi-agent Reinforcement Learning in Sequential Social Dilemmas.
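The paper itself trains deep reinforcement-learning agents in two-dimensional gridworld games, but the core setup, independent learners that each optimize only their own reward while sharing an environment, can be sketched with two stateless Q-learners playing the payoff matrix above. Everything below (the learning rule, the hyperparameters) is an illustrative simplification, not DeepMind’s implementation.

```python
import random

# Same Prisoner's Dilemma payoffs as in the sketch above.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ACTIONS = ("C", "D")
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000  # illustrative values only

def greedy(q):
    """Return the action with the highest estimated value, breaking ties randomly."""
    best = max(q.values())
    return random.choice([a for a, v in q.items() if v == best])

def choose(q):
    """Epsilon-greedy: mostly exploit the current estimates, occasionally explore."""
    return random.choice(ACTIONS) if random.random() < EPSILON else greedy(q)

# Each agent learns independently and sees only its own reward.
q_red = {a: 0.0 for a in ACTIONS}
q_blue = {a: 0.0 for a in ACTIONS}

for _ in range(EPISODES):
    a_red, a_blue = choose(q_red), choose(q_blue)
    r_red, r_blue = PAYOFFS[(a_red, a_blue)]
    # Stateless Q-learning update: nudge the chosen action's value toward the reward.
    q_red[a_red] += ALPHA * (r_red - q_red[a_red])
    q_blue[a_blue] += ALPHA * (r_blue - q_blue[a_blue])

print("red has learned to prefer:", greedy(q_red))
print("blue has learned to prefer:", greedy(q_blue))
```

Run as written, both learners typically drift toward defection, since defecting earns more regardless of what the other agent does. The point of the sequential games DeepMind studies is that once the dilemma unfolds over time and space, whether agents end up cooperating depends on the environment rather than landing on defection every time.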
The hope is that the research will help AI systems learn to work better with other machines in situations where the data isn’t perfect, whether in transport networks or stock markets, reports Bloomberg Technology. But what about ad-serving networks, in those instances when compromises are required?
In one of the more interesting examples, the researchers “characterize how learned behavior in each domain changes as a function of environmental factors including resource abundance.” The experiments show how conflict can emerge from competition over shared resources and shed light on how the sequential nature of real-world social dilemmas affects cooperation, the researchers wrote in the paper.
In the first game, two AI agents, one red and the other blue, were tasked with gathering green apples, as described in the blog post. Each had the option of tagging the other, which would remove the tagged player from the game. The researchers ran the game thousands of times. The two AI agents were willing to cooperate as long as there was an abundance of apples, but once the apples became scarce, the agents became more aggressive.
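DeepMind’s Gathering game is a 2D gridworld with learned policies; the heavily simplified, non-spatial sketch below only mimics the mechanics the article describes, with an apple respawn rate standing in for abundance. The class name, the hand-coded policies, and every number are illustrative assumptions, not the paper’s implementation.

```python
import random

class MiniGathering:
    """A drastically simplified, non-spatial stand-in for the Gathering game.

    Two agents each choose "gather" or "tag" every step. Apples respawn with
    probability `respawn`, a tagged agent sits out `timeout` steps, and
    collecting an available apple is worth +1.
    """

    def __init__(self, respawn=0.5, timeout=5, steps=1000):
        self.respawn, self.timeout, self.steps = respawn, timeout, steps

    def run(self, policy_red, policy_blue):
        apples = 0
        frozen = {"red": 0, "blue": 0}   # steps each agent must still sit out
        score = {"red": 0, "blue": 0}
        for _ in range(self.steps):
            if random.random() < self.respawn:
                apples += 1              # abundance is set by the respawn rate
            for me, other, policy in (("red", "blue", policy_red),
                                      ("blue", "red", policy_blue)):
                if frozen[me] > 0:       # a tagged agent is out of the game
                    frozen[me] -= 1
                    continue
                action = policy(apples)
                if action == "tag":
                    frozen[other] = self.timeout
                elif apples > 0:         # "gather" only pays if an apple exists
                    apples -= 1
                    score[me] += 1
        return score

# Two hand-coded policies: a peaceful gatherer, and one that tags whenever
# apples run out, loosely mirroring the aggression the researchers observed.
def peaceful(apples):
    return "gather"

def aggressive(apples):
    return "gather" if apples > 0 else "tag"

print("abundant:", MiniGathering(respawn=0.8).run(peaceful, aggressive))
print("scarce:  ", MiniGathering(respawn=0.1).run(peaceful, aggressive))
```

The aggression here is scripted rather than learned; in the actual study, the deep reinforcement-learning agents themselves learned to tag more often as apples became scarce, which is the behavior described above.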
How do you think that might play out in an ad-serving platform, even when AdWords doesn’t have all the information to make an informed decision?
MediaPost.com: Search Marketing Daily