Yale researchers say social media’s outrage machine has the biggest influence on moderate groups
A new study out of Yale University suggests the reason your Facebook and Twitter feeds are now laden with scathing political diatribes and lengthy personal commentary is that we’ve been subtly trained to post them, through a system of rewards powered by “likes” and “shares.” Simply put, because content with “expressions of moral outrage” is more popular, we publish more of it.
For the study, the team of researchers built machine-learning software capable of identifying moral outrage in Twitter posts, then used it to trawl nearly 13 million tweets from over 7,000 users. After tracking the users’ pages over time, they found that those who racked up more “likes” or “retweets” after expressing outrage were more likely to keep doing so in later posts. The finding was subsequently backed up by controlled behavioral experiments the team conducted.
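The article describes that pipeline only at a high level: classify each tweet as outrage or not, then relate the feedback a user’s outrage tweets received to how often that user expressed outrage later. The sketch below is a rough illustration of that idea, not the researchers’ actual model; the toy training examples, the simple bag-of-words classifier, and the helper names (`is_outrage`, `outrage_reinforcement`) are all placeholder assumptions.

```python
# Illustrative sketch only: a toy outrage classifier plus a simple check of whether
# users whose outrage tweets earned more feedback go on to post more outrage.
# All data, features, and names below are placeholders, not the study's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.stats import pearsonr

# 1. Train a toy outrage classifier on hand-labeled tweets (placeholder data).
labeled_texts = [
    "This policy is an absolute disgrace and everyone involved should be ashamed",
    "Lovely weather for a walk in the park today",
]
labeled_outrage = [1, 0]  # 1 = moral outrage, 0 = neutral

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(labeled_texts), labeled_outrage)

def is_outrage(text):
    # A real system would use a far richer model and a much larger training set.
    return clf.predict(vectorizer.transform([text]))[0] == 1

# 2. Relate feedback on a user's earlier outrage tweets to their later outrage rate.
def outrage_reinforcement(users):
    """users: user_id -> chronological list of (tweet_text, likes_plus_retweets)."""
    early_feedback, later_rate = [], []
    for tweets in users.values():
        half = len(tweets) // 2
        rewarded = [fb for text, fb in tweets[:half] if is_outrage(text)]
        later = [is_outrage(text) for text, _ in tweets[half:]]
        if rewarded and later:
            early_feedback.append(sum(rewarded) / len(rewarded))   # avg feedback on early outrage
            later_rate.append(sum(later) / len(later))             # share of later tweets that are outrage
    # A positive correlation would mirror the reported pattern: more feedback on
    # past outrage, more outrage in future posts.
    r, _ = pearsonr(early_feedback, later_rate)
    return r
```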
But surprisingly, the finding was not confined to political echo chambers, where you would naturally expect slanted voices to grow louder. It’s true, the researchers say, that users with more politically radical networks spewed more outrage overall than those with more center-of-spectrum networks. But the rewards system had the greatest influence on the latter: “People with politically moderate friends and followers are more sensitive to social feedback that reinforces their outrage expressions,” Molly Crockett, a Yale associate professor, said in a statement. “This suggests a mechanism for how moderate groups can become politically radicalized over time . . . [through] feedback loops that exacerbate outrage.”
While moral outrage can be a tremendous force for good, driving movements from civil rights to animal welfare to campaign finance reform, it can also fuel political polarization and disinformation, and even lead to harassment of targeted minority groups. Social media mega-platforms like Twitter, Facebook, and Reddit have come under fire in recent years as critics claim they’ve let extremism fester. That’s led to a burgeoning push for regulation. While the companies argue they’re merely passive conduits for conversations that would otherwise take place elsewhere, this study suggests the algorithms they employ, which reward users for posting content that draws engagement, could be playing a more active role.
The study’s authors took no stance on whether amplifying moral outrage in society is a net positive or negative; it’s hardly black and white. It is, however, “a clear consequence of social media’s business model, which optimizes for user engagement,” said Crockett. “Given that moral outrage plays a crucial role in social and political change, we should be aware that tech companies, through the design of their platforms, have the ability to influence the success or failure of collective movements . . . [they] change how users react to political events over time.”