MIT study: Correcting falsehoods on Twitter makes misinformation worse in surprising ways
“What we found was not encouraging,” says coauthor Mohsen Mosleh, a research affiliate at MIT’s Sloan School of Management, whose study appeared this week. Polite corrections to factually inaccurate tweets set off an avalanche of further misinformation and toxic language from the users who were corrected. “They retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language.”
The researchers targeted 2,000 Twitter users from a range of political persuasions who had tweeted any of 11 overtly false news articles. (Sample article topic: the claim that Donald Trump once evicted a disabled veteran over his therapy dog.) After an extremely polite correction in the thread, which included a link to factually accurate information, the tweeters’ accuracy declined further, and it declined even more when the correction came from someone matching their political leanings. Because the backlash was at least as strong when the corrector was a fellow partisan, hostility toward political opponents cannot be what is driving the tweeters’ responses.
“We might have expected that being corrected would shift one’s attention to accuracy,” says coauthor David G. Rand, a professor at the MIT Sloan School of Management. But no! “Instead, it seems that getting publicly corrected by another user shifted people’s attention away from accuracy—perhaps to other social factors such as embarrassment.”
A March study in Nature showed that private, gentle, and neutral reminders about accuracy can have some positive effect by “subtly shifting attention to accuracy.” Because, believe it or not, the tweeters pumping out misinformed tweets are generally focused on factors other than accuracy. More research is needed before firm recommendations can be made, but in the meantime, Rand suggests that a “post about the importance of accuracy in general without debunking or attacking specific posts” may help nudge friends toward accuracy and improve their future posts.