10 ways social media platforms can fight election disinformation
As the U.S. presidential election approaches, social media platforms have been feverishly introducing new measures to curb disinformation. Twitter announced a suspension of all political advertising, warning labels on (and deamplification of) tweets containing misleading information, and limits on how users can retweet. Facebook also suspended political advertising (though much later), and in September it began taking down and labeling posts that try to dissuade people from voting. Both platforms have started aggressively banning QAnon content. They have also removed or labeled some posts by President Trump containing false information and declared that they will take down any content that wrongly claims election victory. YouTube, despite being a key platform for misinformation, has remained fairly quiet.
Nor is the problem limited to political campaigns: A recent report from researchers at Avaaz found that Facebook is “an epicentre of coronavirus misinformation. . . . Of the 41% of this misinformation content that remains on the platform without warning labels, 65% had been debunked by partners of Facebook’s very own fact-checking program, [and] over half (51%) of non-English misinformation content had no warning labels.”
The battle against misinformation on social media will of course continue beyond the 2020 U.S. presidential election. With more than 500 million tweets a day, 500 million Facebook stories shared daily, and 30,000 hours of new content uploaded to YouTube every hour, ensuring the accuracy of user-generated content on social media platforms is a herculean task. But additional measures could limit disinformation, make echo chambers more porous, and promote high-quality, reliable information that encourages constructive interactions.
Slow the production of false information
Fact-check and remove false information
Limit the reach
While freedom of speech is likely to remain near-absolute in the U.S. (Europe introduced exceptions to it long ago), a more realistic way to decrease disinformation is to limit the freedom of reach.
There is so much more that can be done that won’t affect the upcoming elections but will, in the long term, help curb gangrenous misinformation: further invest in automated vetting systems, develop crowdsourced fact-checking, implement “scan-and-suggest” features, introduce more context, systematically correct the record, open datasets and algorithms to the scrutiny of researchers and NGOs, properly pay traditional media for high-quality content, and more. Beyond all of these measures, the most important action social media platforms can take is to push back on the noxious notion that facts and truths are political matters: Stop pretending to be neutral bystanders and stand up for truth.
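To make one of these measures concrete, here is a minimal sketch of what a “scan-and-suggest” feature might look like: before a post is published, its text is compared against a database of already-debunked claims, and a matching fact-check is suggested to the user. Everything here is an assumption for illustration: the claim list, the URLs, the similarity measure, and the threshold are hypothetical, not any platform’s actual implementation.

```python
# Hypothetical "scan-and-suggest" sketch: compare a draft post against a toy
# database of debunked claims and suggest a fact-check link before posting.
# The claims, URLs, similarity measure, and threshold are all illustrative
# assumptions, not a real platform's system.
from difflib import SequenceMatcher

# Toy database mapping debunked claims to fact-check pages (hypothetical).
DEBUNKED = {
    "mail-in ballots are counted twice": "https://example.org/factcheck/1",
    "drinking bleach cures the virus": "https://example.org/factcheck/2",
}

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two strings, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def scan_and_suggest(draft: str, threshold: float = 0.6):
    """Return (claim, fact_check_url) for the closest debunked claim,
    or None if nothing in the database is similar enough."""
    best = max(DEBUNKED, key=lambda claim: similarity(draft, claim))
    if similarity(draft, best) >= threshold:
        return best, DEBUNKED[best]
    return None

if __name__ == "__main__":
    match = scan_and_suggest("Heard that mail-in ballots get counted twice?")
    if match:
        print(f"Before you post: this resembles a debunked claim. See {match[1]}")
```

A production system would of course replace the string-similarity heuristic with semantic matching at scale, but the interaction pattern is the point: nudge the user with context before publication rather than moderate after the fact.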
Maelle Gavet has worked in technology for 15 years. She served as CEO of Ozon, an executive vice president at Priceline Group, and chief operating officer of Compass. She is the author of Trampled by Unicorns: Big Tech’s Empathy Problem and How to Fix It.