10 ways social media platforms can fight election disinformation

By Maelle Gavet

As the U.S. presidential election approaches, social media platforms have been feverishly introducing new measures to curb disinformation. Twitter announced the suspension of all political advertising, warning labels on (and deamplification of) tweets containing misleading information, and limits on how users can retweet. Facebook also announced the suspension of political advertising (though much later), and in September it began taking down or labeling posts that tried to dissuade people from voting. Both platforms have started aggressively banning QAnon accounts and content. They have also removed or labeled some posts by President Trump containing false information and declared that they would take down any content attempting to wrongly claim election victory. YouTube, despite being a key conduit for misinformation, has remained fairly quiet.

These measures continue to feel like too little, too late, mere window dressing at a time when a growing number of U.S. adults get their political news primarily through social media. Fifty-five percent of Americans say it is harder to identify misleading information about the 2020 presidential campaign than it was in 2016, and 75 percent have little confidence in Facebook, Twitter, Google, and YouTube to prevent the misuse of their platforms to influence the election.

Consumer doubt is not limited to information related to political campaigns: A recent report from researchers at Avaaz found that Facebook is “an epicentre of coronavirus misinformation. . . . Of the 41% of this misinformation content that remains on the platform without warning labels, 65% had been debunked by partners of Facebook’s very own fact-checking program, [and] over half (51%) of non-English misinformation content had no warning labels.”

The battle against misinformation on social media will of course continue beyond the 2020 U.S. presidential election. With more than 500 million tweets sent a day, 500 million Facebook stories shared daily, and 30,000 hours of new content uploaded to YouTube every hour, ensuring the accuracy of user-generated content on social media platforms is a herculean task. But additional measures could limit disinformation, make echo chambers more porous, and promote high-quality, reliable information that encourages constructive interactions.

Slow the production of false information

    Stop political ads now. Following Twitter's lead, the major platforms could implement the equivalent of a campaign silence for advertising, which may be the only option for decreasing the volume of disinformation until the final results are announced.

    Improve the initial vetting of all paid content. Sponsored posts, like ads, must be subject to stringent vetting: Organizations should not be able to pay to place false information in Facebook and YouTube feeds.

    Stop monetization. A large portion of false information is created for purely financial purposes. Social media platforms need to cut off the ad-revenue oxygen for the creators of this content, disabling AdSense or similar automatic payment functionality for any content flagged as false.

Fact-check and remove false information

    Implement a stricter system for verifying profiles. A large volume of false information is created and propagated by a few thousand accounts shielded from any accountability by the anonymity all social media platforms offer. Removing that shield would create transparency.

    Stop hosting purveyors of disinformation. Systematically and aggressively take down posts and tweets that contain false information, no matter the topic or the origin. Suspend any account that posts proven false information more than three times despite warnings, as sketched below.
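
To make the three-strikes rule concrete, here is a minimal sketch in Python, assuming a hypothetical moderation pipeline that reports each fact-checker verdict per account; the class name, method name, and threshold are illustrative, not any platform's actual API.

```python
from collections import defaultdict

MAX_STRIKES = 3  # suspend after posting proven false information "more than three times"

class StrikePolicy:
    """Per-account strike counter for fact-check verdicts (all names hypothetical)."""

    def __init__(self):
        self.strikes = defaultdict(int)  # account_id -> confirmed false posts
        self.suspended = set()

    def record_false_post(self, account_id):
        """Register one post proven false by fact-checkers; return the action to take."""
        if account_id in self.suspended:
            return "already_suspended"
        self.strikes[account_id] += 1
        if self.strikes[account_id] > MAX_STRIKES:
            self.suspended.add(account_id)
            return "suspend"
        return "warn"  # each strike up to the threshold is accompanied by a warning
```

In this sketch, an account receives a warning with each of its first three confirmed false posts and is suspended on the fourth, matching the "more than three times despite warnings" rule above.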

Limit the reach

While freedom of speech is likely to remain absolute in the U.S. (Europe introduced exceptions to it long ago), a more realistic way of decreasing disinformation would be to limit the freedom of reach.

    Deactivate microtargeting options for any political content. The ability to create millions of variations of an ad and to show each one only to a small group of people has opened the door to untraceable manipulation of voters.

    Follow “slow design” principles. Websites in general, and social media platforms in particular, have been optimized to increase the speed and volume of engagement. Introducing friction, like the prompts Twitter started implementing a few weeks ago, and expanding warning labels so that users pause and think about what they’re sharing, and with whom, would limit the creation and rapid spread of harmful misinformation.

    Implement virality circuit breakers. In 2018, an MIT study found that false news spread six times faster than the truth on Twitter and that falsehoods were 70% more likely to be retweeted. Any news spreading too rapidly would be flagged and deprioritized in newsfeeds, and would disappear from trending-topics lists and other algorithmically promoted avenues, while undergoing a thorough priority fact-check. (A sketch of how such a breaker might work appears after this list.)

    Stop autoplay. Algorithms that allow social media platforms to automatically push the next related video have been shown to be radicalizing forces for users.

    Stop promoting private groups until results are announced. According to Facebook’s own internal research, 64 percent of all extremist group joins are due to its recommendation tools. A major rethink of the way private groups are promoted and monitored is necessary so that people are not pushed into extremist groups.
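
Returning to the circuit-breaker idea above: here is a minimal sketch of how one could be wired, assuming a platform samples a cumulative share counter for each post at regular intervals. The rate threshold, window, and names are hypothetical illustrations, not any platform's real system.

```python
import time
from collections import deque

class ViralityCircuitBreaker:
    """Trips when a post's share rate exceeds a threshold, so the post can be
    down-ranked and queued for priority fact-checking (parameters hypothetical)."""

    def __init__(self, max_shares_per_minute=1000.0, window_seconds=600):
        self.max_rate = max_shares_per_minute
        self.window = window_seconds
        self.samples = deque()  # (timestamp, cumulative share count) pairs
        self.tripped = False

    def observe(self, cumulative_shares, now=None):
        """Record a share-count sample; return True once the breaker has tripped."""
        now = time.time() if now is None else now
        self.samples.append((now, cumulative_shares))
        # Keep only samples inside the sliding window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()
        if not self.tripped and len(self.samples) >= 2:
            (t0, s0), (t1, s1) = self.samples[0], self.samples[-1]
            minutes = max((t1 - t0) / 60.0, 1e-9)
            if (s1 - s0) / minutes > self.max_rate:
                # Trip: deprioritize in feeds, pull from trending, queue a fact-check.
                self.tripped = True
        return self.tripped
```

While the breaker is tripped, the post would be pulled from trending lists and down-ranked in feeds until fact-checkers clear it, after which it could be reset. Like its namesake on a stock exchange, it slows trading in attention rather than halting speech.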

There is so much more that can be done that won’t affect the upcoming elections but will, in the long term, help curb gangrenous misinformation: further invest in automated vetting systems, develop crowdsourced fact-checking, implement “scan-and-suggest” features, introduce more context, systematically correct the record, open datasets and algorithms to the scrutiny of researchers and NGOs, properly pay traditional media for high-quality content, and more. Beyond all of these measures, the most important action social media platforms can take is to push back on the noxious notion that facts and truths are political matters: Stop pretending to be neutral bystanders and stand up for truth.


Maelle Gavet has worked in technology for 15 years. She has served as CEO of Ozon, as an executive vice president at Priceline Group, and as chief operating officer of Compass. She is the author of Trampled by Unicorns: Big Tech’s Empathy Problem and How to Fix It.
