Elon Musk’s Twitter will create a toxic ripple effect across social media

By Katie A. Paul

Much has been written about Elon Musk’s reckless moves to dismantle the system Twitter built to keep its platform safe, threatening to turn it into precisely the kind of “hellscape” that he vowed to avoid. But the concern actually stretches further, extending to much larger platforms with far greater influence on public debate.

For years, Twitter has played an outsize role in setting the standards for social media as a whole, creating competition among tech companies to make their platforms safe. Twitter is far smaller than other tech companies; Facebook CEO Mark Zuckerberg once boasted that his company spends more on safety than Twitter’s annual revenue. But the platform has often been the first mover on significant policy changes with real-world impacts, effectively creating a floor for the industry as a whole.

By taking the first step, Twitter put pressure on other, larger platforms like Google’s YouTube and Meta’s Facebook to follow suit and provided them with critical political cover to make policy changes that were contentious with those affected. For example, Twitter led the way when it cracked down on hundreds of thousands of accounts pushing the QAnon conspiracy theory in July 2020, saying it had the “potential to lead to offline harm.” Facebook announced similar action in August of that year, followed by YouTube.

Twitter was the first to fact-check Donald Trump, attaching context labels to the then-president’s misleading claims about mail-in ballots. Zuckerberg initially bristled at the idea of fact-checking Trump, but he later reversed course amid intense criticism. Twitter also led the way on targeting COVID-19 vaccine misinformation with labels and a strike system. Weeks later, Facebook announced a similar labeling system, and YouTube eventually followed suit as well.

After Facebook and Twitter temporarily suspended Trump following the January 6 insurrection, Twitter announced on January 8 that it would permanently ban him. YouTube suspended Trump a few days later. Facebook, which initially said it would bar Trump until the end of his term on January 20, 2021, ultimately decided in June 2021 that the suspension would last at least two years.

Twitter wasn’t always effective in enforcing its policy changes, and many users faced hate speech and harassment in the pre-Musk era. But the company nevertheless wrestled with the major issues and often set the standard for how platforms should act on big, politically fraught questions. With Twitter now reversing course under Musk, it’s unclear whether Facebook and Google will feel the pressure to make those tough decisions on their own.

The outlines of Musk’s agenda at Twitter have become increasingly clear in recent weeks, including his intention to reinstate banned accounts, something he has already done with Trump. (The former president says he has no interest in returning to Twitter and plans to stick with his Truth Social platform, though it’s not clear how long that will last.) Musk, who previously criticized Twitter as having a “strong left-wing bias,” has also shredded any sense that Twitter is a politically neutral platform, urging Americans to vote for a Republican Congress and tweeting that he would back Ron DeSantis for president in 2024. He’s also undoing the platform’s efforts to combat dangerous COVID-19 misinformation, announcing that the company will no longer be enforcing those policies.

These moves, combined with reports that racist troll activity has surged on Twitter since Musk took the helm, make it unlikely that Twitter will continue to be a leading voice on content moderation. In fact, Twitter appears to be headed in the opposite direction entirely, becoming a new model of an anything-goes platform like 4chan, regardless of the real-world impacts.

Will the other major platforms follow suit? Recent signals from Facebook are not encouraging. The company, which is set to reconsider its suspension of Trump as soon as January, is reported to have stopped fact-checking the former president following his announcement of a new run for the White House. Meanwhile, Facebook parent company Meta, like other tech giants, laid off thousands of employees, including ones involved with research and integrity.

Without Twitter prodding them to make tough content decisions and absorbing the initial blowback, Facebook and YouTube may be more hesitant to make policy changes that require backbone. That could have a serious impact on the social media landscape, with the companies less willing to tackle misinformation, conspiracy theories, and political violence that take root on their platforms.


Katie A. Paul is the director of the Tech Transparency Project (TTP), where she specializes in tracking extremism, disinformation, and criminal activity on online platforms, such as Facebook. Paul also serves as codirector and cofounder of the Antiquities Trafficking and Heritage Anthropology Research (ATHAR) Project and is a founding member of the Alliance to Counter Crime Online (ACCO).

Fast Company
