Elon Musk’s Kanye West fiasco highlights the content moderation dangers Twitter now faces
The train wreck that is Twitter under Elon Musk piles up new dysfunction seemingly every day. In the past 24 hours, Musk has put controversial anti-Semite Kanye West into temporary Twitter jail for tweeting an image of a swastika inside a Star of David, something West did hours after praising Hitler during an interview with conspiracy theorist Alex Jones.
Now, Musk has to face up to new data, published by organizations that track hate speech on digital platforms, including the Center for Countering Digital Hate (CCDH) and the Anti-Defamation League (ADL), showing that hate speech on Twitter has surged like never before.
“Elon Musk has again been exposed as misleading users and advertisers, claiming ‘mission accomplished’ despite his clear failure to meet his own self-proclaimed standards to clamp down on vicious bigotry,” says Imran Ahmed, CEO of the CCDH.
This is all happening despite Musk’s claim last month that the number of impressions (or eyeballs) on hate speech had dropped below a pre-takeover baseline. “Fun level on Twitter has definitely increased!” he wrote on November 23. “I’m having a great time tbh.”
Impressions are the wrong metric, and a misleading one, Ahmed suggests. The CCDH’s data, collected by counting tweets containing slurs, shows that almost all hate speech phrases have seen a “Musk bump.” Before the billionaire took over the company in late October, the N-word was mentioned 1,282 times per day on Twitter. During the week he tweeted about an elevated “fun level,” there were 4,650 such mentions. Since he assumed control of the social network, the slur has been used an average of 3,876 times a day: a 202% increase. (Twitter did not respond to a request for comment.)
The CCDH also analyzed how much engagement (likes and retweets) such hateful content receives, finding that the average number of engagements on tweets containing slurs has skyrocketed from 13.3 pre-takeover to 49.5 post-takeover.
“It’s pretty clear we’re seeing in real time that Elon Musk is learning the lessons that those in the trust and safety field have been well aware of for the last decade-plus,” says Rebekah Tromble, director of the Institute for Data, Democracy & Politics at George Washington University. “It’s simply not the case that you can open things up on a platform and declare it’s free speech and not have any real-world consequences.”
Of course, hate speech doesn’t exist in an online vacuum; it has real-world consequences. “If you see things more frequently, you believe them to be normal,” Ahmed says. “That’s literally the process of normalization. That has a resocializing effect on society. The red lines and the consequences we impose on the limits of speech are how we have in part made a world where you can’t just call black people the N-word, and people like me anti-Muslim slurs.”
That’s something George Washington University’s Tromble agrees with. “Too many people act as if what is said online has different effects from speech said in offline forums,” she says. “That’s simply not the case. Anti-Semitic statements and rhetoric are just as dangerous online as they are anywhere else. They can incite and inspire violence at any moment. He has not given the impression he understands that yet.” Sure enough, research suggests there is a link between online hate speech and offline hatred.
Ahmed also believes that Musk is singularly unsuited to the task of content moderation—which, following mass layoffs, he appears to be trying to oversee himself, with a drastically reduced team of moderators. “He hasn’t come up with an engineering solution to it because there is no engineering solution to the moral question of where you place the red lines on your platform. He hasn’t come up with a civil society-based solution. He said he was going to set up a council: he didn’t. He lied.”
One of the key issues is that Musk seems to have believed he had simple, elegant solutions to intractable problems that experts in content moderation have been wrestling with for years. “Elon Musk undervalued a lot of things going on in Twitter internally,” says Jeremy Blackburn, assistant professor at Binghamton University’s Computer Science Department, and a member of the iDRAMALab, which tracks hate speech online. “He undervalued the engineering staff, but definitely undervalued the content moderation team, and content moderation in general.”
As for West, Ahmed believes that Musk was naïve to think that giving the rapper a platform on Twitter was going to end well. “It’s the old tale of the frog and the scorpion. What did you expect the scorpion to do? He’s going to sting,” he says. “For Musk to go: ‘I tried my best and there’s nothing I can do,’ well, what did you think people did before?”