Europe’s Gaza misinformation crackdown could set a dangerous precedent


By Issie Lapowsky

Misinformation has continued to spread rapidly online in the 10 days since the outbreak of the Israel-Hamas war, with doctored images and mislabeled videos pushing false claims about everything from the nature of the attacks to the extent of U.S. aid to Israel.

Almost immediately after the attack by Hamas, the European Commission responded to the surge in false information by issuing a series of stern warnings to major tech companies, including Meta, X, TikTok, and YouTube, saying their platforms are being used to disseminate “illegal content and disinformation” and urging “mitigation measures” to prevent any further harms caused by such content. The goal, according to the letters—which European Commissioner Thierry Breton posted on X—is to ensure social media companies are complying with their duties under Europe’s newly enacted Digital Services Act (DSA), a sweeping piece of legislation that imposes new content moderation obligations on social media platforms operating in the EU. 

But experts on the DSA argue that by publicly pressuring platforms to remove what it deems to be “disinformation,” the Commission risks crossing a line into the very kind of censorship that the legislation was crafted to avoid. 

“This is a huge PR fail for the Commission,” says Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center. “They want to evangelize the DSA as a model, and Breton is instead showing the world what looks like huge potential for abuse.”

In the letters, Breton stops short of explicitly demanding that certain pieces of legal content be removed, and in a statement on X, he wrote, “The #DSA is here to protect both freedom of expression & our democracies—including in times of crisis.” And yet, experts worry about the precedent that Breton’s very public and confrontational chastisement of these platforms is likely to set. “They won’t be able to have an open and frank dialogue about what they think works and what doesn’t work and why,” says Zach Meyers, senior research fellow at the Centre for European Reform, a European think tank. “They’re going to be really nervous about the fact that certain commissioners think that they can score some political points by making it public and taking an aggressive approach.”

The DSA intentionally walks a fine line to avoid calling for the outright suppression of legal speech, a possibility member states and civil liberties advocates worried about early on. “There was a real concern from a number of countries that it would turn into a censor’s charter, and it would create a whole lot of murky, gray area where you’d have content that isn’t illegal, but it is illegal if you put it on social media,” Meyers says.

Instead, the DSA requires so-called very large online platforms with more than 45 million monthly active users to remove explicitly illegal content—like terrorist propaganda or child sexual abuse material—and also to take steps to mitigate the risks of other potentially harmful forms of speech, like disinformation, election interference, and harm to minors. But the law doesn’t explicitly outline how platforms should mitigate those risks and leaves open the possibility that mitigation will look different depending on the platform. To keep social media companies accountable, the DSA requires that they submit to a risk assessment and annual independent audits. In this way, the law focuses on whether companies have the right processes in place, not whether they take down a given piece of problematic content. 


“The DSA has a bunch of careful, procedurally specific ways that the Commission or other authorities can tell platforms what to do. That includes ‘mitigating harms,’” Keller says. The problem with Breton’s letters, she argues, is that they “blow right past all that careful drafting, seeming to assume exactly the kind of unconstrained state authority that many critics in the Global South warned about while the DSA was being drafted.”

While the letters do draw platforms’ attention to the proliferation of illegal content, they also point to the existence of content that is not illegal, but is merely false or misleading. In his letters to X and TikTok, for instance, Breton cited reports of “fake and manipulated images and facts” circulating on those platforms as cause for concern about the companies’ DSA compliance. In the case of X, the Commission has sent a formal request for information to the company as part of what Breton described as “a first step in our investigation.”

In an email to Fast Company, a Commission spokesperson reiterated that both the DSA and the EU’s Terrorist Content Online Regulation require platforms to remove content that can be associated with Hamas or that incites violence or glorifies terrorist offenses. But the spokesperson added, “Nothing in the DSA obliges online intermediaries to remove lawful content. On the contrary, the Digital Services Act will end the current situation in which online intermediaries are not subject to any minimum standards or held accountable for their content moderation practices.”

Still, Ashkhen Kazaryan, senior fellow of free speech and peace at the nonprofit Stand Together, objects to the implication in these letters that the mere existence of harmful, but legal, content suggests companies aren’t living up to their obligations under the DSA. After all, there are other interventions, including warning labels and reducing the reach of content, that platforms may be using rather than removing content altogether. Particularly in times of war, Kazaryan, who is a former content policy manager for Meta, says these alternative interventions can be crucial in preserving evidence to be used later on by researchers and international tribunals. “The preservation of [material] is important, especially for things like actually verifying it,” Kazaryan says, pointing to instances where evidence of Syrian human rights abuses has been deleted en masse.

Of course, it’s important to note that none of this means platforms’ response to the Israel-Hamas war has been perfect—or even adequate. And the particularities of one government’s implementation of an online speech law pale in comparison to the gravity of the violence and destruction that has already taken place in Israel and Gaza. But given that the DSA went into effect in August, this war is, in some ways, the first major test of a law that critics feared could be used as an instrument of censorship. Even some who supported the law are now concerned the Commission is failing that test. “I am hearing ‘I told you so’ messages from those critics,” Keller says. “As someone who is usually a DSA defender, this makes me feel like a chump.”
