Meta’s Oversight Board raises concerns over automated moderation of hate speech
The company automatically rejected an appeal after it failed to remove a Holocaust denial post from Instagram.
Meta’s Oversight Board has raised concerns over automated moderation while overturning a decision by the company to leave a Holocaust denial post on Instagram. Holocaust denial is deemed hate speech under Meta’s policies. The post in question depicted Squidward from SpongeBob Squarepants and purported to include true facts about the Holocaust. However, the claims “were either blatantly untrue or misrepresented historical facts,” the Oversight Board said.
Users reported the post six times after it first appeared in September 2020, but in four of those instances Meta’s systems either determined that the content didn’t violate the rules or closed the case automatically. In early 2020, as the COVID-19 pandemic took hold, Meta had started automatically closing content reviews to reduce the workload on human reviewers and free up capacity for manual review of high-risk reports. Even so, two of the Squidward post reports were deemed non-violating by human reviewers.
Last May, one user lodged an appeal against Meta’s decision to leave the offending content on Instagram. But this appeal was again closed automatically by Meta due to its COVID-19 automation policies, according to the Oversight Board. The user then appealed to the board, which took up the case.
The board conducted an assessment of Holocaust denial content across Meta’s platforms and found that the Squidward meme was used to spread various types of antisemitic narratives. It notes that some users attempt to evade detection and continue spreading Holocaust denial content by using alternate spellings of words (such as replacing letters with symbols) and by using cartoons and memes.
The Oversight Board said it’s concerned that Meta continued to employ its COVID-19 automation policies as of last May, “long after circumstances reasonably justified them.” It also cited unease over “the effectiveness and accuracy of Meta’s moderation systems in removing Holocaust denial content from its platforms.” It notes that human reviewers can’t granularly label offending content as “Holocaust denial” (such posts are filtered into a “hate speech” bucket). The board also wants to know more about the company’s ability to “prioritize accurate enforcement of hate speech at a granular policy level” as it leans more heavily on AI for content moderation.
The board recommended that Meta “take technical steps” to ensure it systematically and sufficiently measures the accuracy of its enforcement against Holocaust denial content. That includes gathering more granular information. The board also asked Meta to confirm publicly whether it has ceased all COVID-19 automation policies it established during the onset of the pandemic.
When asked for comment, Meta directed Engadget to its formal response to the board’s decision on its transparency site. The company agrees that it left the offending post on Instagram in error and, at the time the board took up the case, Meta said it had removed the content. Following the board’s case decision, Meta says it will “initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly.” It plans to review the board’s other recommendations and issue an update later.