Undercover video shows Facebook loath to delete toxic content
An undercover investigation has shed new light on the often hidden process by which Facebook removes hateful or violent content from its platform. In some cases, according to the report, it takes a lot for the company’s moderators to remove even the most toxic posts or pages, especially if they are popular.
The investigation, aired on Tuesday by Britain’s Channel 4, centers on a Dublin-based content moderation firm called CPL Resources, which Facebook has used as its main U.K. content moderation center since 2010. An investigative reporter got a job there, revealing how CPL’s content moderators decide whether to remove content that users report as hateful or harmful.
This graphic video depicting a child abuser beating a young boy was left on Facebook for several years, despite requests to have it taken down. #Dispatches went undercover to investigate why the social media network is leaving extreme content on its site.
WATCH NOW @Channel4. pic.twitter.com/3Ft6H5d64q
— Channel 4 Dispatches (@C4Dispatches) July 17, 2018
Perhaps most telling are the revelations about how Facebook polices racist or hateful political speech on the platform. This, after all, is the kind of content that was used at massive scale to influence both the 2016 U.S. presidential election and the U.K.’s Brexit vote.
Normally, if a given page posts five pieces of content that violate Facebook’s rules in a 90-day period, that page is removed, a policy described in documents recently seen by Motherboard. YouTube, by comparison, allows user pages only three strikes in 90 days before deletion.
However, if the Facebook page happens to be a big traffic generator, moderators use a different procedure. CPL is required to put these pages in a queue so that Facebook itself can decide whether or not to ban them. The investigation found that pages belonging to far-right groups with large numbers of followers were allowed to exceed the normal limit on hateful posts, and were moderated in the same way as pages belonging to governments and news organizations.
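The strike rule and the shielded-review queue described above can be summarized in a few lines of code. The Python sketch below is not Facebook’s or CPL’s actual system; it only mirrors the flow the report describes, and every name in it (Page, ModerationAction, STRIKE_LIMIT, is_shielded, decide) is hypothetical.

```python
# A minimal sketch of the reported moderation flow, not Facebook's real code.
# All names here are hypothetical illustrations.
from dataclasses import dataclass
from enum import Enum


class ModerationAction(Enum):
    KEEP = "keep"                      # under the strike threshold
    REMOVE = "remove"                  # ordinary page over the threshold
    ESCALATE = "escalate_to_facebook"  # popular page: queued for Facebook's own review


@dataclass
class Page:
    name: str
    violations_last_90_days: int   # rule-breaking posts in the rolling window
    is_shielded: bool              # high-traffic / high-profile page


STRIKE_LIMIT = 5  # five violating posts in 90 days, per the reported policy


def decide(page: Page) -> ModerationAction:
    """Apply the strike rule as Channel 4 describes it."""
    if page.violations_last_90_days < STRIKE_LIMIT:
        return ModerationAction.KEEP
    # Popular pages are not removed by the outside moderators; they are
    # placed in a queue so Facebook itself decides whether to ban them.
    if page.is_shielded:
        return ModerationAction.ESCALATE
    return ModerationAction.REMOVE


if __name__ == "__main__":
    print(decide(Page("ordinary page", 6, is_shielded=False)))     # REMOVE
    print(decide(Page("high-traffic page", 6, is_shielded=True)))  # ESCALATE
```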
One post contained a meme suggesting a girl whose “first crush is a little negro boy” should have her head held under water. Despite numerous complaints, the post was left on the site.
One CPL moderator told the undercover reporter that the far-right group Britain First’s pages were left up, despite repeatedly featuring content that breached Facebook’s guidelines, because “they have a lot of followers so they’re generating a lot of revenue for Facebook.” Facebook confirmed to the producers that it does have special procedures for popular and high-profile pages, including Britain First.
CPL trainers instructed moderators to ignore hate speech directed at ethnic and religious immigrants, as well as racist content. “[I]f you start censoring too much then people lose interest in the platform . . . It’s all about making money at the end of the day,” one CPL moderator told the undercover reporter.
On Wednesday, Denis Naughten, Ireland’s Communications Minister, said he had requested a meeting with Facebook management over the “serious questions” raised by the exposé, and that company officials would meet with him on Thursday in New York, where he is attending a UN meeting.
“Clearly Facebook has failed to meet the standards the public rightly expects of it,” he said in a statement.
Dispatches reveals the racist meme that Facebook moderators used as an example of acceptable content to leave on their platform.
Facebook have removed the content since Channel 4’s revelations.
Warning: distressing content. pic.twitter.com/riVka6LcPS
— Channel 4 Dispatches (@C4Dispatches) July 17, 2018
Facebook’s complex moderation system, staffed by thousands of employees and contractors working behind closed doors in offices around the world, has become an increasing focus of European reporters amid growing scrutiny by regulators. A series in the Guardian last year exposed a trove of the company’s content policies, which some moderators criticized for their “inconsistency and peculiar nature.”
“The crack cocaine of their product”
One of Facebook’s earliest investors, Roger McNamee, told Channel 4 that Facebook’s business model relies on extreme content.
“From Facebook’s point of view this is, this is just essentially, you know, the crack cocaine of their product, right? It’s the really extreme, really dangerous form of content that attracts the most highly engaged people on the platform. Facebook understood that it was desirable to have people spend more time on site if you’re going to have an advertising-based business, you need them to see the ads so you want them to spend more time on the site. Facebook has learned that the people on the extremes are the really valuable ones because one person on either extreme can often provoke 50 or 100 other people and so they want as much extreme content as they can get.”
(McNamee was a mentor to CEO Mark Zuckerberg, and recruited Sheryl Sandberg to the company from Google to develop Facebook’s massive advertising business.)
This is what makes the Channel 4 exposé so remarkable. It suggests not only that Facebook hosted large amounts of racially and socially charged political content during events like Brexit and the 2016 presidential election, but that its leadership was aware of how shareable that content was, and of the ad impressions it generated.
A Facebook representative took issue with McNamee’s assertion.
“Shocking content does not make us more money, that’s just a misunderstanding of how the system works,” he told Channel 4. “People come to Facebook for a safe secure experience to share content with their family and friends.” The spokesperson offered no figures to support his claim.
The Silicon Valley giant responded to the investigation in a blog post on Tuesday, saying it was retraining its moderation trainers and fixing other “mistakes.” In an interview with Channel 4, Facebook vice president of global policy Richard Allan described the steps the company was taking and apologized for the “weaknesses” the broadcaster had identified in the platform’s moderation system.
Separately on Wednesday, Facebook addressed growing criticism about the role of its platform in inciting deadly mob violence in some countries, telling reporters that it would begin to remove misinformation from Facebook that leads to physical harm.
“We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline. We have a broader responsibility to not just reduce that type of content but remove it,” Tessa Lyons, a Facebook product manager, told the New York Times. The new policy does not apply to Instagram or WhatsApp, which has also been implicated in spreading dangerous rumors.
In an interview with Recode published on Wednesday, CEO Mark Zuckerberg offered a head-scratching explanation for why Facebook should permit certain content. “I just don’t think that it is the right thing to say we are going to take someone off the platform if they get things wrong, even multiple times,” he told Recode’s Kara Swisher.
In April an undercover video aired by Channel 4 helped expose the sometimes incendiary methods by which Cambridge Analytica tried to influence voters, including by using Facebook. The broadcaster’s new revelations help shed more light on the platform’s role in that equation, and reinforce the suspicion that when it comes to toxic but popular content, Facebook prefers to look the other way. Deleting social media content, be it fake news or hate speech or other terrible things, is a messy business. But from Facebook’s point of view, deleting content can look like bad business, too.