Big Tech CEOs face Congress: Here’s how Facebook, Twitter, and Google say they’re fighting extremism
By Connie Lin
March 25, 2021
Under the spotlight again, the CEOs of Facebook, Twitter, and Google are testifying before Congress today about how their social media behemoths are combating extremist content and misinformation campaigns, such as theories that COVID-19 is a hoax or that the 2020 presidential election was rigged.
Such campaigns have fueled widespread skepticism of the COVID-19 vaccine and, in January, helped incite an insurrection at the U.S. Capitol, when a far-right mob attempted to block certification of the presidential election results.
The hearing, titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation,” began streaming live at 12 p.m. Mark Zuckerberg of Facebook, Jack Dorsey of Twitter, and Sundar Pichai of Google each submitted written testimony ahead of the hearing.
Here’s some of what they’re saying:
Mark Zuckerberg:
“The vast majority of what people see on Facebook is neither political nor hateful. Political posts make up only about 6 percent of what people in the United States see in their News Feed, and the prevalence of hateful content people see on our service is less than 0.08 percent.”
“We work with 80 independent third-party fact-checkers certified through the International Fact-Checking Network . . . If content is rated false, we put a warning label on it [and] significantly reduce its distribution. This cuts future views by more than 80%.”
People who “liked, commented on, or reacted to posts with Covid-19 misinformation that we later removed for violating our policy . . . will see a thumbnail of the post and more information about where they saw it, how they engaged with it, why it was false, and why we removed it.”
“To date, we have banned over 250 white supremacist groups and 890 militarized social movements . . . We have also continued to enforce our ban on hate groups, including the Proud Boys and many others.”
Jack Dorsey:
“Content moderation in isolation is not scalable, and simply removing content fails to meet the challenges of the modern Internet. This is why we are investing in two experiments—Birdwatch and Bluesky.”
“In January, we launched the ‘Birdwatch’ pilot, a community-based approach to misinformation. Birdwatch is expected to broaden the range of voices involved in tackling misinformation, and streamline the real-time feedback people already add to Tweets.”
“Twitter is also funding Bluesky, an independent team of open source architects, engineers, and designers, to develop open and decentralized standards for social media . . . Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms . . . These standards will support innovation, making it easier for startups to address issues like abuse and hate speech at a lower cost.”
Sundar Pichai:
“Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act . . . Without Section 230, platforms would either over-filter content or not be able to filter content at all. Recent proposals to change Section 230—including calls to repeal it altogether—would not serve that objective well. In fact, they would have unintended consequences—harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”