Google’s Jigsaw Develops Free Terrorism Moderation Tool For Small Websites, Publishers
Advertisers trust publishers to rid websites of terrorism-related content. Technology companies like Google and Meta have been working to build tools to make that happen, especially for smaller companies.
Working through its business unit Jigsaw, and with the UN-backed initiative Tech Against Terrorism, which helps tech companies police online terrorism, Google has developed a free moderation tool for smaller websites. The companies say the technology can identify and remove terrorism-related material.
The move comes as legislation in the U.S., the U.K. and the European Union is prompting companies to do more to rid the internet of what lawmakers deem illegal content with themes of terrorism, extremism and violence.
“There are a lot of websites that just don’t have any people to do the enforcement,” Yasmin Green, CEO at Jigsaw, told The Financial Times. “It is a really labor-intensive thing to even build the algorithms [and] then you need all those human reviewers.”
Green told the Financial Times that Google wants a healthier internet, though the need for one extends well beyond Google to the wider web.
Jigsaw’s project is supported by GIFCT (the Global Internet Forum to Counter Terrorism), a non-governmental organization founded in 2017 by Facebook, Microsoft, Twitter and YouTube to foster partnerships between tech platforms, according to The Financial Times. GIFCT maintains a database of terrorist content, shared among members, that companies can feed into their moderation systems to detect known material of this nature.
Meta, the parent company of Facebook, launched its own content moderation tool in December: the Hasher-Matcher-Actioner (HMA) Trust & Safety Platform, an open-source tool it says can help platforms fight terrorist and violent extremist content online.
The launch came after the company in August removed hundreds of Facebook and Instagram accounts associated with the Proud Boys for violating its ban on the group.
HMA can identify suspicious content such as copies of images or videos that violate certain guidelines and have been flagged by users as inappropriate.
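The general technique behind tools like HMA is hash-based matching: a compact "fingerprint" is computed from an image or video, and new uploads are compared against a database of fingerprints of known violating material. The sketch below is a minimal illustration of that idea using a simple average hash over a grayscale thumbnail and Hamming distance for near-duplicate detection; it is not Meta's actual implementation (production systems use more robust perceptual hashes, such as PDQ), and the names and threshold here are illustrative assumptions.

```python
# Illustrative sketch of hash-based content matching (NOT Meta's HMA code).
# Idea: hash known-bad media once, then flag uploads whose hash is within a
# small Hamming distance of any entry in the shared database.

def average_hash(pixels):
    """64-bit hash over a flattened 8x8 grayscale thumbnail: each bit is 1
    if that pixel is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_database(pixels, database, threshold=5):
    """Flag content whose hash is close to any known-bad hash.
    The threshold allows re-encoded or slightly altered copies to match."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= threshold for known in database)

# A known-bad thumbnail (8x8 grayscale values, flattened) is hashed once.
known = [10] * 32 + [200] * 32
database = {average_hash(known)}

# A slightly altered copy (e.g. re-compressed) still matches...
altered = [12] * 32 + [198] * 32
print(matches_database(altered, database))  # True

# ...while unrelated content does not.
other = [200, 10] * 32
print(matches_database(other, database))  # False
```

The key design point, which is why a shared database like GIFCT's is valuable, is that hashes can be exchanged between companies without sharing the underlying terrorist material itself.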