Google is making it easier to remove sexually explicit deepfakes from search

July 31, 2024

The tech giant has also changed its Search ranking system to rank explicit deepfake content lower in general.

BY Chris Morris

Google is joining the growing number of companies standing up to sexually explicit deepfakes.

The Alphabet division has made it easier for users to report non-consensual imagery found in search results, including images created by artificial intelligence tools. While it was previously possible for users to request the removal of these images prior to the update, under the new policy whenever that request is granted, the company will scan for duplicates of the non-consensual image and remove those as well. Google will also attempt to filter all explicit results on similar searches.

“With every new technology advancement, there are new opportunities to help people — but also new forms of abuse that we need to combat,” product manager Emma Higham wrote in a blog post. “As generative imagery technology has continued to improve in recent years, there has been a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent.”

Google has also changed its ranking system, lowering explicit deepfake content in general. Even direct searches for explicit deepfakes will instead return “high-quality, non-explicit content — like relevant news articles — when it’s available,” the company wrote.

Websites that have a high number of pages removed from search under these policies will be demoted in the search algorithm as well, making them much more difficult for anyone to find. Google says this approach has worked well for other types of harmful content.

Google’s change to its search engine comes just one day after Microsoft called on Congress to create a “deepfake fraud statute” to combat AI fraud in both images and voice replication, and about one week after Meta’s oversight board said the social media giant fell short in its response to a pair of high-profile explicit, AI-generated images of female public figures on its sites.

The U.S. government has taken a number of steps to curb deepfakes already. Recently, the Senate passed a bill that would allow victims of sexually explicit deepfaked images to sue the creators of those images for damages. And the FCC has banned robocalls with AI-generated voices, which have been on the increase over the past year, especially in the political arena.

Deepfakes continue to propagate, however, and Google acknowledged that even with today’s changes to its Search policy, such images will continue to pop up.

“There’s more work to do to address this issue, and we’ll keep developing new solutions to help people affected by this content,” Higham wrote in the blog post. “And given that this challenge goes beyond search engines, we’ll continue investing in industry-wide partnerships and expert engagement to tackle it as a society.”

ABOUT THE AUTHOR

Chris Morris is a contributing writer at Fast Company, covering business, technology, and entertainment. He helps readers make sense of complex moves in the world of tech and finance and offers behind-the-scenes looks at everything from theme parks to the video game industry. Chris is a veteran journalist with more than 35 years of experience, more than half of which were spent with some of the Internet’s biggest sites, including CNNMoney.com, where he was director of content development, and Yahoo! Finance, where he was managing editor.
