ButSpeak.com
News which Matters.
Google upgrades safety features to automatically filter and demote deepfake content in search results, aiming to reduce exposure and improve user safety.
Google is enhancing its safety features to make it easier to remove deepfakes from search results and to keep these harmful images from ranking highly. The upgrade simplifies the process of requesting the removal of explicit deepfakes, and it will also automatically filter out related search results and remove similar or duplicate images.
In addition, Google will lower the search rankings of websites that frequently host AI-generated deepfake content. Emma Higham, a product manager at Google, noted the success of similar strategies against other types of harmful content. “Our testing shows that this approach will be a valuable way to reduce fake explicit content in search results,” Higham said.
The company said that previous updates have already reduced exposure to explicit image results for deepfake-related queries by more than 70% this year. Higham emphasized that the goal is to allow people to read about the societal impact of deepfakes without encountering non-consensual fake images.
This initiative builds on Google’s previous efforts to curb harmful online content. In May, Google started removing advertisers promoting deepfake porn services. The company also expanded the types of doxxing content eligible for removal in 2022 and began blurring sexually explicit images by default in August 2023.
Non-consensual AI deepfakes have become a growing concern for tech companies. Recently, Meta faced scrutiny from its Oversight Board for inadequately handling sexually explicit deepfakes of real women.
Google’s latest safety feature enhancements represent a significant step in addressing the challenges posed by deepfake content, aiming to create a safer and more informative online environment for users.