Google Reveals AI Model Gemini Used to Create Deepfakes and Child Abuse Content, Australia Reacts

Google has disclosed that it received more than 250 complaints worldwide over a period of nearly a year regarding its artificial intelligence software being misused to generate deepfake terrorism material, according to the Australian eSafety Commission.

The technology giant, owned by Alphabet, also reported receiving dozens of user submissions alleging that its AI model, Gemini, had been exploited to create child abuse content. This revelation emerged as part of Google’s mandatory compliance with Australian law, which requires technology firms to periodically submit reports detailing their harm prevention measures, or risk financial penalties. The reporting period in question spanned from April 2023 to February 2024.

Notably, the Australian eSafety Commission described Google’s disclosure as a “world-first insight” into the ways AI technology is potentially being misused to create illegal and harmful content. eSafety Commissioner Julie Inman Grant emphasised the importance of proactive safety measures in AI development.

“This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated,” Inman Grant said in a statement.

According to Google’s report, the company received 258 user complaints about suspected AI-generated deepfake terrorist or violent extremist content created with Gemini. Additionally, 86 reports alleged the production of AI-generated child exploitation or abuse material.

While Google did not specify how many of these reports were substantiated, it confirmed that it employed hash-matching technology—a method of automatically detecting and removing known child abuse images—to address AI-generated child exploitation material. However, the company did not apply the same approach to detecting AI-generated terrorist or extremist content, the regulator noted.

The Australian eSafety Commission has previously imposed fines on platforms such as Telegram and X for failing to meet reporting requirements. X has already lost one appeal against its A$610,500 (£310,000) fine but intends to challenge the ruling further. Telegram has also indicated plans to contest its penalty.

(With inputs from Reuters)
