UK Technology Firms and Child Safety Agencies to Examine AI's Ability to Create Abuse Content

Tech firms and child protection agencies will receive permission to assess whether AI tools can generate child exploitation images under recently introduced UK legislation.

Significant Increase in AI-Generated Harmful Material

The announcement coincided with figures from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the authorities will allow designated AI companies and child protection organizations to inspect AI systems – the foundational technology for chatbots and visual AI tools – and verify they have adequate protective measures to stop them from producing depictions of child exploitation.

"This is ultimately about preventing exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect the risk in AI systems early."

Addressing Regulatory Obstacles

The amendments were needed because it is against the law to create or possess CSAM, meaning that AI developers and other parties could not generate such images even as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM appeared online before addressing it.

The new law is designed to avert that problem by enabling experts to halt the creation of such material at its source.

Legal Structure

The authorities are introducing the amendments as revisions to the criminal justice legislation, which also establishes a ban on possessing, producing or sharing AI systems developed to generate child sexual abuse material.

Real-World Consequences

Recently, the minister toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised deepfake of himself created using AI.

"When I learn about young people facing extortion online, it is a source of extreme frustration for me and of rightful concern amongst parents," he said.

Alarming Data

A leading online safety foundation reported that instances of AI-generated abuse material – such as webpages that may contain numerous images – had significantly increased so far this year.

Cases of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the head of the internet monitoring organization.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few simple actions, giving offenders the ability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies survivors' suffering, and makes children, particularly female children, less safe both online and offline."

Counseling Session Data

Childline also published details of support sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Using AI to rate body size, physique and appearance
  • Chatbots dissuading young people from talking to trusted adults about harm
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked pictures

Between April and September this year, Childline conducted 367 support sessions in which AI, conversational AI and related topics were mentioned, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions were connected with psychological wellbeing, including using AI assistants for support and AI therapy applications.

Amy George