British Tech Companies and Child Protection Officials to Test AI's Capability to Generate Exploitation Images

Tech firms and child safety organizations will be granted permission to evaluate whether artificial intelligence tools can generate child exploitation material under recently introduced British legislation.

Significant Rise in AI-Generated Illegal Material

The announcement came alongside revelations from a safety watchdog that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the amendments, authorities will permit designated AI companies and child protection groups to inspect AI models – the systems underlying chatbots and image-generation tools – and verify they have sufficient safeguards to prevent them from producing images of child sexual abuse.

The measures are "fundamentally about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous conditions, can now detect the danger in AI systems early."

Addressing Legal Challenges

The amendments address a legal obstacle: because it is illegal to create and possess CSAM, AI developers and others could not generate such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was published online before taking action against it.

This legislation is designed to prevent that issue by helping to halt the production of those images at their source.

Legislative Framework

The changes are being introduced as amendments to criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

This week, the minister visited Childline's London headquarters and listened to a mock-up of a call to counsellors involving an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about children facing blackmail online, it is a cause of intense frustration for me and of rightful anger amongst families," he said.

Concerning Statistics

A prominent internet monitoring organization reported that cases of AI-generated abuse material – counted as web pages, each of which may contain multiple files – had significantly increased so far this year.

Instances of the most severe material – the gravest category of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a vital step to ensure AI tools are safe before they are released," stated the head of the online safety organization.

"AI tools have made it so victims can be victimised repeatedly with just a few simple actions, giving criminals the ability to create potentially endless amounts of advanced, photorealistic exploitative content," she added. "Content which further exploits victims' suffering and makes children, particularly girls, less safe both online and offline."

Counseling Session Information

Childline also published details of counselling sessions in which AI was mentioned. Risks raised in those conversations include:

  • Using AI to evaluate weight, physique and appearance
  • AI assistants dissuading children from talking to trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-manipulated images

Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and related terms were mentioned – four times as many as in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Michael Cox