British Technology Firms and Child Safety Agencies to Test AI's Ability to Create Abuse Content
Technology companies and child protection organizations will receive permission to evaluate whether artificial intelligence systems can generate child abuse images under new UK laws.
Significant Rise in AI-Generated Harmful Material
The declaration coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the authorities will permit designated AI companies and child safety groups to examine AI systems – the foundational technology for chatbots and visual AI tools – and verify they have sufficient protective measures to stop them from creating depictions of child sexual abuse.
"This is fundamentally about preventing abuse before it occurs," the minister for AI and online safety said, adding: "Specialists, under strict conditions, can now identify the danger in AI systems early."
Tackling Legal Challenges
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI creators and others cannot generate such images as part of a testing process. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.
This law aims to prevent that problem by enabling authorised testers to stop the production of such images at source.
Legislative Framework
The changes are being added by the government as revisions to the crime and policing bill, which is also establishing a prohibition on possessing, producing or distributing AI systems designed to create child sexual abuse material.
Practical Consequences
This week, the official visited the London headquarters of Childline and heard a simulated call to counsellors involving a report of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children facing extortion online, it causes intense frustration in me and rightful anger among families," he said.
Alarming Statistics
A prominent online safety organization reported that cases of AI-generated exploitation material – such as webpages that may contain numerous images – had more than doubled so far this year.
Cases of category A material – the most serious form of abuse – increased from 2,621 visual files to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Depictions of infants to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are launched," commented the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few clicks, giving criminals the ability to create potentially limitless amounts of advanced, photorealistic child sexual abuse material," she added. "Material which additionally commodifies survivors' trauma, and makes children, particularly girls, more vulnerable both online and offline."
Counseling Session Data
Childline also released details of counselling sessions in which AI was referenced. AI-related risks discussed in the conversations include:
- Using AI to evaluate weight, physique and appearance
- AI assistants dissuading children from consulting safe adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated pictures
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and related terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.