British Tech Companies and Child Protection Agencies to Test AI's Capability to Create Exploitation Content
Technology companies and child protection organizations will be granted permission to evaluate whether artificial intelligence systems can produce child exploitation images under new British laws.
Significant Rise in AI-Generated Harmful Content
The announcement came as findings from a safety watchdog showed that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the changes, the government will allow approved AI companies and child protection groups to examine AI models – the foundational technology for conversational AI and visual AI tools – and verify they have sufficient protective measures to prevent them from producing images of child exploitation.
"It is fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan. "Experts, under strict conditions, can now identify the danger in AI models promptly."
Tackling Legal Obstacles
The changes have been implemented because it is against the law to produce and possess CSAM, meaning that AI creators and others could not generate such content as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.
The new law is designed to prevent that problem by enabling the creation of such material to be stopped at its source.
Legislative Framework
The government is introducing the changes as amendments to criminal justice legislation, which also implements a prohibition on possessing, creating or distributing AI models designed to create exploitative content.
Practical Impact
Recently, the official visited the London base of a children's helpline and heard a mock-up call to counsellors involving a report of AI-based abuse. The interaction portrayed a teenager requesting help after facing extortion using an explicit AI-generated image of themselves.
"When I learn about young people experiencing extortion online, it is a source of extreme frustration to me and of rightful anger amongst families," he stated.
Concerning Data
A prominent internet monitoring foundation stated that cases of AI-generated exploitation content – such as webpages that may include multiple images – had significantly increased so far this year.
Instances of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.
- Female children were predominantly victimized, making up 94% of prohibited AI depictions in 2025
- Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are released," stated the head of the internet monitoring foundation.
"AI tools have made it so victims can be targeted repeatedly with just a few simple actions, giving criminals the ability to make potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which additionally exploits victims' trauma, and makes young people, especially female children, less safe both online and offline."
Support Interaction Information
Childline also published details of counselling sessions where AI has been referenced. AI-related risks mentioned in the conversations include:
- Employing AI to rate weight, physique and looks
- Chatbots dissuading children from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Online extortion using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling sessions where AI, conversational AI and related terms were discussed, four times as many as in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy applications.