UK Tech Firms and Child Protection Agencies to Examine AI's Ability to Create Exploitation Content
Tech firms and child safety organizations will be given the authority to assess whether artificial intelligence systems can generate child sexual abuse material under new UK legislation.
Substantial Rise in AI-Generated Harmful Material
The announcement came alongside findings from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will allow approved AI developers and child safety groups to inspect AI models – the foundational systems behind chatbots and image generators – to ensure they have adequate safeguards in place to stop them from creating depictions of child sexual abuse.
"This is fundamentally about stopping abuse before it happens," stated Kanishka Narayan, adding: "Experts, under strict conditions, can now identify the danger in AI models early."
Addressing Legal Challenges
The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others could not create such content as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was published online before they could act.
This law is aimed at preventing that problem by helping to halt the creation of those materials at source.
Legislative Structure
The government is introducing the changes as amendments to the Crime and Policing Bill, which also establishes a prohibition on possessing, creating or distributing AI systems designed to create child sexual abuse material.
Real-World Consequences
This week, the minister toured the London base of Childline and listened to a simulated call to advisors involving a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I hear about children experiencing extortion online, it is a cause of intense anger in me and rightful concern amongst families," he stated.
Concerning Data
A prominent internet monitoring organization reported that instances of AI-generated abuse content – in the form of web pages, each of which may contain numerous files – have risen sharply so far this year.
Instances of category A content – the most serious form of abuse – increased from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a crucial step to guarantee AI products are safe before they are released," commented the head of the online safety organization.
"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few simple actions, giving offenders the capability to create potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Material which further commodifies victims' suffering, and renders children, especially girls, more vulnerable both online and offline."
Counseling Session Information
The children's helpline also published details of support sessions where AI has been mentioned. AI-related harms discussed in the conversations include:
- Employing AI to evaluate weight, body and looks
- Chatbots discouraging children from talking to safe adults about harm
- Being bullied online with AI-generated content
- Digital blackmail using AI-faked pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and associated topics were discussed – four times as many as in the equivalent period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy apps.