Move follows sharp rise in deepfake content and concerns over online abuse, fraud
LONDON:
Britain will work with Microsoft (MSFT.O), academics and experts to develop a system to spot deepfake material online, the government said on Thursday, as it moves to set standards for tackling harmful and deceptive AI-generated content.
While manipulated material has circulated online for decades, the rapid adoption of generative AI chatbots, spurred by the launch of ChatGPT and others, has amplified concerns about the scale and realism of deepfakes.
Britain, which recently criminalised the creation of non-consensual intimate images, said it was working on a deepfake detection evaluation framework to set consistent standards for assessing detection tools and technologies.
“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” technology minister Liz Kendall said in a statement.
Governments spurred into action by non-consensual images
The framework will evaluate how technology can be used to assess, understand and detect harmful deepfake material, regardless of its source, the government said, testing detection tools against real-world threats such as sexual abuse, fraud and impersonation.
That would help the government and law enforcement gain a clearer picture of where gaps in detection remain, it said, adding that the framework would be used to set clear expectations for industry on deepfake detection standards.
An estimated 8 million deepfakes were shared in 2025, up from 500,000 in 2023, according to government figures.
Governments and regulators worldwide, struggling to keep pace with the rapid evolution of AI technology, were spurred into action this year after Elon Musk’s Grok chatbot was found to generate non-consensual sexualised images of people, including children.
The British communications watchdog and privacy regulator are carrying out parallel investigations into Grok.

