Ofcom asks X about reports its Grok AI makes sexualised images of children


Ofcom has made “urgent contact” with Elon Musk’s company xAI following reports its AI tool Grok can be used to make “sexualised images of children” and undress women.

A spokesperson for the regulator said it was also investigating concerns Grok has been producing “undressed images” of people.

The BBC has seen several examples on the social media platform X of people asking the chatbot to alter real images to make women appear in bikinis without their consent, as well as putting them in sexual situations.

X has not responded to a request for comment. On Sunday, it issued a warning to users not to use Grok to generate illegal content including child sexual abuse material.

Elon Musk also posted to say anyone who asked the AI to generate illegal content would “suffer the same consequences” as if they had uploaded it themselves.

xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.

But people have been using Grok to digitally undress people without their consent and without notifying them.

The European Commission – the EU’s enforcement arm – said on Monday it was “seriously looking into this matter” and authorities in France, Malaysia and India were reportedly assessing the situation.

Meanwhile, the UK’s Internet Watch Foundation told the BBC it had received reports from the public relating to images generated by Grok on X.

But it said it had so far not seen images which would cross the UK’s legal threshold to be considered child sexual abuse imagery.

Grok is a free virtual assistant – with some paid-for premium features – which responds to X users’ prompts when they tag it in a post.

Samantha Smith, a journalist who discovered users had used the AI to create pictures of her in a bikini, told the BBC’s PM programme on Friday it had left her feeling “dehumanised and reduced into a sexual stereotype”.

“While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she said.

Under the Online Safety Act (OSA), Ofcom says it is illegal to create or share intimate or sexually explicit images – including “deepfakes” created with AI – of a person without their consent.

Tech firms are also expected to take “appropriate steps” to reduce the risks of UK users encountering such content, and take it down “quickly” when made aware of it.

Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, said the reports were “deeply disturbing”.

She said the Committee found the OSA to be “woefully inadequate” and called it “a shocking example of how UK citizens are left unprotected whilst social media companies act with impunity”.

She also called on the government to take up the Committee’s recommendations to compel social media platforms “to take greater responsibility for their content”.

Meanwhile, European Commission spokesperson Thomas Regnier said on Monday it was aware of posts made by Grok “showing explicit sexual content,” as well as “some output generated with childlike images”.

“This is illegal,” he said, also calling it “appalling” and “disgusting”.

“This is how we see it, and this has no place in Europe,” he said.

Regnier said X was “well aware” the EU was “very serious” about enforcing its rules for digital platforms – having handed X a €120m (£104m) fine in December for breaching its Digital Services Act.

A Home Office spokesperson said it was legislating to ban nudification tools, and under a new criminal offence, anyone who supplied such tech would “face a prison sentence and substantial fines”.


