Grok, the artificial intelligence chatbot from Elon Musk’s xAI, said on Friday that lapses in safeguards had resulted in “images depicting minors in minimal clothing” appearing on social media platform X and that improvements were being made to prevent this.
Screenshots shared by users on X showed Grok’s public media tab filled with images that users said the bot had produced after they uploaded photos and prompted it to alter them.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said in a post on X. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
“As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” Grok said, referring to Child Sexual Abuse Material.
Grok gave no further details.
In a separate reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring, although “no system is 100% foolproof,” and added that xAI was prioritising improvements and reviewing details shared by users.