- UK government urges Elon Musk’s X to curb harmful AI-generated images from Grok.
- Regulators escalate pressure as France, India and EU condemn outputs as unlawful.
What happened: government presses for urgent action
Britain’s government has publicly told Elon Musk’s social network X to act immediately to stop its AI tool Grok from producing sexualised, non-consensual images of individuals, including women and minors. The UK’s Technology Secretary, Liz Kendall, described the proliferation of intimate deepfake images as “absolutely appalling” and called on the platform to address the problem urgently.
Reuters reported that users were able to prompt Grok to generate images of women and children in skimpy clothing, raising alarm among European authorities. Creating or sharing such images without consent is illegal in the UK, and platforms are required to prevent users from encountering this content and to remove it once they become aware of it.
X’s official Safety account stated that the platform removes illegal content and permanently bans the accounts involved, and that anyone prompting Grok to generate illegal material would face the same consequences as if they had uploaded it directly. Elon Musk reportedly reacted to the controversy with laughing emojis on social media, a response that drew further criticism of how seriously the company takes safety.
The UK regulator Ofcom has contacted both X and its AI arm xAI to ensure compliance with duties under the Online Safety Act. France’s authorities have reported the issue to prosecutors, and India’s government has demanded explanations, labelling the content obscene and a failure of platform safeguards.
Why it’s important
The incident underscores growing tensions between generative AI technologies and existing laws designed to protect users from harm. Grok’s ability to produce exploitative deepfakes highlights gaps in content moderation at a time when similar tools from other companies — such as OpenAI and Google — enforce stricter safeguards.
With multiple jurisdictions condemning the outputs as illegal or harmful, this episode may shape future AI governance and regulatory frameworks. It raises questions about platform accountability, the adequacy of current laws such as the UK’s Online Safety Act, and how tech companies should balance innovation with user safety as AI-driven image generation becomes more accessible.
