• The UK’s media regulator Ofcom has launched a formal investigation into Elon Musk’s platform X and its AI chatbot Grok, following concerns that the tool enabled the generation and sharing of sexually explicit and illegal deepfake imagery.
• Prime Minister Keir Starmer condemned the content as “disgusting” and unlawful, and authorities are considering legal and regulatory actions that could include fines or service restrictions if X fails to comply with safety laws.
What happened: Ofcom probe into Grok misuse
The UK’s Office of Communications, known as Ofcom, has opened an investigation into the social media platform X over alleged harmful outputs from its AI chatbot Grok, particularly sexually explicit and deepfake images that may violate UK law. The probe focuses on whether X adequately assessed and mitigated the risk of Grok generating and disseminating illegal content, especially non-consensual intimate images and child sexual abuse material.
Reports indicate that users prompted Grok to produce sexually suggestive images by digitally altering photographs of individuals without their consent. In response to intense public and regulatory criticism, X and Grok’s developer, xAI, restricted the chatbot’s image generation and editing features on the main platform, limiting them to paying subscribers in an effort to curb misuse. Critics argue, however, that this does not fully address the underlying problem, because separate access points, such as the standalone Grok app, remain capable of producing such images.
The UK government has taken a strong stance. Prime Minister Keir Starmer described the incident as “absolutely disgusting and shameful”, warning MPs that if X cannot control Grok’s output, the government will intervene quickly, including by exercising regulatory powers to restrict or control the service. Technology Secretary Liz Kendall and other officials have echoed this urgency, linking the AI’s misuse to broader concerns about online safety and the abuse of digital technologies.
Ofcom’s investigation will assess compliance with the UK’s Online Safety Act, which requires platforms to protect users from illegal content, including material involving sexual exploitation and abuse. For serious breaches, the regulator can impose fines of up to £18 million or 10 per cent of qualifying worldwide revenue, whichever is greater, and can seek court orders that could cut X off from UK payment systems or internet access.
This development comes amid wider global scrutiny. Several countries, including Malaysia and Indonesia, have restricted access to Grok over concerns about its AI imagery capabilities, citing risks of offensive, obscene or non-consensual content generation. The European Commission has also signalled that it is examining X’s compliance with EU online safety rules.
Also Read: Elon Musk to open source X algorithm amid global scrutiny
Also Read: Elon Musk’s xAI raises $20bn amid mounting backlash over Grok AI deepfakes
Why it’s important
The Ofcom investigation into Grok highlights tensions at the intersection of generative artificial intelligence, content moderation and legal responsibility. As AI systems become capable of synthesising realistic images, the potential for misuse, including the creation of harmful or illegal material, puts growing pressure on regulators to enforce existing laws and to develop new frameworks tailored to generative AI technologies.
The controversy also underscores the limitations of platform governance models that rely heavily on user reporting and reactive measures. Critics of X’s response argue that restricting image generation to paying users merely monetises the ability to create harmful content rather than eliminating it, and that a subscription requirement does not prevent the underlying technology from being abused.
Moreover, the UK’s approach, supported by senior politicians, suggests a shift toward criminalising not only the sharing of illicit imagery but also its creation through AI tools, a stance that carries implications for freedom of expression, platform liability and regulatory reach. Balancing safety, innovation and civil liberties will be a central challenge as authorities grapple with how to regulate AI without stifling beneficial use cases.
For X and Musk’s broader AI ambitions via xAI, the outcome of this investigation could influence user trust, platform governance policies and the willingness of governments to impose stricter obligations, including substantial fines or service restrictions. Global regulators and lawmakers will likely watch the UK’s actions closely, as they may set precedents for oversight of generative AI systems integrated into widely used social platforms.
