- Britain’s media regulator is pressing ahead with a formal investigation into Grok’s alleged production of sexually explicit deepfakes.
- Canada’s privacy watchdog has expanded its probe to include Musk’s AI unit, xAI, over similar concerns.
What happened: Regulators in both countries are continuing their investigations into X and its AI chatbot over deepfakes.
British media watchdog Ofcom confirmed on Thursday that its formal investigation into Elon Musk’s social media platform X and its AI chatbot Grok will continue, despite recent changes by the company to curb problematic image-editing capabilities.
Ofcom said it welcomed xAI’s restrictions on Grok’s ability to edit images of real people into sexually revealing clothing such as bikinis — a tweak aimed at addressing concerns raised by regulators — but added that its probe “remains ongoing” as it seeks to fully understand how Grok came to generate such material and whether X complied with UK law.
The investigation centres on reports that Grok was used to create and share non-consensual sexualised deepfakes, including images of women and children in degrading contexts, raising potential breaches of Britain’s Online Safety Act and related safeguards.
Meanwhile in Canada, the Privacy Commissioner’s office said it is expanding an existing probe into X to include a related investigation into xAI, the company behind Grok, over similar concerns about the creation and distribution of sexually explicit deepfake content.
Both regulators stressed that adjustments by Musk’s companies do not preclude accountability, and that more work is needed to assess compliance with privacy and safety laws.
Also read: UK regulator says its X deepfake probe will continue
Also read: Analysis-Musk dealt blow over Grok deepfakes, but regulatory fight far from over
Why it’s important
The sustained regulatory pressure on Grok highlights the growing global challenge of governing AI systems capable of generating realistic but harmful content. Deepfakes have existed for years, but their integration into widely used, easily accessible platforms like X raises new legal and social risks.
In the UK, Ofcom can impose fines of up to £18 million or 10 per cent of global revenue and even seek court-ordered blocks if laws such as the Online Safety Act are breached. Canada’s expanded probe signals a willingness among privacy authorities to look beyond data collection to the outputs of AI tools when personal information is used without consent.
Taken together, these actions may set precedents for how regulators around the world address non-consensual image manipulation, AI safety and platform responsibility, at a time when both industry and lawmakers are racing to catch up with rapid technological change.
