- The European Commission is reviewing X’s compliance with EU digital rules, while UK and French authorities pursue parallel probes, all focused on Grok’s harmful content generation.
- Probes in multiple countries reflect broader regulatory unease over AI content governance, deepfakes, and user protection standards.
What happened: EU and UK launch probes into X and Grok’s AI risks
On 26 January 2026, the European Commission formally announced an investigation into whether X complied with its obligations under the Digital Services Act (DSA), particularly whether it assessed and mitigated the risks associated with Grok before deploying it in the EU.
The EU’s review focuses on whether X conducted the required independent risk assessment and whether it identified and addressed potential harms from AI‑generated outputs, including the spread of illegal or harmful material. At the same time, the UK’s Information Commissioner’s Office (ICO) has launched a parallel probe into Grok over concerns about personal data processing and the generation of harmful sexualized imagery, highlighting serious potential privacy and safety risks.
Authorities in France have also entered the fray, with prosecutors raiding X’s Paris offices as part of a coordinated investigation into alleged offenses linked to harmful deepfakes and non‑consensual content. These actions come amid wider global concerns around platform governance and the responsibility of operators when deploying AI tools that can create deepfakes and other risky content.
Why it’s important
The investigations into X and Grok reflect growing unease among regulators about the potential harms stemming from generative AI when embedded within widely used online platforms. The DSA and comparable UK legislation aim to hold platforms accountable not only for user‑generated content but also for the AI models they provide. The European Commission’s focus on risk‑assessment compliance underscores how regulators now expect rigorous pre‑deployment evaluations, not just reactive measures after harm occurs.
This regulatory clampdown raises broader questions about the adequacy of current governance frameworks. Platforms may need to enhance transparency around how AI tools like Grok are trained, tested, and moderated. It also highlights the tension between innovation in AI and the imperative to protect users, particularly vulnerable groups, from harmful, exploitative, or illegal content. Critics argue that without clearer standards on content moderation and risk mitigation, platforms may inadvertently amplify risks despite good intentions.
Moreover, as generative AI becomes more capable and ubiquitous, these cases highlight an urgent need for international cooperation in AI governance, given that platforms like X operate across multiple legal jurisdictions. Policymakers and industry alike will need to navigate the balance between fostering technological advancement and ensuring robust safeguards that protect users in an increasingly AI‑driven online environment.
