- xAI, Elon Musk’s artificial intelligence company, secured $20bn in a major funding round even as its chatbot Grok faces global criticism for generating sexualised and non-consensual imagery.
- Regulators and civil society groups are calling for tighter oversight of AI content moderation, highlighting broader concerns about safety and platform governance.
What happened: xAI funding milestone and Grok backlash
Elon Musk’s AI firm xAI has announced a $20bn investment round that values the company at about $250bn, a major milestone in its push to compete in the increasingly crowded artificial intelligence sector. The round drew support from major investors and signals continued confidence in the commercial potential of advanced AI.
At the centre of xAI’s public profile is its AI chatbot Grok, which is integrated into Musk’s social media platform X (formerly Twitter). Originally marketed as an AI companion capable of conversational responses and creative tasks, Grok has drawn a wave of criticism over its content generation. Critics say users have exploited the system to produce sexualised, non-consensual images of women and minors, particularly through prompts on X that manipulate photographs or generate explicit imagery.
A recent report by the Internet Watch Foundation (IWF) highlighted how Grok’s image generation feature was used to create deeply concerning material, including content resembling child sexual abuse imagery. The watchdog described users on a dark web forum boasting about techniques for creating such content, findings that have prompted calls for urgent intervention by regulators.
The funding announcement comes at a time of heightened scrutiny. In the United Kingdom, the House of Commons Women and Equalities Committee has voiced serious concern about the proliferation of explicit AI-generated content on X and is considering actions ranging from fines to platform restrictions. In the European Union, regulators have ordered X to retain all documents related to Grok until the end of 2026 as part of compliance reviews under the Digital Services Act.
Why it’s important
The juxtaposition of xAI’s massive capital raise with the backlash surrounding Grok illustrates a central tension in the current AI landscape. On the one hand, investors are pouring money into advanced AI firms in pursuit of transformative technology and commercial returns. On the other, the societal and ethical implications of these technologies are prompting urgent questions about oversight, accountability and safety.
Deepfake technology and image generation have rapidly evolved, and Grok’s ability to produce explicit, manipulated content has highlighted weaknesses in current moderation systems. This echoes broader concerns raised by AI experts who argue that many generative models, when exposed to unfiltered training data or insufficient safety constraints, can produce harmful outputs. As one AI researcher put it in a recent analysis of similar technologies, “models are reflective of the data they’re trained on,” and without robust safeguards even capable systems can produce offensive content.
Regulators in multiple jurisdictions are now stepping in. The European Commission’s retention order for Grok documents suggests it is preparing a deeper examination of xAI’s compliance with the Digital Services Act, particularly in relation to illegal and harmful content on platforms like X. Likewise, India’s Ministry of Electronics and Information Technology has called for an action plan, underscoring the global nature of regulatory concern over AI content moderation.
For users and civil society groups, the core issues are not merely commercial. They revolve around trust, safety and the real-world impact of AI systems that can be exploited to generate harmful material. The IWF’s intervention, for example, stresses the danger of AI making such content more accessible and mainstream, potentially bypassing traditional safeguards against child sexual exploitation.
Critics also question whether the rush to monetise AI technology has outpaced the development of responsible governance frameworks. If major platforms are unable to effectively moderate powerful generative tools, the consequences may extend beyond isolated incidents to broader harms affecting vulnerable groups and public discourse.
