The UK’s Information Commissioner’s Office (ICO) and Ofcom opened simultaneous investigations into X and xAI on February 3, 2026, after Grok was used to generate non-consensual sexual imagery, including of children. The regulatory action exposed a broader problem: the Online Safety Act contained a legal gap and did not cover standalone AI chatbots like Grok. On February 16, the government announced an amendment to the Crime and Policing Bill to close the loophole.
What Triggered the Investigation
Grok’s image generation capabilities lacked adequate guardrails against producing non-consensual intimate images. Ofcom identified that while the Online Safety Act covered content on social media platforms, it did not extend to standalone AI chatbot interfaces, leaving Grok’s chatbot outside its reach. The amendment creates new criminal offences for creating non-consensual intimate AI images of adults and extends illegal content duties to all AI chatbot providers, including ChatGPT, Gemini, Copilot, and Grok.
The Broader Data Question
Beyond image generation, the ICO investigation examines what data Grok collects from X users, how that data is processed, and whether users have given meaningful consent. X’s 600 million monthly active users generate training data for Grok through their posts, interactions, and browsing patterns. Unlike Anthropic, which publishes detailed data handling documentation, xAI has provided little transparency about Grok’s training data sources and retention practices.
For users of any AI platform, the UK investigation sets an important precedent: AI companies can be compelled to explain their data practices, and regulatory gaps will be closed rather than tolerated. The amendment’s scope, covering all AI chatbot providers, means it is not targeted at Musk personally but at the entire industry’s data handling practices.
