AI, AI Baby: Regulators' reactions to Grok’s egregious errors
Kate Washington - 30 January 2026
Recent restrictions placed on Grok, the generative AI chatbot developed by Elon Musk’s xAI and integrated into X (formerly Twitter), reveal an important development in how regulators are approaching AI.
Their response to Grok suggests a growing willingness to intervene pre-emptively, rather than addressing harm only after it has occurred.
This marks a significant contrast with earlier AI controversies and signals how legal work in this area is evolving. Where an AI system’s design makes harmful behaviour predictable and repeatable, regulators appear increasingly prepared to intervene early.
Significant concerns about Grok arose after widespread reports that its image-generation features were being used to create explicit, non-consensual images of real people, both celebrities and members of the public. The figures reported were staggering: as many as 3 million explicit images created in less than two weeks. In response, regulators moved quickly to restrict access to the tool.
Regulators in several jurisdictions, including Malaysia, Indonesia and the Philippines, restricted access to Grok altogether as a precautionary measure while regulatory processes were ongoing.
Ofcom has now launched an investigation into X over the severity of the content being created by Grok at the request of X users, which includes “sexualised images of children”. The Online Safety Act 2023 is the primary legislation being used to hold X to account in the UK. It places the onus on the social media platform to assess the risk of UK users encountering illegal content, to take appropriate steps to prevent illegal content being disseminated to people in the UK, and to remove any illegal content as soon as the platform becomes aware of it.
Australia, France, India and the European Commission have also launched investigations into the platform. Unlike earlier AI litigation and regulation, this intervention focuses on the design and systemic capabilities of Grok itself, rather than on isolated instances of individual misuse.
This approach contrasts with how other high-profile AI incidents have been treated. In the US, litigation is currently under way involving ChatGPT and the suicide of a teenager. Legal scrutiny there is centred on the specific facts of the interaction between the chatbot and the young person, and the central question is who was liable for the harm caused. The response has followed a familiar legal route: courts assessing liability after the event, rather than regulators acting to limit or restrict the use of ChatGPT.
By contrast, the response to Grok reflects a proactive approach, with authorities acting pre-emptively to prevent foreseeable harm arising from the way a system operates at scale. Updates to existing legislation, such as the Sexual Offences Act 2003, which now includes provisions criminalising offences relating to digital content, also show the UK taking a dynamic approach to keeping pace with an ever-evolving, technology-driven world.
This shift has practical consequences for the legal industry. Lawyers advising technology clients will no longer focus solely on responding to litigation or enforcement after launch. Instead, legal advice is likely to move upstream into product development, risk assessment and governance, and questions about safeguards, content controls and compliance structures are likely to become central to commercial decision-making.
AI regulation is rapidly ceasing to be a niche, speculative area. The contrast between the treatment of Grok as a platform-wide issue and the handling of earlier, isolated incidents illustrates a broader shift in both the pace of AI development and how the law responds.
As AI becomes embedded across industries, lawyers will play an increasingly central role in shaping how these tools are governed and advising clients on the regulatory and liability issues raised by new technology.