Introduction
As artificial intelligence becomes increasingly embedded in social media platforms, regulatory safeguards are being tested in new ways.
Recent scrutiny of Grok AI, an integrated chatbot and image-generation tool on X, has brought those concerns into sharp focus. Reports that users were able to generate sexualised images of real individuals, including children, have prompted regulatory intervention in the UK and abroad.
What Happened?
Grok AI launched in 2023 as an in-app assistant on X. By late 2025, reports emerged that users were generating indecent AI-created imagery of real people.
Some of this content was publicly accessible on X, raising concerns that it could be viewed widely before moderation.
X responded by restricting certain image-generation features to paying subscribers. Critics argued, however, that enforcement focused on suspending individual users rather than addressing potential weaknesses in the system's design.
UK Regulatory Action
Public concern led to formal regulatory engagement. Ofcom, the UK’s communications regulator, issued a notice to X and is assessing whether there have been breaches of the Online Safety Act. The Act imposes duties on platforms to prevent illegal and harmful content, particularly when children are involved.
The Information Commissioner’s Office (ICO) has also opened investigations into X entities to determine whether individuals’ rights under data protection law were infringed. This includes examining whether the system’s design enabled the creation of content that used individuals’ likenesses without consent.
At the centre of these inquiries is the question of whether adequate safeguards were put in place.
International Developments
French authorities are reportedly investigating the non-consensual generation of images, while California’s Attorney General has launched inquiries into xAI over the production of indecent material.
These parallel investigations demonstrate that AI-related harms are increasingly being treated as cross-border regulatory issues.
Legal and Commercial Implications
The creation of indecent images of children constitutes a serious criminal offence. Where platform design contributes to unlawful outcomes, regulators may consider whether companies failed to meet their legal obligations. Potential consequences include regulatory fines, enforcement action, civil claims, and significant reputational damage.
For law firms and their clients, the situation raises practical considerations. Organisations using AI tools may need to reassess contractual arrangements with providers, particularly where liability is allocated to users rather than platforms. Data protection compliance, indemnity clauses, and insurance exposure may all require closer scrutiny. As AI tools become more integrated into commercial systems, regulatory expectations are likely to increase.