Grok scandal exposes Online Safety Act flaws
The UK’s relatively new Online Safety Act was sold as the country’s long-overdue answer to the alarming harms of social media.
Brits were promised a framework tough enough to hold Silicon Valley to account, flexible enough not to stifle innovation, and strong enough to protect users, particularly children, from digital abuse.
Barely months into its rollout, controversial tech billionaire Elon Musk’s AI chatbot, Grok, is testing whether that promise was overstated.
Grok, built by Musk’s AI firm xAI and embedded directly into his social media platform, X, has been used at scale to generate sexualised images of women and girls without their consent.
It was reported this week that users have prompted the tool to digitally undress women and children, dress them in bikinis, and recreate pornographic scenarios.
All of these findings came from public threads, visible to anyone scrolling past, and in some cases the images involved minors.
The same reporting confirmed that the platform could produce dozens of sexualised images in a matter of minutes, including depictions of young girls.
High-profile women such as Maya Jama publicly condemned the abuse after discovering AI-edited images of themselves, unclothed, circulating on the platform.
For critics of the UK’s Online Safety Act, this is precisely the type of scenario they warned about.
A law built for platforms, not AI
The Online Safety Act was built around a clear model: platforms host content, users post it, and regulators step in when harm occurs.
But as the technology accelerates, generative AI has, inevitably, broken that logic.
In this case, Grok isn’t just distributing harmful, upsetting content widely; arguably more alarmingly, it is generating that content itself at industrial speed.
While the OSA has made it illegal to create or share explicit images of someone without their consent, including deepfakes, the position is far less black-and-white when the ‘creator’ is an automated system already embedded in a platform.
X, formerly Twitter, has treated the chatbot as a separate product altogether, despite the images appearing in its feed, to its users, within UK jurisdiction.
That distinction may not survive legal scrutiny, but it exposes a structural weakness: the Act did not anticipate AI systems acting as content engines rather than neutral tools.
Ofcom under pressure
Regulator Ofcom has made ‘urgent contact’ with X and xAI to assess whether the firms are complying with their duties under the Act.
This marks a step forward from the pre-OSA era, when regulators often relied on voluntary cooperation, but enforcement credibility remains on the line.
The regulator has moved quickly against offshore pornography sites and smaller operators, issuing seven-figure fines and threatening access blocks.
Grok, though, poses a much harder test: a high-profile, politically charged heavyweight backed by a company that raised $20bn in fresh funding this week.
If enforcement falters here, critics will see it as confirmation that the Act has teeth only when biting easier targets.
Already outdated
The uncomfortable conclusion is that the Online Safety Act may be arriving just as the ground shifts beneath it.
Designed for an internet of posts and platforms, it now faces an internet of models and machines.
Grok’s deepfake scandal does not mean the Act is useless, but it does suggest it is incomplete.
But without quicker guidance, clearer red lines and a more proactive stance on AI-generated harm, the law risks becoming another example of regulation running two steps behind.