Government drops AI copyright opt-out plan after industry backlash
The government has scrapped its preferred proposal to allow AI firms to train on copyrighted material by default, as it shifts its approach after sustained opposition from the UK’s creative sector.
In a report published on Tuesday morning, ministers confirmed they are U-turning from the proposed ‘opt-out’ model, which would have allowed developers to use copyrighted works unless rights holders actively blocked access.
The government will instead revisit the issue without a single preferred option.
The decision follows months of mounting criticism from artists and media groups, who have long argued the model would weaken copyright protections and undermine the commercial foundations of the UK’s £146bn creative sector.
Announcing the change, tech secretary Liz Kendall said the government had “listened” to concerns and would take more time to find a workable framework.
“At the end of 2024, the government’s preferred way forward was to enable AI developers to train on copyrighted works, with an opt-out for rightsholders”, she said.
“This was overwhelmingly rejected. We can confirm that the government no longer has a preferred option.”
The reset brings the government closer to the position initially set out by the House of Lords Communications and Digital Committee, which warned earlier this week against introducing a sweeping copyright exemption and instead backed a licensing-first approach.
Peers argued there was “no sound basis” for weakening copyright protections through opt-out means, and called for stronger transparency requirements so creators can see how their work is being used in AI training.
AI growth vs creator rights
Ministers have been clear they do not want to choose between supporting a fast-growing AI sector and protecting the industries that supply much of the data such systems rely on.
“The UK must be an AI maker, not an AI taker,” Kendall said, while insisting creative industries remain “one of our greatest exports” and central to the government’s industrial strategy.
The report sets out a series of next steps rather than a definitive policy, including work on labelling AI-generated content, improving transparency over training data, and exploring how creators can better control the use of their work online.
A consultation on so-called ‘digital replicas’, where AI mimics a person’s likeness or style, is also expected later this year.
The government will also continue developing a Creative Content Exchange, intended as a marketplace for licensing digital content, though details remain limited.
For AI firms and startups, the lack of a clear framework leaves some uncertainty.
Vinous Ali, deputy executive director of Startup Coalition, said: “It is disappointing that a more concrete way forward hasn’t yet been found. However, we commend the government’s determination to get this right.”
She added: “It is critical we find a workable solution that allows our AI startups to go toe to toe with competitors operating in more enabling environments.”
That point reflects a broader concern in the sector that regulatory ambiguity could slow progress, particularly as other jurisdictions move ahead with clearer, if still contested, rules.
The US, EU and Japan have all taken different approaches to text and data mining, with varying degrees of flexibility for AI developers.
The UK’s decision to step back from its initial proposal risks extending that period of uncertainty, even as ministers position the country as a global AI hub.
At the same time, pressure from the creative industries has intensified.
A coalition of major publishers including the BBC, Financial Times and Guardian recently warned that AI systems are already using journalism as training data without permission or payment, calling for clearer standards and enforcement.