Are ministers ignoring the AI copyright crisis?

The UK government recently rejected a House of Lords-backed amendment to the data bill for the second time, intensifying a stand-off between policy makers and the creative industries over the role of copyright in AI development.
At the centre of this dispute is a provision, introduced by Baroness Beeban Kidron, that would require AI firms to disclose the copyrighted works used to train their models and secure consent from rights holders.
Supporters have said the amendment is vital to safeguarding intellectual property in the face of rapid AI expansion.
Ministers argue the proposal risks fragmenting regulation and undermining ongoing consultations.
Despite strong cross-party support in the House of Lords, the House of Commons voted again this week to remove the amendment, leaving the bill in legislative limbo.
Creative sector backlash
This legislative deadlock has drawn sharp criticism across the creative industries.
Sir Elton John, one of over 400 signatories to a letter urging tougher safeguards, accused the government of enabling “theft” by allowing tech firms to train generative AI tools on copyrighted material without approval or remuneration.
“It’s criminal”, he told the BBC. “They’re robbing young people of their legacy and income”.
Other artists involved included Sir Paul McCartney, Dua Lipa and Kate Bush, while organisations such as UK Music and the Publishers Association warned of long-term economic and cultural damage if protections are not enforced.
“The government is on the brink of offering up the country’s music industry as a sacrificial lamb”, said Tom Kiehl, chief executive of UK Music. “We cannot let AI firms strip-mine our cultural output without consequence”.
The concerns extend beyond music. Authors, visual artists and publishers have also raised alarm over generative AI models ingesting troves of online content without permission or compensation.
Dan Conway, chief executive of the Publishers Association, described it as a “great copyright heist”, calling for AI developers to pay for the creative and research content used to train their models.
“We are not opposed to innovation”, he added, “but teenage demands from the tech sector for free content must be challenged”.
Government’s cautious approach
The government has thus far resisted direct regulatory intervention. A spokesperson said that no changes to copyright law would be made, “unless we are completely satisfied they work for creators”.
Minister Baroness Neville-Rolfe, speaking in the House of Lords, said that no country has yet solved the AI copyright debate.
“We are exploring the options thoroughly”, she said, “but premature legislation could create legal uncertainty and stifle innovation”.
Instead, the government has committed to further consultations and will publish an economic impact assessment later this year.
It is also considering a new “opt-out” regime, under which copyright holders would have to actively signal if they do not want their work included in AI training data.
Yet, critics argue that this reverses the burden of responsibility and favours Silicon Valley heavyweights over UK-based creators.
The Society of Authors, which represents professional writers, said the model allows commercial AI use without any clear consent.
“AI tools have benefited tremendously from the hard work of UK authors – without their knowledge, and against their will”, said the society’s chief executive, Anna Ganley.
“The government seems content to let these practices continue to protect the interests of the tech sector”.
Industry solutions
While much of the current debate revolves around regulation and rights, figures in the tech sector are calling for a more thoughtful middle ground, one that recognises the value of the UK’s creatives and the potential of AI simultaneously.
Andy Parsons, senior director of content authenticity at Adobe, spoke to City AM about the “flood of AI-generated content” in the creative space, as well as growing public confusion over what is real, who made it, and how it is made.
Adobe has launched a content authenticity program as an attempt to respond to this problem.
“We sometimes think of this as a nutritional label”, he explained, “just like you can walk into any food shop and you don’t have to exercise your basic right to know what’s in the food – you read the label. We think it should be the same with digital content”.
The firm’s system seeks to solve this issue by embedding information like authorship or edits directly into the file.
“It’s a measure towards online safety and transparency that should be non-controversial and non-partisan”, he claimed.
In Parsons’ view, giving creators control is absolutely key. He told City AM: “Creators who use our tools want to make sure that their livelihoods are safeguarded… in a world where AI is encouraged and helpful, but not, in our point of view, intended to replace them in any way”.
“It’s a very distinct technical challenge to make sure the creators who contribute to those datasets are compensated in a meaningful way”, he added. Yet it is a hard problem to solve.
Looking ahead, he argued that digital authenticity tools like content credentials could be a foundation for future licensing systems and public trust. “I think authentic content is going to be more valuable than ever as a result of AI”, he added.
Commercial momentum outpaces regulation
Despite policy uncertainties, commercial roll-out of AI-powered creative tools is accelerating.
Amazon-owned Audible recently announced its plans to use AI-generated voices for audiobook narration and translation.
The firm is offering publishers both end-to-end production services and self-service tools to generate synthetic voices in multiple languages.
While publishers are exploring efficiencies, author groups remain wary.
“Opportunities must be transparent both to authors and consumers”, said Ganley. “Authors must be included in the process and must be able to choose whether their work is narrated by a human or synthetic voice”.
There are also concerns that AI-generated works may end up feeding future AI training sets, further eroding the value of creative professions.
As parliament continues to debate the data bill, pressure is mounting for a coherent UK strategy on AI and intellectual property.
Creative industry leaders have called on prime minister Keir Starmer to intervene directly and ensure that regulation keeps pace with technological change.
In the meantime, many in the sector feel that the country risks undermining its global leadership in creative rights.