Kendall faces tough test to rescue UK’s broken AI policy
Liz Kendall is expected to replace Peter Kyle as science and tech secretary in a major reshuffle by Keir Starmer, inheriting one of the government’s most urgent briefs: artificial intelligence.
Kyle is reportedly moving to the Department for Business, with the skills brief carved out of the Department for Education.
His departure from the tech role comes as Labour faces mounting criticism for failing to deliver on its pledge of binding AI regulation.
Broken promises
At Mansion House on Wednesday, tech secretary Peter Kyle struck an upbeat note. Britain, he argued, could leapfrog rivals by harnessing AI to cut red tape, streamline approvals and make the UK the most attractive home for fast-growing tech.
Regulators from Ofgem to the Civil Aviation Authority (CAA) will now trial AI systems, backed by £2.7m in funding, with a “regulatory hackathon” to follow.
It was a message designed to reassure business: innovation first, safety secured through ‘smarter’ regulation.
But beyond the warm words and pilot projects, the UK’s AI policy appears mired in a deeper problem.
Ministers promised robust, binding regulation of the most advanced systems. But today, a year into Labour’s government, there is still no AI bill in sight.
A year of drift
Labour’s manifesto pledged legislation to bind the handful of firms racing to build frontier AI models.
Peter Kyle himself warned at the Munich Security Conference that “losing oversight and control of advanced AI systems…would be catastrophic”.
Yet despite repeated commitments, made both in the King’s Speech and at London Tech Week in June, the government has yet to publish even a consultation.
For campaigners, this is more than a bureaucratic delay. Andrea Miotti, chief executive of ControlAI, said: “Nobel Prize winners, top scientists and even AI chief executives warned that AI poses an extinction risk to humanity.”
“We’re now over a year into this government, and there is still no bill in sight, while AI companies are rushing to build superintelligence. Every delay puts us all at risk.”
Steven Adler, a former OpenAI researcher turned whistleblower, has warned publicly of the “internal pressures” inside labs that work against clear discussion of dangers.
“I don’t think we’re ready today. I don’t think we’re even close”, he told a ControlAI podcast, echoing OpenAI co-founder Sam Altman’s stark remark that in the worst case “AI might mean lights out for us all”.
Public appetite for oversight
If ministers are hesitant, the public seems less so. Recent YouGov polling shows overwhelming demand for regulation, with nearly four in five Brits supporting the creation of a UK AI regulator.
Overall, 96 per cent want audits of powerful systems, 90 per cent back pre-approval before frontier models are trained, and 95 per cent support powers to shut down unsafe AI.
Meanwhile, only nine per cent of the public seem to trust tech executives to act in the national interest on safety.
The AI Safety Institute (AISI), launched under the previous Conservative government, garnered similar support, with three-quarters of the public backing statutory powers for the body. But, at present, the institute has no such authority.
Kyle’s Mansion House pitch reflects a familiar tension: Britain’s eagerness to brand itself as a hub for innovation, set against its reluctance to take on the politically thorny task of legislating.
As Ben Bilsland of RSM UK argued: “Streamlining approvals is welcome, but there’s danger of overselling what AI can deliver. Regulators need the resources and independence to use these tools responsibly”.
Business surveys have shown optimism: Barclays found 62 per cent of executives now see the UK as a more attractive base than Europe.
Yet regulatory uncertainty appears to be one of the biggest drags on growth.