AI regulation needs to be smarter in the UK, urges DeepMind boss

Demis Hassabis, chief executive of Google DeepMind, has called on UK policymakers and business heavyweights to embrace the “smart regulation” of AI, warning that failure to do so could lead to unintended harm or a loss of public trust.
Speaking at London’s inaugural SXSW event on Monday, he said: “AI is the most important technology humanity is working on. We should be making sure we do it properly – in a way that’s safe, that gets public buy-in, and that unlocks economic value”.
His comments come at a time of mounting pressure on the UK government to address gaps in its regulatory regime for the technology, with business leaders and campaigners alike calling for clear rules to guide both innovation and implementation.
“Don’t just move fast and break things”
In his talk, Hassabis argued against the Silicon Valley ethos of “moving fast and breaking things”, warning that AI demands a fundamentally different approach.
“I’d rather we didn’t move fast and break things”, he said, “especially the breaking things part”.
He continued: “For something this fundamental, it is important to try and have as much foresight ahead of time as you can.”
He added that AI’s capacity for “immense positive impact” makes it all the more important to apply ethics and caution from the outset.
At the same event, the Nobel laureate noted the need for “smart regulation – not regulation for the sake of it, but the kind that helps build trust and ensures the benefits of AI are broadly distributed.”
Government under pressure over AI rules
Hassabis’s warning follows a recent report on the UK’s approach to new biometric technologies, which warned of “significant gaps and fragmentation across biometrics governance”.
The research, conducted by the Ada Lovelace Institute, said the country risked becoming a “wild west for facial recognition” if the right regulation is not put in place.
“The highly fragmented nature of biometrics governance makes it hard to know if police use of [facial recognition technology] is lawful”, the report said.
Meanwhile, researchers highlighted the technology’s growing and largely unchecked use by police and retailers in the UK, often with little oversight or transparency.
Nearly 5m faces were scanned by the police in 2024, with over 600 arrests made, according to figures compiled by Liberty Investigates.
The technology lacks “legal protections”, argued Sarah Simms of Privacy International. “It does require specific safeguards due to the nature of how the technology operates and the implications for people’s human rights”.
Industry calls for coherent oversight
Amid calls for reform, MPs and peers have also stepped up the pressure.
Conservative peer Lord Chris Holmes has promoted a private members’ bill to establish a new AI authority and codify principles around safety, transparency, and accountability.
“There have been examples of [the adoption of AI] where it’s been, if you will, ‘rush and grab’, and you’ve seen some very negative and unfortunate results”, he said at a recent conference.
He added: “The chances of having that clarity and certainty… would be extremely unlikely”.
Lord Tim Clement-Jones, a longtime advocate for digital rights, also said that algorithmic decision-making in areas such as social welfare and immigration continues to operate in a “transparency vacuum”.
He added that those affected by such decisions have little recourse.
Government signals positive shift
The UK government has so far opted for a “principles-based” regulatory model, relying on sector-specific regulators to apply rules around artificial intelligence.
Yet the King’s Speech last July signalled a positive shift, with ministers pledging to legislate to impose binding obligations on developers of AI systems.
The forthcoming Digital Information and Smart Data Bill is expected to include reforms to data and technology law, while the AI Safety Institute has been given greater authority to evaluate risks and enforce compliance.
Still, many in the industry say the UK must act faster to keep up with global competitors.
“We’re in a situation where we’ve got analogue laws in a digital age”, said Charlie Welton of Liberty.