UK backs global AI safety plan with AWS and Anthropic
The UK has launched a new global research coalition aimed at tackling one of the most pressing but least understood problems in artificial intelligence – ensuring that advanced AI systems behave in ways aligned with human values.
Backed by £15m in funding and a wide-ranging group of partners, the so-called ‘Alignment Project’ was unveiled on Wednesday by the government’s AI Security Institute.
Supporters claim the initiative will help the UK play a leading role in shaping international AI standards and technical controls.
But some researchers remain sceptical over how far voluntary coalitions can go in keeping pace with rapidly advancing technologies – especially when access to the most powerful systems is still largely in private hands.
Growing urgency
The launch comes amid mounting global anxiety over ‘frontier’ AI models – powerful systems capable of tasks ranging from advanced code generation to autonomous decision-making.
UK tech secretary Peter Kyle positioned alignment research as critical national infrastructure.
“AI alignment is about making sure systems behave as we want them to”, he said. “That’s central to protecting our national security and unlocking the benefits of AI.”
But the UK, like other nations, is still in the early stages of developing meaningful tools to monitor or constrain such systems.
The AI Security Institute’s own tests, conducted on models from OpenAI, Google DeepMind and Anthropic, have been limited by a lack of direct access to model internals, such as weights and training data, and offer no formal certification of safety.
There is therefore a risk that the government is overstating its level of access: these are voluntary arrangements rather than hard oversight.
Private partnerships
The new alignment initiative will offer grants of up to £1m to academic and non-profit research teams, with additional computing resources provided by Amazon Web Services and Anthropic.
Schmidt Sciences and UKRI are also backing the project, while an international advisory board, including Yoshua Bengio and Shafi Goldwasser, has been assembled to help shape its direction.
Yet the heavy reliance on corporate partners raises further questions about independence.
The world’s most powerful AI systems remain proprietary, and major labs have resisted calls for full transparency. While officials insist the UK is working closely with leading companies, critics argue the current framework lacks teeth.
Ian Hogarth, chair of the AI Security Institute, acknowledged the complexity. “Alignment research is a global public good”, he said. “We need deep international collaboration to ensure AI systems are trustworthy.”
But even he has previously warned of the risks of underestimating the pace of AI development, saying that regulators must “move faster or risk irrelevance”.
Leadership ambitions
The UK has positioned itself as a leader in global AI safety, hosting the world’s first AI Safety Summit in 2023 and committing over £2.5bn to compute infrastructure, including new supercomputers in Bristol and Edinburgh.
The Alignment Project is the latest in a string of government-led announcements aimed at signalling both seriousness and capability.
But previous calls for ‘sovereign AI infrastructure’ from firms like Nvidia and UK-based Neos Networks have highlighted how far Britain still lags behind the US and China in terms of raw capacity and commercial investment.
And with the government’s forthcoming AI Infrastructure Roadmap still unpublished, industry figures have warned that policy is struggling to keep pace with investment.
“The private money is already moving”, Neos chief executive Lee Myall recently told City AM. “We’ve got good statements of intent. But we need more.”