AI risks creating a cyber crunch for the UK’s national infrastructure

The UK’s cyber watchdog has warned that artificial intelligence (AI) could significantly increase the country’s exposure to cyber attacks, particularly against its critical infrastructure, unless urgent steps are taken.
In a recent report, the National Cyber Security Centre (NCSC) argued that AI is already transforming the threat landscape, and that by 2027, it could give malicious actors the tools to exploit system vulnerabilities faster than ever before.
The time between a vulnerability being discovered and weaponised has already narrowed to a matter of days; with AI, that window could shrink to hours.
A digital divide
The NCSC is also warning of an impending digital divide between organisations that can keep pace with AI-powered threats, and those that can’t.
That disparity, the report warned, could heighten overall risk across the country.
“There’s a realistic possibility that AI will make critical systems more vulnerable to attack within the next two or three years,” said Paul Chichester, the NCSC’s director of operations.
“But it also offers an opportunity – if organisations act now to harden their defences and adopt best practice.”
Chichester added: “The threat landscape is evolving rapidly. AI is fundamentally reshaping how threats are delivered. As we move forward, organisations must develop a strategy to manage these risks, or they will fall behind.”
The agency is calling on both private and public sector organisations to strengthen cyber resilience by embedding robust protections into all AI systems and their dependencies.
That includes using the NCSC’s cyber assessment framework, as well as following the government’s recently updated AI cyber security code of practice.
Recent high-profile breaches
The NCSC’s warning follows a string of recent cyber attacks targeting some of the UK’s biggest retailers.
Marks & Spencer, Harrods and Co-op have all been caught up in breaches that compromised customer data and disrupted operations.
Many of these incidents are believed to be linked to ransomware groups using increasingly sophisticated tactics.
“AI can be used as both a weapon and a defence,” said Chichester. “Organisations must realise that inaction today will make them far more vulnerable tomorrow.”
GCHQ has confirmed that the number of ‘nationally significant’ cyber attacks has more than doubled over the past year, underscoring how rapidly the threat landscape is evolving.
The NCSC is reportedly closely monitoring these developments, and working with organisations to mitigate the risks.
AI as national infrastructure
The warning comes as the UK government officially classifies AI as a form of national infrastructure, along with energy, water, and transport.
But as reliance on AI grows, so does energy demand. Data centres and AI model training are placing new strains on an already congested energy grid, prompting concerns over whether the UK’s energy infrastructure can scale quickly enough.
The prospect of AI services becoming mission-critical while energy supply remains unreliable puts both security and operational resilience at risk.
“There’s no question that AI is becoming essential to UK infrastructure,” Chichester said.
“However, as AI becomes more integrated into critical services, the need to ensure a reliable energy supply becomes even more urgent. Cyber security and energy resilience must go hand in hand.”
Earlier this year, PwC flagged a growing divide between firms investing proactively in cyber security and those failing to do so, warning that reactive postures are no longer sustainable in the AI era.
A race against time
While AI may supercharge productivity, it is also accelerating risk. And, as the government pushes to make AI a cornerstone of the UK’s economic future, the clock is ticking on whether organisations, particularly those responsible for essential services, can adapt quickly enough.
“The risks are real, but so are the opportunities,” Chichester concluded. “It’s up to organisations to act decisively to ensure that AI-driven progress doesn’t come at the expense of their security.”