Lack of trust in AI tempers Starmer’s tech push
Britain’s AI sector has entered the year with continued momentum, its leading heavyweights having raised over £20bn in private capital over the past year.
According to Startup Coalition’s latest AI index, British AI firms command a combined valuation north of £45bn. Business services accounted for roughly 400 AI startups, attracting £8.3bn, while financial services and healthtech companies have driven much of the recent growth.
The capital, though incomparable to Silicon Valley levels, seems to be there; and political backing finally seems to be there too.
Indeed, a year ago almost to the day, Keir Starmer promised to “mainline AI into the veins” of the UK economy through Matt Clifford’s AI Opportunities Action Plan, which places tech at the forefront of Labour’s growth agenda.
What has been less clear since is whether business confidence is keeping up with adoption.
Hesitation within UK plc
Confidence in AI remains uneven within the private sector, both in the UK and globally.
In the US, Ipsos’ Consumer Tracker found that 31 per cent of respondents cited lack of trust in AI tools “to provide accurate or useful results” as the main reason for not using them more.
While that was the most common bottleneck, another 29 per cent felt they had no need for such tools, while 19 per cent claimed not to see the benefits.
Only seven per cent of respondents cited affordability, and four per cent said they lacked access.
In the UK, a Tony Blair Institute for Global Change paper similarly found that 38 per cent of workers cited lack of trust as a barrier to adoption.
What’s more, 39 per cent said they saw the technology as a risk to the economy, compared with 20 per cent who saw it as an opportunity.
The numbers reveal a widening gap between usage and confidence: despite adoption accelerating at dizzying rates, businesses continue to err on the side of caution.
Veeam chief executive Anand Eswaran and Securiti AI founder Rehan Jalil explained that the same tension is seen within larger companies.
Jalil described a North American bank that publicly announced plans to launch around 100 AI projects as part of a transformation programme.
Despite the announcement, “they could not even turn on one AI project till… they actually had all the controls in place”, he said.
The delay stemmed from questions and concerns around governance: which data the models could access, who was authorised to see outputs, how compliance would be enforced, and how systems would be monitored.
Eswaran added: “The problems they’re facing is not going to be solved by point solutions”.
“The real need of the day… is a platform which brings together security, governance, resilience together in one platform”.
Veeam’s acquisition of Securiti AI, the executives claimed, aims to address that demand, combining the former’s data backup and recovery expertise with the latter’s security and privacy systems.
Cybersecurity vs AI use
Before the rise of large language models (LLMs) like ChatGPT, business cybersecurity relied on structured databases and ‘one size fits all’ controls.
The dominant threats were ransomware and data extortion, used to lock and monetise critical systems and sensitive data.
AI, by contrast, is designed to read across entire data estates, including emails, contracts, PDFs and other material that used to sit outside core systems.
“What AI has done, is open up the 90 per cent of the data estate which used to be dark”, Eswaran said.
Jalil described AI as a “brain” that cannot operate without access to such data. Within businesses, however, access is rarely uniform.
“Even within a company, two separate people may not be able to see the same data”, he said. “CFO’s information is not visible to your customer support person”.
Scaling AI therefore requires what he dubbed “the labyrinth… of controls, visibility and controls that you need and then automate.”
These issues are particularly acute in financial services, where AI adoption is accelerating at the same time as regulatory scrutiny is tightening.
Earlier this month, a House of Commons Treasury Committee report warned that UK regulators, including the FCA and the Bank of England, were relying too heavily on existing frameworks as AI becomes more embedded in banking and insurance.
The committee claimed that waiting to see how AI develops could risk exposing consumers and the financial system to harm, and called for clearer guidance on accountability when AI systems cause damage.
Growth and guardrails
None of this, however, appears to have slowed investment.
Data centre applications hit record highs in 2025, with more than 60 new proposals filed across England and Wales as investors competed for powered land to support AI workloads.
However, grid connection delays of eight to ten years in parts of London and the South East show that infrastructure is also a constraint.
Polling shows that frequent AI users are more positive about the technology, with only 26 per cent of weekly users in the Ipsos data seeing AI as a societal risk, compared with 56 per cent of non-users.
And as automation becomes central to business growth in the UK and globally, being able to trace decisions and recover from cyber errors is increasingly essential.
“There is no AI without data security,” Eswaran added, “and there is no trust in AI without data resilience”.