FCA doubles down on AI testing over regulation
The UK’s financial regulator has doubled down on its decision not to draft new AI-specific laws, insisting that the best way to manage the tech’s momentous rise is through testing and collaboration rather than rushing to legislate.
Speaking at the City & Financial Global AI regulation summit, Charlotte Clark CBE, the Financial Conduct Authority’s director of cross-cutting policy and strategy, said the FCA’s goal was to “enable safe and responsible adoption of AI in financial services”.
She acknowledged the technology’s potential to boost growth, yet warned that innovation could not come at the expense of public trust.
“Our approach is principles-based and outcomes-focused,” Clark said. “That shift allows us to be more agile and not have to rewrite rules every time a new technology emerges.”
The FCA’s stance rests on existing frameworks, including the Consumer Duty and the Senior Managers Regime, which it argues already provide the structure needed to govern AI use across the sector.
“We will continue to rely on these frameworks,” she added. “We want to avoid introducing additional legislation.”
Principles over prescriptions
The stance distinguishes the FCA from regulators in Brussels and Washington.
While the EU’s AI Act sets out detailed risk classifications for AI systems and the US has begun issuing sector-specific guidance through agencies like the SEC, the FCA’s post-Brexit approach favours flexibility instead.
Clark said that moving away from the EU’s rulebook had given the UK an opportunity to “design what works for the UK” and focus on outcomes rather than rigid compliance.
That approach, she argued, helps balance regulatory certainty with the freedom to innovate.
“Firms want certainty,” Clark added, “but what matters most is that they own the outcomes, and the impact on the consumer.”
To strike that balance, the FCA is investing heavily in so-called “safe spaces to experiment”.
The regulator has expanded its sandbox model, creating controlled environments where firms can test and validate AI systems under close supervision.
“Firms have told us they need environments like that to test and validate AI solutions,” Clark said. “Creating those spaces helps them develop and validate cutting-edge tools faster, and it helps us understand the practical challenges they face.”
The FCA’s ‘Supercharged Sandbox’ offers early-stage firms access to data, computing power and regulatory guidance, while its ‘AI Live Testing’ scheme allows more mature systems to be trialled in real-world conditions.
The first cohorts are under way, with insights expected to feed directly into policy work with the Bank of England (BoE) and the Information Commissioner’s Office (ICO).
Building trust
The ICO’s message at the summit echoed the FCA’s emphasis on collaboration.
William Malcolm, the ICO’s executive director, said privacy and innovation required careful alignment, rather than opposition.
“Privacy and innovation aren’t in tension, they only look that way when we forget the basics,” he told the audience.
Malcolm said the ICO’s AI strategy was designed to maintain public confidence while allowing companies to innovate responsibly.
The regulator’s new innovation advice service promises responses to businesses within a fortnight, while its own sandbox offers a fast route for firms to test complex models with regulatory oversight.
“The only way we’ll achieve scale is by cooperating in these sandboxes and engaging with the hard problems,” Malcolm said.
The government is also attempting to accelerate that collaboration through its planned ‘AI Growth Labs’, which are designed to bring together regulators, researchers and industry leaders to speed up responsible AI development.
With Brussels pressing ahead with the AI Act and Washington’s agencies taking a more interventionist stance, the UK’s light-touch, agile model is both a selling point and a test.
For now, the UK’s bet is that agility, not new legislation, will keep its AI ecosystem ahead.
Clark was clear that the FCA’s role was not to slow innovation but to guide it safely. “We must keep learning and adapting,” she said. “If you think there’s more we can do, tell us. We’re open to ideas.”