Financial services are not prepared for AI risks
The Bank of England, the FCA and the Treasury must be proactive in protecting the system against this rapidly developing technology’s potential worst-case scenarios. If not, they are exposing the stability of the UK’s financial system to significant risks, says Meg Hillier
It feels like debate about the AI revolution has reached fever pitch in recent years. Although many would argue that AI already takes forms we use without realising, we’re told constantly by the government and the richest people in the world that the newer forms of the technology will soon have a transformative effect on society.
Our willingness to constantly innovate and pursue technological advancement in this country is one reason we have always punched above our weight on the global stage. And the City plays a hugely important role in that. Financial firms need to be allowed to innovate but there is an important question which must be considered simultaneously: do we understand the risks as well as the potential benefits of this much-celebrated imminent AI revolution?
The Treasury Select Committee, which I chair, decided to launch an inquiry into the use of AI in financial services for three key reasons. We wanted to understand who was using it, what they were using it for and, crucially, what guardrails are already in place if a bug in the system leads to an AI-triggered failure.
We found that the UK’s financial services sector has been one of the fastest to embrace AI, with three-quarters of firms now using it in some form. Insurance companies and banks, for example, have integrated the technology into work such as carrying out credit checks and processing insurance claims. This is important.
Financial businesses have moved past the point of using AI simply to carry out administrative tasks more quickly. The technology is now being used to analyse data and lead people towards financial decisions. It would be logical to expect this to expand as understanding grows, with firms of all sizes keen to see how new AI tools can benefit them.
It’s widely known that AI tools can hallucinate, which presents new vulnerabilities that could lead businesses into choppy waters. And taking a step back, as AI providers become more embedded in the financial network, a widespread outage could trigger a systemic failure with far-reaching consequences.
Do we have the right guardrails?
Which takes me to the next point we wanted to understand. Do we have the right guardrails in place to protect us against a major failure? I’m afraid, worryingly, the conclusion we came to in our report is that we don’t. I do not feel confident that our financial system is prepared for a major AI-related incident. This needs to be addressed.
My Committee has made some recommendations on how we can build more resilience into the system. For a start, the Bank of England and FCA should be running AI-specific stress tests. This new technology presents new potential vulnerabilities, and it’s critical we understand whether firms can withstand a widespread AI outage.
The regulator can also certainly do more for businesses, in our view, by sharing clear guidance on how the use of AI interacts with existing consumer protection rules. It should also pin down clearer principles on accountability, so businesses know where the buck stops within their organisations if things go wrong with this technology.
Late last year, we saw a major outage in Amazon Web Services’ systems temporarily halt parts of major organisations, including HMRC and Lloyds Banking Group. And it’s very common for other IT failures to cause outages at individual banks. This really brings home how fragile our economic system is and how reliant it has become on a handful of tech companies.
The government already has the powers to better protect us against outages, via the Critical Third Parties Regime, but is dragging its feet. A year on, we have yet to see a single company designated under the scheme, and this needs to change. Doing so will give the Bank of England and FCA much-needed extra powers over the tech firms that have become embedded in our financial infrastructure.
AI is here to stay. Technological progress opens doors, and our fantastic financial services sector is right to look at how it can be harnessed to give the City an edge. But the Bank of England, the FCA and the Treasury must be proactive in protecting the system against this rapidly developing technology’s potential worst-case scenarios. If not, they are exposing the stability of the UK’s financial system to significant risks.
Dame Meg Hillier is chair of the Treasury Select Committee