The future, or a fad? Shameek Kundu, Head of Financial Services at TruEra, looks at whether AI’s credibility gap could hold back the banking industry.
Artificial Intelligence (AI) is widely seen as key to the banking industry’s transformation. Industry surveys, including one from the Bank of England, suggest that two in every three financial institutions have adopted AI in some form. In most banks, neither the 2020 budget restrictions nor the failure of some AI systems during COVID-19 appear to have slowed down AI-related recruitment or technology spending.
But is the reality of AI adoption in banking living up to the hype? Are banks genuinely transforming themselves using AI? Talk to bank data science teams, and a nuanced story emerges. AI has graduated from its ‘shiny new toy’ status into real-life use cases. However, most claims of new AI-enabled business models are exaggerated, either in their ‘newness’ or in the role that AI plays in enabling them. In most banks, AI adoption has led to modest improvements in efficiency and risk management, but has not yet been transformational in any meaningful sense.
For example, while many banks have used AI systems to improve the monitoring of loan performance or speed up loan application processing, actual lending decisions continue to be largely based on traditional scorecard-based lending models, instead of AI-based ones. Similarly, in financial crime, AI applications have been successfully used to automate the process of investigating ‘false positive’ alerts generated by traditional anti-money laundering and sanction monitoring systems, but not yet to replace those systems.
This apparent gap between the hype and investment around AI and its actual impact on the ground should worry the industry, for two reasons. First, AI was seen as the answer to some of the banking industry’s biggest problems – improving financial inclusion, making the customer experience less painful, and making compliance less onerous and more effective. Those problems are not going away, or becoming any less critical to solve. If AI were to fall short of its promise, the industry would still need to find a way of addressing them.
Second, while most traditional banks may be struggling to ensure successful adoption, their fintech and big-tech competitors are often starting at the other end, embracing alternative data and algorithmic decision-making by default. Many of them lack the risk management DNA of traditional banks. As a result, we may see ‘wild west’ behaviour on the fringes of a regulated industry, as with the UK payday loan scandal or the complex derivatives that triggered the 2008 financial crisis.
Several regulators, including those in Singapore, Hong Kong, the European Union, the United Kingdom and the United States, have recognised the importance of encouraging responsibility and innovation in AI. However, it will ultimately be up to banks – and their tech-savvy competitors – to overcome AI’s credibility gap.
A small number of traditional and challenger banks are showing the way, by getting four things right. First, they are explicitly recognising the challenges posed by widespread adoption of algorithmic decision-making – such as a lack of transparency or a tendency to accentuate unfair bias. Second, they are investing significant effort in demystifying AI, by making senior management, frontline staff and customers more data- and AI-literate. Not everyone has to become a data scientist, but everyone must learn enough to ask the right questions. Third, they are enhancing existing risk frameworks to create safe guardrails for AI adoption – for example, by including explicit steps to understand and monitor AI quality. Finally, they have recognised that attempting to scale AI adoption purely on the back of good intent and manual processes is wishful thinking. They are introducing appropriate technology support to make the process of training, testing, deploying and monitoring models “less art and more science”.
AI was seen as a critical part of incumbent banks’ response to threats from technology-focused challengers. Without urgent action to enhance AI’s trustworthiness, much of the industry faces its own ‘AI winter’.