Financial institutions should consider appointing AI officers
Financial Services Bill: Time to Act
Crypto AM Parliamentary Special, with Lord Holmes of Richmond MBE
Part Two of Five
I have pleasure in penning Day Two’s thoughts on the Financial Services Bill, currently making its way through the House of Lords.
Today, I want to set out my hopes for the bill in relation to Artificial Intelligence (AI).
AI is already pervasive across so many aspects of our lives, including financial services. AI will be key to how the financial industry operates and delivers services, and to its ability to compete and thrive.
According to consulting firm Accenture, the contribution of AI and augmented technologies to the bottom line for financial services companies around the world is estimated at $140 billion in productivity gains and cost savings by 2025.
Contrast that with surveys which estimate that around 0.8 per cent of global GDP is lost to fraud in financial services. AI has a real, and real-time, role to play here. So, potentially, plus, plus.
Much more than fraud detection, it is clear that there will also be increasing use of AI in the customer experience (bots and robo-chatterers), in the deal process, not least in terms of due diligence, and, more generally, in addressing the perennial problem of false positives.
I have put down two amendments, probing the government’s position on the deployment of AI across this sector.
First, I want to explore the possibilities of an AI officer within financial institutions. AI is already embedded in financial services across the piece: retail, investment, wholesale and beyond. If it is accepted that an anti-money laundering officer (AMLO) is necessary, then I would suggest that an artificial intelligence officer (AIO) is at least as necessary.
Responsibility
The AIO would have responsibility to ensure the use of AI is:
(a) safe,
(b) fair,
(c) unbiased, and
(d) non-discriminatory.
AI is still almost entirely a product of the data it is fed. If that data has bias baked in, then the resultant customer experience, investment advice or credit refusal will simply reflect that bias, or, indeed, accentuate or multiply it.
We know that so many datasets are biased. Numerous examples have, horrifically, brought this home to any doubters.
Consider just one: the US soap dispenser, trained only on white hands, which as a consequence refused soap to any hands that did not match this learned material. Truly shocking, but it happened.
Though AI can learn – and improve – it still can’t make judgment calls. Humans can take individual circumstances into account when making decisions, something that AI might never be able to do. We must not only encourage ethical approaches to the auditing of datasets, but also ensure greater diversity in the training and recruitment of AI specialists.
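To make "auditing of datasets" concrete, here is a minimal sketch of one of the simplest checks an auditor might run: comparing approval rates across groups in historical lending data and flagging large gaps. The group names, records and the 80 per cent threshold below are illustrative assumptions, not figures from any real dataset.

```python
# A minimal sketch of one basic dataset audit: comparing approval rates across
# demographic groups in historical lending records and flagging large gaps.
# All group names, records and thresholds here are hypothetical.
from collections import defaultdict

# Hypothetical historical lending records: (group label, was the application approved?)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in records:
    totals[group] += 1
    approvals[group] += int(approved)

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates by group:", rates)

# A common rule of thumb: flag any group whose rate falls below 80% of the highest rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact: {group} approved at {rate:.0%} vs best {best:.0%}")
```

Real audits would of course go far further, but even a check this simple makes visible the kind of baked-in bias described above.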
My second amendment builds on this, going to the essential questions around ethical use of artificial intelligence by companies in the financial services sector.
Worth considering
To this end, I think it is worth considering an increased role for the Centre for Data Ethics and Innovation (CDEI). At present, the CDEI sits within the Department for Digital, Culture, Media and Sport (DCMS).
With the critical importance of both ethics and innovation, my amendment aims to probe whether the CDEI should be established as fully independent, perhaps with regulatory powers. This might well be a positive step forward.
In the introduction to our 2018 Lords Select Committee report on AI, we noted that the UK is in a strong position to be a world leader in the development of AI. This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come.
The best way to do this is to put ethics at the centre of AI’s development and use. True then, true today.
Our Chairman, Lord Clement-Jones, said: “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”
It is critical that everyone understands that AI, like all technologies and all tools, is not without its risks, far from it. An ethical approach, though, will enable the public debate and discussion, and the best means of securing public engagement, trust and confidence in its uses and benefits.
Five-point code
Our committee suggested a five-point code which could get us towards this ethical use:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
In financial services, concepts such as intelligibility, accountability, transparency and fairness are utterly essential. Your mortgage, your money, your access to credit – your entire financial self hangs in this balance.
We must ensure these principles apply to the technology we deploy within financial services as well. A potentially helpful concept is that of algorithmic accountability, which rests on the core principle that the operators of an algorithm should put in place sufficient controls to make sure it performs as expected.
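As an illustration of what such a control could look like in practice, here is a minimal sketch, assuming entirely hypothetical thresholds and decisions, of a check that a deployed model’s approval rate stays within the band its operators expected, and raises an alert if it drifts.

```python
# A minimal sketch of one operator control: checking that a deployed model's
# approval rate stays within the band agreed at sign-off, alerting if it drifts.
# The band and the decisions below are hypothetical.

EXPECTED_APPROVAL_RATE = (0.55, 0.70)  # assumed band agreed when the model was approved

def check_approval_rate(decisions: list[bool]) -> None:
    """Alert if the observed approval rate falls outside the expected band."""
    rate = sum(decisions) / len(decisions)
    low, high = EXPECTED_APPROVAL_RATE
    if low <= rate <= high:
        print(f"OK: approval rate {rate:.0%} within expected band")
    else:
        print(f"ALERT: approval rate {rate:.0%} outside expected band {low:.0%}-{high:.0%}")

# Example: a week of hypothetical automated decisions, 40 approvals out of 100.
check_approval_rate([True] * 40 + [False] * 60)
```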
Explainable AI (XAI) is a broad term which covers systems and tools to increase the transparency of the AI decision-making process to humans. The major benefit of this approach is that it provides insight into the data, variables and decision points used to make a recommendation.
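To give a flavour of that insight, the sketch below uses purely invented weights and a single hypothetical applicant to show how a simple credit-scoring model’s decision can be broken down into per-variable contributions; it is not any firm’s real model or a full XAI toolkit, just an illustration of the kind of explanation such tools aim to provide.

```python
# A minimal sketch of an explanation for a single credit decision: breaking a
# simple scoring model's output into per-variable contributions.
# The weights, bias and applicant below are invented for illustration only.
import math

weights = {"income": 0.8, "existing_debt": -1.2, "years_at_address": 0.3}
bias = -0.1

# One hypothetical applicant, with features already scaled to comparable ranges.
applicant = {"income": 0.6, "existing_debt": 0.9, "years_at_address": 0.2}

# Contribution of each variable to the score, then the overall approval probability.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = bias + sum(contributions.values())
probability = 1 / (1 + math.exp(-score))

print(f"Approval probability: {probability:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "towards approval" if value > 0 else "towards refusal"
    print(f"  {name}: {value:+.2f} ({direction})")
```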
Significant increase
In the past year, we have seen a significant increase in the adoption of XAI, with Google, Microsoft and other large technology players creating such systems.
However, challenges remain. Does ‘explainability’ compromise accuracy? Is a firm’s IP compromised if the ‘push’ of XAI adoption is overzealous?
I will be putting forward my AI amendments during the debate on Wednesday in the hope that these questions and issues are fully explored and considered. AI is such a powerful tool, not least in financial services, but it is the responsibility of us all to ensure that its power is deployed for economic, social and psychological benefits and the overall public good.
Missed part one? Read it here.
___________________________________
Lord Chris Holmes is Vice Chair of the Parliamentary Groups on: FinTech, AI, Blockchain and 4IR. He has co-authored Lords Select Committee reports on: Digital Skills, Social Mobility, Financial Inclusion, AI, Intergenerational Fairness and, last year, Democracy and Digital Technologies. He also authored a report on ‘Distributed Ledger Technology for public good: leadership, collaboration, innovation.’
Further detail about amendments to the Financial Services Bill can be found on Chris’s Blog: https://lordchrisholmes.com/
Website: https://Chrisholmes.co.uk
LinkedIn: https://www.linkedin.com/in/lord-chris-holmes/
Twitter: @lordchrisholmes