Artificial intelligence could be London’s next big industry, but only if we build the trust necessary to develop the tech, writes Michael Mainelli
Despite AI already pervading so much of our everyday lives – from the facial recognition that unlocks our phones when we go to buy a product, to the banking checks that verify the purchase and, if you’re low on luck, the chatbot you speak to when the parcel doesn’t arrive – public opinion on this puzzling technology remains decidedly mixed.
As the Office for National Statistics found last year, over a third of adults do not think AI can have a positive impact on their lives, with the reasons given ranging from suspicion to irrelevance. A significant proportion of respondents felt they had only limited knowledge of the technology, while the 43 per cent who were more upbeat about this brave new world still thought it brought benefits and risks in equal measure.
Whatever our views, the direction of travel is already, and irreversibly, set: the genie is out of the bottle. In fact, despite only being in the foothills of development, the sector already contributes £3.7bn to the UK economy, with 3,000 AI firms based around our isles, employing some 50,000 people in total.
PwC has predicted that UK GDP could be up to 10.3 per cent higher in 2030 because of AI – the equivalent of an additional £232bn. And that’s before you even consider the near limitless opportunities across the health, education, justice and financial and professional services (FPS) sectors, something our recent report with EY, ‘AI: Accelerating Innovation’, explored further. These are benefits we cannot afford to miss.
If we are to meet that potential, harnessing it in a way that is safe, transparent, accountable and doesn’t reinforce inequalities, then we need to reduce the inhibitions that deter people from interacting with such beneficial technology in the first place. That requires trust that AI is being developed and used in a responsible and ethical manner.
The City of London has form here. It’s an often-forgotten fact that the word “hallmark” – an official stamp certifying purity to confer trust – originates from Goldsmiths’ Hall, one of the Square Mile’s hidden gems. And like our forebears in the guilds of medieval London, we too need to find a way of offering confidence in the quality of a burgeoning product.
The standards set by the International Organization for Standardization (ISO) – an independent, non-governmental international organisation with a membership of 170 national standards bodies – are the modern-day equivalent. There are already three ISO AI standards covering terminology, ethics and AI-related risk management. Building on this work, the 695th Lord Mayor’s Ethical AI Initiative, launched last summer, is promoting community standards to provide trust and ensure good practice.
That involves professional certification: an online ethics course led by the Chartered Institute for Securities and Investment for those working in the FPS sector, and, for technical professionals, an Ethical Build of AI Certification run by the British Computer Society.
In just twelve weeks, over 3,000 professionals from more than 240 organisations in 53 countries have registered, with over 200 graduates. And on 27th March, we’re holding a summit at Mansion House focused on firm-wide certification too. Indeed, given its success, we’re already looking at extending the course to the C-suite and to other professions – such as law, surveying, accounting and pharmaceuticals.
In the Square Mile we have access to incredible talent and unrivalled global connections – just two reasons why Anthropic and OpenAI, two behemoths of the AI world, both opened offices in London last year. Leveraging our convening power and the knowledge pools we have in abundance, we can ensure we’re not passive recipients of AI, but instead lead the international development of standards and help shape its trajectory for years to come.