Don’t leave it up to the EU to decide how we regulate AI
The war of words between Britain and the EU has begun ahead of next month’s trade talks.
But as Britain sets its own course on everything from immigration to fishing, there is one area where the battle for influence is only just kicking off: the future regulation of artificial intelligence.
As AI becomes a part of our everyday lives — from facial recognition software to the use of “black-box” algorithms — the need for regulation has become more apparent. But around the world, there is vigorous disagreement about how to do it.
Last Wednesday, the EU set out its approach in a white paper, proposing regulations on AI in line with “European values, ethics and rules”. It outlined a tough legal regime, including pre-vetting and human oversight, for high-risk AI applications in sectors such as medicine, and a voluntary labelling scheme for the rest.
In contrast, across the Atlantic, Donald Trump’s White House has so far taken a light-touch approach, publishing 10 principles for public bodies designed to ensure that regulation of AI doesn’t “needlessly” get in the way of innovation.
Britain has yet to set out its own approach, and we must not arrive too late to the party. If we do, we may lose the chance to influence rules that will shape our own industry for decades to come.
This matters because AI firms — the growth generators of the future — can choose where to locate and which markets to target, and will do so partly on the basis of the regulations that apply there.
Put simply, the regulation of AI is too important for Britain’s future prosperity to leave it up to the EU or anyone else.
That doesn’t mean a race to the bottom. Regulation is meaningless if it is so lax that it doesn’t prevent harm. But if we get it right, Britain will be able to maintain its position as the technology capital of Europe and set thoughtful standards that guide the rest of the western world.
So what should a British approach to AI regulation look like?
It is tempting for our legislators simply to give legal force to some of the many vague ethical codes currently floating around the industry. But because these codes lack specificity, doing so would result in heavy-handed blanket regulation, which could have a chilling effect on innovation.
Instead, the aim must be to ensure that AI works effectively and safely, while giving companies space to innovate. With that in mind, we have drawn up four principles around which we believe a British approach to AI regulation should be designed.
The first is that regulations should be context-specific. “AI” is not one technology, and it cannot be governed as such. Medical algorithms and recommender algorithms, for example, are both likely to be regulated, but to differing extents, because the stakes differ: the consequences of a diagnostic error are far greater than those of an irrelevant product advert pushed into your social media feed.
Our second principle is that regulation must be precise; it should not be left up to tech companies themselves to interpret.
Fortunately, the latest developments in AI research — including some which we are pioneering at Faculty — allow for analysis of an algorithm’s performance across a range of important dimensions: accuracy (how good is an AI tool at doing its job?); fairness (does it have implicit biases?); privacy (does it leak people’s data?); robustness (does it fail unexpectedly?); and explainability (do we know how it is working?).
Regulators should set out precise thresholds for each of these according to the context in which the AI tool is deployed. For instance, an algorithm which hands out supermarket loyalty points might be measured only on whether it is fair and protects personal data, whereas one making clinical decisions in a hospital would be required to reach better-than-human-average standards in every area.
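To see how such context-specific thresholds might work in practice, consider a minimal sketch in Python. Everything in it (the contexts, the names and the numbers) is hypothetical and purely illustrative, not a proposal for actual regulatory values.

```python
# Illustrative sketch only: scoring an AI tool against context-specific
# regulatory thresholds. All names and numbers below are hypothetical.

from dataclasses import dataclass


@dataclass
class Evaluation:
    """Measured performance of an AI tool on the five dimensions."""
    accuracy: float        # how good is it at its job? (0 to 1)
    fairness: float        # 1 = no measured bias between groups
    privacy: float         # 1 = no personal-data leakage detected
    robustness: float      # 1 = no unexpected failures under stress tests
    explainability: float  # 1 = decisions can be fully accounted for


# Hypothetical thresholds a regulator might set per deployment context:
# loyalty schemes are judged only on fairness and privacy, while clinical
# tools must clear a high bar on every dimension.
THRESHOLDS = {
    "loyalty_scheme": {"fairness": 0.90, "privacy": 0.95},
    "clinical_decision": {
        "accuracy": 0.99, "fairness": 0.99, "privacy": 0.99,
        "robustness": 0.99, "explainability": 0.95,
    },
}


def passes(context: str, result: Evaluation) -> bool:
    """Return True if the tool meets every threshold set for its context."""
    required = THRESHOLDS[context]
    return all(getattr(result, dim) >= bar for dim, bar in required.items())


# A loyalty-points algorithm need only be fair and protect personal data...
print(passes("loyalty_scheme", Evaluation(0.80, 0.93, 0.97, 0.85, 0.40)))
# ...while a clinical tool must meet the high bar on every dimension.
print(passes("clinical_decision", Evaluation(0.995, 0.99, 0.99, 0.992, 0.96)))
```

The point of the sketch is the design choice it embodies: the regulator, not the vendor, decides which dimensions matter in each context and how high the bar sits.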
The third principle is that regulators must balance transparency with trust: publishing their standards openly, while policing compliance in proportion to the risk. For example, they might publish one set of standards for supermarket loyalty programmes and another for radiology algorithms, each subject to a different licensing regime: a light-touch one for supermarkets, and a much tougher inspection regime for hospitals.
Finally, regulators will need to equip themselves with the skills and know-how to design and manage this regime. That means employing not only ethicists and economists but also data scientists and engineers who can look under the bonnet of an AI tool. They will also need the powers to investigate any algorithm’s performance.
These four principles offer the basis for a regulatory regime precise enough to be meaningful, nuanced enough to permit innovation, and robust enough to retain public trust.
We believe they offer a pragmatic guide for the UK to chart its own path and lead the debate about the future of the AI industry.