The EU’s attempt to regulate artificial intelligence could backfire
The “Wild West” of artificial intelligence (AI) needs to be reined in, according to EU industry commissioner Thierry Breton. However, regulating AI generally, rather than regulating the technology’s uses specifically, may very well be putting the cart before the horse.
Late last month, the European Commission revealed its intention to draft regulation around the use of AI, with the aim of preventing misuse of the technology and ensuring that, in Breton’s words, the “individual and fundamental rights that we cherish in Europe are respected”.
The recently published white paper spells out the need to crack down on “high-risk” use cases. The issue, however, is that it arguably focuses too strongly on the technology involved rather than on the actual circumstances in which it is used.
Looking at facial recognition, for example, Dr Michael Veale of University College London argued that the paper “focuses far too heavily on the system in isolation without considering the context… [and] what it’s being used for”.
The problem with regulating the technology itself, as opposed to how it’s used, is that AI is a moving target. Future systems will no doubt work with far more granular information, be based on more complex calculations, and perhaps even be trained in completely different ways.
We simply can’t know what AI will be capable of tomorrow, for better or worse. So regulation designed around today’s technology could be completely irrelevant to future iterations.
The impact of more advanced AI will also be vastly different, which means that focusing on today’s “high-risk” examples, such as facial recognition, overlooks a wide variety of future dangers that cannot be foreseen today.
One way to overcome the issues with overly technical regulation would be to set legal definitions for safe, fair and appropriate uses of AI, instead of attempting to predict all the possible ways that the technology might be misused.
In such a scenario, we could compare AI to a hammer: the use of a hammer as a versatile tool is not regulated, but using it as a weapon is explicitly forbidden. There is little reason similar regulation of AI couldn’t work the same way: making it explicitly illegal to use AI to cause harm of any kind would allow precedents to be set and limits to be agreed on how it should be used.
Regulations centred on the technology itself will inevitably be bypassed, whether maliciously or simply through technological progress, rendering them redundant. Worse, such rules may instead stifle innovation and research. For instance, deliberately using “biased” training data in a research setting to measure its long-term impact would be incredibly valuable, yet even this may be prohibited under the current proposals.
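To make concrete what that kind of research looks like, here is a minimal sketch in Python, assuming scikit-learn and an entirely synthetic dataset of our own invention (none of it comes from the Commission’s proposals): it trains a simple classifier on deliberately skewed data, then measures how that skew plays out for each demographic group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, group):
    """Hypothetical data where the feature-outcome link differs by group."""
    X = rng.normal(size=(n, 5))
    # Group 0's outcome depends on features 0 and 1; group 1's on features 0 and 2.
    signal = X[:, 0] + (X[:, 1] if group == 0 else X[:, 2])
    y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y, np.full(n, group)

# Deliberately biased sample: group 1 is heavily under-represented.
X0, y0, g0 = make_group(5000, 0)
X1, y1, g1 = make_group(250, 1)

X = np.vstack([X0, X1])
y = np.concatenate([y0, y1])
g = np.concatenate([g0, g1])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.3, random_state=0, stratify=g
)

model = LogisticRegression().fit(X_tr, y_tr)

# Measure the impact of the skew: accuracy per group on held-out data.
for group in (0, 1):
    mask = g_te == group
    print(f"group {group}: n={mask.sum():4d}, accuracy={model.score(X_te[mask], y_te[mask]):.3f}")
```

The model, trained mostly on the over-represented group, performs visibly worse on the under-represented one; quantifying exactly that gap is the kind of research a blanket ban on “biased” data would rule out.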
Regulation, therefore, needs to be prescriptive enough to prohibit the malicious use of AI, yet open enough to promote further innovation and development.
At this critical juncture, when AI could either help or harm us, adopting appropriate, considered and relevant regulation will be the deciding factor.
Regulators of today — and tomorrow — the choice is yours.