How to stop worrying about being run over by a robot car
Depending on who you ask, we are either very close or very far from having millions of autonomous vehicles on the roads.
Few question, though, that we will eventually see widespread take-up; the opportunities for improved road safety are unparalleled.
But driverless cars also offer a great exemplar of the wider automation debate, and the same question keeps cropping up: how will a robot car act in a crash?
One reason this captures the public imagination is that it offers a real-life application of the ethical thought experiment known as the “trolley problem”. This decades-old philosophy exercise forces individuals to choose whom to save in the event of a crash.
The Massachusetts Institute of Technology has been using an updated version of this problem to run a study called “Moral Machine”, which asks participants to choose between different driverless car crash scenarios. It has found enormous differences between countries in whom people choose to save – even when the choice is only hypothetical.
Debates like this are revealing, even fun, and tell us a lot about people’s divergent moral and ethical views. But where automation is concerned they are a red herring – and they may delay adoption.
It is right that we hold machines to far higher safety standards than humans, and more imperative still to demand that developers of products like autonomous vehicles can meet those standards before allowing widespread adoption. But it’s easy to get tied up in ethical thought experiments when the reality is that no one is capable of deciding who “should” be hit by a car.
Part of the problem is that – being human – we tend to think of automation as fundamentally android. But automated cars are not “robot drivers”.
A useful comparison might be image-recognition software. Unlike humans, an algorithm identifying pictures of dogs, say, is not comparing them with mentally stored ideas of “dog-ness”.
Instead, it evaluates patterns, edges, light and texture in ways that humans do not. Its decision-making process is entirely different, and so are the errors it is likely to make.
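To see the difference concretely, here is a minimal sketch in Python (using a made-up random “image” and a classic hand-written Sobel edge filter, not any real trained model) of the kind of low-level computation image-recognition software performs: a small kernel slides across a grid of pixel values and produces numbers that measure edges. That grid of numbers is all the algorithm “sees”; nothing in it resembles a stored concept of “dog-ness”.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image and record its response at
    each position -- the basic operation behind learned visual features."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic Sobel kernel: it responds strongly to vertical edges.
sobel_vertical = np.array([[-1.0, 0.0, 1.0],
                           [-2.0, 0.0, 2.0],
                           [-1.0, 0.0, 1.0]])

# A random 8x8 grayscale "image" stands in for a photograph.
rng = np.random.default_rng(0)
image = rng.random((8, 8))

edge_responses = convolve2d(image, sobel_vertical)
print(edge_responses.round(2))  # a grid of numbers: all the software "sees"
```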
Human drivers are vastly more error-prone than machines, but are equipped to anticipate problems; any human would recognise that an unsupervised toddler or a distracted phone user should prompt extra care.
One of the many benefits of automation is that machines are free of human cognitive biases, but equally they cannot make moral and ethical judgements. Just as a machine can’t be bigoted or selfish, neither can it be brave, altruistic, or merciful.
What this means in practice is that these sorts of ethical questions become moot. But the issues behind them do not.
So how can we deliver public safety in the machine age?
The answer lies in changing the decision-making framework we use. This is where the assurance process, central to the audit and assurance profession, can offer solutions.
The key questions we need to ask are: what are we trying to achieve? What can we control? Who needs to take responsibility?
In the case of driverless cars, we are trying to reduce accidents. We cannot adequately control the actions of individual vehicles and – as the recent legal case in Arizona demonstrates – there are unresolved questions around liability. Nor is it practical to focus on actions at the individual level if we anticipate widespread adoption.
What we can control is the environment. Instead of trying to fine-tune vehicles, we can design urban public space to minimise and mitigate the risks that robot, rather than human, drivers are likely to cause and encounter, and then permit fully driverless cars only within suitable spaces.
This also answers the question of accountability. It is unfair and infeasible to hold anyone culpable for things they cannot control, but once we have established who is responsible for the environment, that problem is resolved.
In time, this will probably mean that, in order to fully adopt driverless technology, we need to radically redesign urban environments. But the payoffs will be proportionately large.
Once we accept that decisions like this are not technology problems or philosophical conundrums but broad social and business issues, using an assurance framework makes much more sense. Assurance is all about making measured judgements in complex situations.
It is also about establishing controls, frameworks, and chains of accountability. This approach could prove applicable across many spheres of automation – what is needed is to take a step back and ask the right questions.
The future may well mean a new reliance on robots in nearly every sphere of life. For that to happen, we must be sure that we can trust them. Using assurance frameworks, we have the opportunity to build that trust in from the very beginning, but only if we start from the right place.