Why we need to stop countries developing killer military robots before it’s too late
The prospect of human-killing robots walking among us on our streets is a scary one. While it was once firmly in the realm of science fiction, it is now becoming a distinct possibility.
It's no wonder, then, that thousands of scientists and artificial intelligence researchers signed an open letter in July this year calling for them to be banned. Lethal autonomous weapons systems, as these killer robots are formally known, are being developed and tested by several hi-tech nations, including the US, UK, China, Russia, Israel and the Republic of Korea. They look like futuristic fighter jets, tanks, ships and submarines, and they can be sent to find targets and kill them without human supervision.
But this is a bad idea. No one can guarantee that these weapons will comply with the international laws governing conflict, and there are far too many circumstances in which a machine would not be able to tell the difference between a civilian and a combatant, especially in the fog of war. Plus, even if they did have perfect targeting abilities, would we really want to delegate the decision of whom to kill to machines?
It would be disastrous to hand the human decision about the proportionate use of force over to a robot. How many innocent lives is it worth risking to find and kill one combatant? That is a question that exercises the best judgment of a very experienced commander.
So why exactly are countries developing these forms of artificial intelligence? Those who promote robot weapons argue that armed conflict is becoming too fast and too complex for humans. They say militaries could multiply their forces by releasing swarms of these robots from land, sea and air.
Some also believe robots could save many of our soldiers' lives, but that is a short-sighted view. In the past, long-range guns and missiles kept our young fighters out of harm's way, but once everyone had them there was simply more death and destruction than before. It will be no different with autonomous robot weapons.
We must consider global security. Robot weapons would give a nation only a very short-term military edge at best, because an arms race would ensue and enemies would soon catch up. The consequences of mass proliferation of robot weapons in their many forms are unimaginable.
It would not be possible to adequately defend our populations from swarms of autonomous hypersonic weapons. When opposing robot swarms meet, nobody knows what will happen or what the cost will be in civilian lives. High-speed wars triggered accidentally would have a devastating impact and might be impossible to stop.
Then there is the problem of rogue nations and non-state actors acquiring robot weapons that could carry out programmed orders coldly and without question. Dictators would no longer have to face the problems of getting their armies to fire on their own people. Only a bunch of backroom techies would be required to program the machines. This has to be nipped in the bud, and fortunately it can be.
The Campaign to Stop Killer Robots is a determined international coalition of 56 non-governmental organisations that has driven intense discussion at the United Nations over the last two years.
They are calling for a legally binding international treaty banning the development, production and use of autonomous weapons systems. We need to stigmatise these weapons and prevent their legal export to powerful dictators and rogue groups. This is the only way to prevent an arms race and stem the tide of automated warfare.
The campaign wants to ban robot weapons before they are used and before billions of dollars have been spent on their manufacture. Otherwise it may be too late.