They're terrifying machines – capable of operating without human control and built with a vicious streak that could steal the lives of thousands of humans in a split second.
As artificial intelligence develops and improves, who's to say the killer robots we put together with our own hands won't one day cause serious devastation to the human race and maybe even turn against us?
No one, and that's why the United Nations (UN) is holding a conference in Geneva this week to determine the future role of robots in warfare. For a few days, famine, inequality and disease are taking a back seat to make room for this issue, which will be discussed by experts from the Foreign Office and the Ministry of Defence.
Questions to be tackled include whether the robots should be banned, and if not, the extent to which human control is necessary. “You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control,” Michael Møller, the top UN official involved, said to those taking part.
There are already some unmanned combat systems in use, such as drones and the Phalanx gun system used by the US Navy to automatically engage incoming threats. In Israel, the Harpy "fire-and-forget" aerial vehicle seeks out and destroys radar installations. It earned that name because, once launched, it surveys an area and decides on its own whether to fire, allowing the operators who launched it to forget about it.
The benefits of giving artificial intelligence a bigger role in the military are plentiful – on the front line, machines, rather than human lives, are put at risk (in the case of the attacker, at least), and the potential damage inflicted by a highly efficient and powerful machine far exceeds that inflicted by a human.
But there are also clear risks, and some would argue they are far greater than any possible advantages. If a robot is programmed incorrectly, for example, it could result in thousands of human lives being ended by accident, or in the unintended destruction of huge amounts of expensive infrastructure.
We have already seen examples of this – the human rights group Reprieve recently studied the accuracy of drone strikes and found that they often kill far more people than their intended targets. As of 24 November, attempts by the US military to kill 41 men had resulted in the deaths of around 1,147 people, according to a report by The Guardian.
The issue has captured the attention of groups all over the world, such as the alliance of human rights groups behind the Campaign to Stop Killer Robots. On its website, the campaign condemns the development of such weapons, saying:
Giving machines the power to decide who lives and dies on the battlefield is an unacceptable application of technology. Human control of any combat robot is essential to ensuring both humanitarian protection and effective legal control. A comprehensive, pre-emptive prohibition on fully autonomous weapons is urgently needed.
Some countries, such as Japan and Croatia, back a ban. Japan’s Ministry of Defence says it “has no plan to develop robots with humans out of the loop, which may be capable of committing murder.”
The UK, interestingly, opposes a ban.
“At present, we do not see the need for a prohibition on the use of Laws [lethal autonomous weapons systems], as international humanitarian law already provides sufficient regulation for this area,” the Foreign Office told The Guardian.
“The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems.”
So what does the future have in store for terrifying killer robots? We will know more at the end of the week.