Malevolent superintelligence? That’s science fiction, not reality
In Stanley Kubrick’s 2001: A Space Odyssey, HAL 9000 was an all-knowing computer that decided to kill the crew of its spacecraft. Naturally enough, the crew didn’t care for this.
The film was an important crossover moment between science fiction and the public’s sense of science fact: many who saw HAL came away believing that such computers really were in charge of things.
What was once the stuff of fiction is now closer to reality, and the question of whether artificial intelligence (AI) will cause us harm has become more important than ever.
After all, automated cars will soon take to our roads, and drivers may begin to worry that the family saloon might decide to kill them in order to save a coachload of schoolgirls.
Fears around AI have been stoked by prominent public figures. Stephen Hawking said that the emergence of AI could be the “worst event in the history of our civilization”, warning that it may try to replace humans altogether.
Meanwhile Nick Bostrom, a philosopher at Oxford and author of the best-selling book Superintelligence, thinks that AI could create an “existential catastrophe” for us all.
Released in 2014, Superintelligence alarmed a great many readers by describing AI research as on the cusp of producing machines that would suddenly take over and control our lives – or, at worst, destroy us all.
This sounds compelling, because it panders to our love of catastrophe as our lives become ever more comfortable and less arduous. It also plays into a cultural moment for AI, with several TV and film dramas featuring the technology.
But there’s no reason to be afraid. AI systems are hyped up to appear far more intelligent than they really are.
It is true that the technology is blooming everywhere, from health to space to agriculture, but there is as yet no hint of the kind of general intelligence that we humans have – the intelligence that created these marvels in the first place.
Already, AI can have capabilities more like those of animals than of humans, and in narrow domains far superior to our own: AI-powered vision, like the eyes of an eagle, can spot a stamp a mile away. But just as an eagle poses no threat to us, neither do these developments pose a threat to humans.
Bostrom also believes that intelligent machines will have goals of their own and will be hostile to us, but he is reduced to describing absurd scenarios, such as a machine that wants to make as many paperclips as possible and destroys all the world’s resources in the attempt. This is very early science fiction indeed – straight out of The Sorcerer’s Apprentice, Goethe’s poem of 1797.
One of the odd things about Bostrom’s menacing AI prediction is that he assumes there will be only one superintelligence – because it will have killed off all the others, a bit like the dragons fighting it out with the zombified dragon in Game of Thrones.
But this assumes that superintelligent computers would never develop the trait that evolution gave humans in spades: the ability to cooperate and work together.
Also, why assume that any creation would want to destroy its creator?
All the myths and religions we know assume that created beings are more or less well-disposed towards their creators – far more likely to worship them and build temples than to seek to kill them. In Bostrom’s science fiction, the superintelligences want to kill their parents.
Of course, AI could be dangerous in the future – extraordinary weapons are now being developed by all the major powers, from crewless submarines to swarms of heavily armed drones and miniature tanks.
The AI we need to be most careful of is the kind that extends already-existing technologies – weapons, cars, fully automated factories – and makes them much cheaper, yet also more deadly.
But AI rebelling against us? That’s just science fiction.