What can science fiction tell us about the future?
A recent book on the future of warfare warned that Britain’s pristine new aircraft carriers will be vulnerable to swarms of small, semi-autonomous drones that could not be stopped by conventional protective weaponry. Another looked at whether mass online gaming might be used to recruit networks to attack military or commercial targets. A third raised the prospect of VIPs being endlessly stalked by insect-sized camera drones.
In different ways, each could be catastrophic for our security. But if you are wondering how you missed them, you may have been looking in the wrong section of your bookshop. These warnings – which, in truth, are not about Britain specifically – appeared not in books on politics or international relations, but in recent novels by the science fiction writers Daniel Suarez and William Gibson.
As the recent Chilcot Report into the lessons of the conflict in Iraq suggested, governments spend a lot of time thinking about the battles that have just been fought and how to rectify past mistakes. What is true in government is true in business too: we tend to look backward, not forward. Thinking about entirely new threats that might arise in the future is difficult, and therefore rare.
This is why, in many ways, works of science fiction are so useful for those engaged in horizon-scanning exercises to think about future challenges in the fields of foreign and security policy, economics and business. The best and most sophisticated writers of science fiction spend their lives talking to scientists and technologists, looking for the idea that might, just might, plausibly change the world as we know it. Equally importantly, these writers' ability to turn often highly complex material into stories that can be understood and enjoyed by a mass audience helps us to confront the plausible impact on people's lives and livelihoods.
When Gibson came up with the concept of "cyberspace" in his masterpiece Neuromancer, he forced us to consider a place where vast numbers of computers across the world were linked up in a network, offering opportunities for criminals and chancers who wished to exploit it. When Suarez described the mass use of semi-autonomous drones, he made us think about a world in which drones do more than just take photographs or deliver packages to our houses.
But while serious science fiction has its uses, nothing irritates those of us who work in the field of technology more than bad science fiction and its prophecies of apocalypse and appalling doom. Watch any 1960s film that depicts what life will be like "in the year 2000" and you will see the dangers of paying too much heed to such visions of the future.
In my organisation's area of expertise – data science and Artificial Intelligence (AI) – people jump all too quickly to the most outlandish sci-fi apocalyptic scenarios. This is unhelpful, and a bad guide to decision-making or policy development. Unlike the serious science fiction described above, these popular visions tend to "over-anthropomorphise" the threat – always imagining robots in human form. This completely clouds our ability to think realistically about the nature of the opportunities and challenges that arise from AI.
AI is a powerful tool, and will become ever more so. And, like any tool, it can be used to positive or negative ends. Take autonomous vehicles as an example: developments could lead to both autonomous cars and autonomous weaponised drones. The incredible upside – a dramatic reduction in transport accidents, the third largest cause of death for people under the age of 35 – must be considered alongside the harm that could be caused by these weapons.
But this isn't the first time that humanity has faced this conundrum. Over the course of our history, we have made a great many discoveries that have led to both harm and good (think of even a trivially simple technology like the knife), and we have a reasonably well-developed strategy for managing them. Through a series of technological, cultural and political interventions, we try to benefit from the upsides while mitigating the downsides.
The modern complication is that the rate of development of technology is getting ever faster. Are our politicians and institutions sufficiently responsive to be able to build policy that lets us capture only the best parts of the inevitable revolutions of the future? The truth is, I don’t know, but the more familiar our politicians and business leaders are with the potential scenarios, inspired by sci-fi and understood through rational analysis, the more we’ll be able to have reasoned debate.