Let’s be honest, we shouldn’t take those predicting the AI doomsday too seriously
If you believe AI will destroy humanity, you’re probably wrong. The real challenge is to regulate artificial intelligence safely without stifling innovation and growth, writes Matthew Lesh
“Come with me if you want to live,” the Terminator-turned-ally tells Sarah Connor in the second instalment of the iconic movie franchise. Similarly, AI developers and experts are urging policymakers to follow their regulatory diktats to ensure the survival of humanity.
This week Sam Altman from OpenAI, Demis Hassabis from Google DeepMind and dozens of others put their names to a statement saying “extinction from AI” should be treated as seriously as pandemics and nuclear war. That’s perhaps a mistaken comparison since global policymakers don’t seem to take these risks particularly seriously. Nevertheless, whether it’s killer robots, widespread unemployment, damaging misinformation, or just bias and schoolboy cheating, AI can do no right.
In truth, the challenge for policymakers is to manage the actual risks, but to do so without sacrificing the considerable benefits of AI.
Despite the narrative of woe, ChatGPT is already giving every writer a personal research assistant and every student a knowledgeable tutor, and it is even saving animals’ lives. In March, the chatbot diagnosed a blood condition in a dog that vets had missed. A study released in April found that ChatGPT provides better-quality written medical advice, and even demonstrates greater empathy, than human doctors.
In May, AI-assisted surgery allowed a man paralysed in a motorcycle accident to walk again. This is just the beginning. AI could enable mass automation and better economic decision-making, addressing our stagnant productivity and incomes. It could underpin countless scientific and medical breakthroughs, such as accelerating the discovery of new cancer drugs.
The technology will undeniably be disruptive, and it will be exploited by bad actors – just like the printing press, the Industrial Revolution and the internet before it. That doesn’t mean we should throw the baby out with the bathwater, and we certainly must not base policy on dystopian sci-fi movies. There is a lack of plausible scenarios in which a computer system, even one as powerful as AI, ends humanity.
That’s not to say nothing needs to be done. Existing laws must be enforced; developers should build safety features into their products and adopt self-governance. The UK government’s AI white paper charts a sensible path: emphasising the benefits and using existing regulators to tackle harms. It also rejects the creation of a heavy-handed, innovation-stifling super-regulator for AI, like the one proposed by the European Union.
Microsoft recently suggested an AI licensing system involving advance notification before building AI models, risk assessments, internal and external testing, and ongoing monitoring. In practice, this would mean many months, if not years, of regulatory wrangling before something like ChatGPT could become available, delaying all the benefits above. It would also entrench powerful incumbents, who would develop cosy relationships with regulators while the costs of compliance kept out smaller start-up innovators.
Others, including Elon Musk, have recently called for a six-month pause on AI development. In truth, policymakers are largely powerless to resist the march of AI. If they tried, the result would be underground, open-source development in the West and a free kick to China and Russia.
We are in a geoeconomic race to develop AI for so many uses, from security and defence through to education and healthcare. The UK is well-positioned, with some of the world’s leading researchers based at Google DeepMind. The real danger to humanity may not be AI, but those who seek to regulate it out of existence.