How seriously should we take the Anthropic co-founder’s ‘civilisational threat’ essay?
“Humanity is about to be handed almost unimaginable power, and it’s deeply unclear whether we can handle it” – that’s the warning from one of the most powerful men in artificial intelligence – so, is he right?
If you don’t have plans for the weekend, you could do worse than to sit down and read the 20,000-word essay penned by the co-founder of the £350bn AI giant Anthropic – but take my advice and pour yourself a stiff drink to go with it.
I say that as a non-expert, and one reason the essay is so good is that someone like me can just about keep up with it. I don’t live at the cutting edge of AI; I’ve just started using Gemini to help out with my inbox, and I occasionally have an argument with ChatGPT about the economics of news media or whether the A303 is better than the M4.
The essay in question has sparked debate among experts and fear among the rest of us.
Dario Amodei co-founded Anthropic in 2021 and went on to launch Claude, one of the leading AI platforms. His warnings focus on the emergence of “powerful AI” – described as a model “smarter than a Nobel Prize winner across most relevant fields” able to perform multiple complex tasks with “a skill exceeding that of the most capable humans in the world.”
He predicts that “it cannot possibly be more than a few years before AI is better than humans at essentially everything,” and warns that, as the technology advances, “humanity is about to be handed almost unimaginable power” while “it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”
AI will ‘test who we are as a species’
In his essay, Amodei warns that the development of AI will “test who we are as a species” and describes his article as “an attempt to jolt people awake.”
He says that while the potential for such AI to act autonomously, or be steered by malign state or corporate influence, is neither inevitable nor necessarily probable, society needs “to note that the combination of intelligence, agency, coherence, and poor controllability is both plausible and a recipe for existential danger.”
Engineers have already seen warning signs. As Amodei admits, in a lab experiment where it was told it was going to be shut down, Claude sometimes blackmailed fictional employees who controlled its shutdown button. That does sound rather frightening, to me at least. We also learn that it’s getting increasingly hard to subject these models to tests, because they’ve started to recognise when they’re being tested and so they’re on their best behaviour. I’m sure serious experts will debate or dismiss these points but, again, to me it sounds a little unsettling.
Bioweapons, hacking and suppression
Some of the specific risks identified in this essay sound like the plot of a Netflix drama: AI helping villains develop bioweapons, for example. On this specific point, Amodei says he is “concerned that LLMs are approaching (or may already have reached) the knowledge needed to create and release [bioweapons] and that their potential for destruction is very high.”
He says: “The general principle is that without countermeasures, AI is likely to continuously lower the barrier to destructive activity on a larger and larger scale, and humanity needs a serious response to this threat.”
Other grisly risks include AI-powered mass suppression of societies, horrifying levels of state surveillance, autonomous weapons, catastrophic cyber war and unprecedented levels of economic concentration.
He spends a lot of time looking at who the bad actors behind these nightmare scenarios might be, and here he’s quite specific, ranking them in order of severity. Top of the list is the Chinese Communist Party: he says China has “hands down the clearest path to the AI-enabled totalitarian nightmare.”
Next come democracies competitive in AI – meaning the US and perhaps the UK. He says “we should arm democracies with AI, but we should do so carefully and within limits.”
Blinded by the upsides?
Then he worries about non-democratic countries with large datacentres that could be used to develop frontier AI. And at the bottom of his list come the AI companies themselves. He says AI companies “control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users.”
He adds that “the governance of AI companies deserves a lot of scrutiny.” He’s not wrong, and some people might think AI companies should be at the top of his list rather than the bottom.
His essay looks at all kinds of risks and threats posed by AI either going rogue or being used by bad actors. But he also notes that the consequences of huge, previously unimaginable economic disruption must be talked about, and that because AI represents such a glittering prize – for investors, for businesses, for governments – we might be blinded to these risks. The potential gains are so immense, he warns, that “it is very difficult for human civilisation to impose any restraints on it at all.”
For context, Amodei’s previous essay was all about the upsides, and you might want to read it afterwards just to cheer yourself up, because the AI revolution will undoubtedly bring incredible benefits. But this long and detailed warning about the risks – ones we know of and ones we cannot yet foresee – has got people talking, and not everyone’s buying it.
Regulatory capture?
I spoke to one leading AI figure yesterday and asked him, as someone deeply immersed and experienced in this field, what he made of the essay. “I’m so bored with it,” he said – just another tedious “trust me, we know how to use AI and you must listen to us” puff piece. My friend said it’s all about regulatory capture, pure and simple: scare us, then offer us the comfort blanket of “here’s how we should be regulated.”
My friend went further: if Amodei really was worried about the social harms of his tech, he wouldn’t be so busy sucking the life and intellectual property out of authors, artists and writers while developing products that flatten our knowledge, take away our sense of reality and undermine our democracy. In other words, all this talk of “AI could build a nuclear bomb” is just a distraction from the harm already being done.
I trust my friend, and I’ll talk to him more about all this. But whatever motivated Amodei to write his essay, I’m glad he did, because I think we should all talk about this more – even if, like me, you’ve barely scratched the surface of AI’s capabilities. It is happening, it is coming, and the closer it gets the louder this conversation needs to be.