Ex-DeepMind executive adds to wave of stark AI warnings
A former senior executive at Google’s AI branch has warned that the global economy is heading towards a concerning divide between those who control AI and those displaced by it, arguing that governments are unprepared for the scale of change ahead.
Dex Hunter-Torricke, who previously led communications at Google DeepMind and has worked for both Mark Zuckerberg and Elon Musk, said “the path we are currently on leads to disaster” if political leaders don’t adapt to the speed of AI development.
In an essay titled Another Future is Possible, Hunter-Torricke wrote: “It’s crystal clear to me now: there is no plan.”
He left DeepMind in October and has since joined the Treasury as a non-executive board member.
He is also launching a London-based non-profit, the Center for Tomorrow, which he says will not take funding from big tech firms.
The bulk of his warning centres on the effect of AI on jobs. The former communications executive cited International Monetary Fund estimates showing that around 60 per cent of roles in advanced economies are exposed to AI disruption, arguing that the true impact may be greater given how quickly systems are improving.
“The productivity gains will be real”, he wrote. “But there is no automatic mechanism that translates them into broadly shared prosperity.”
He also said the likely outcome, without intervention, is a surge in corporate profits as labour costs fall, combined with a shrinking share of income for workers.
Hunter-Torricke described a possible future in which a small, highly skilled elite benefits from AI-enhanced capabilities and advances, while much of the population faces weaker economic prospects.
“By mid-century, on this trajectory, we arrive at something that goes beyond inequality,” he wrote, adding that he did not make the prediction lightly.
Growing warnings from inside the industry
His warning adds to a flurry of concerns from within the AI sector itself. An AI safety researcher at AI giant Anthropic resigned last week, warning in a public letter that “the world is in peril” from the technology.
An OpenAI employee also stepped down, raising concerns about the direction of the firm’s deployments.
Dario Amodei, chief executive of Anthropic, also recently published a 19,000-word essay arguing that humanity is entering a period that will “test who we are as a species”.
He said highly capable AI systems, potentially exceeding human expertise, could emerge within a few years.
While optimistic that such risks can be managed, he warned that the economic benefits of AI could make it difficult to slow progress, despite rising concerns.
The warning echoes concerns raised by academics such as Michael Wooldridge, professor of AI at Oxford University, who recently said that inadequate safety testing could risk a ‘Hindenburg-style’ moment for the industry if a major failure undermines public trust.
At the same time, political leaders are urging countries not to fall behind. George Osborne, the former chancellor who now leads OpenAI’s ‘for countries’ programme, told leaders at a summit in Delhi that nations face ‘fomo’ over AI.
He said countries that fail to adopt the technology risk becoming “a weaker nation, a poorer nation”.
Hunter-Torricke said the next ten years are critical and proposed stronger support for people whose jobs are displaced by automation, taxation of AI-driven corporate gains, and international cooperation to share the technology’s economic benefits more broadly.
He has also floated ideas such as a universal basic income and large-scale cross-border investment akin to a modern ‘Marshall Plan’.
After 15 years in Silicon Valley, he said he felt compelled to speak publicly. “What I had seen in those rooms, over those years, now made it impossible to stay,” he told The Times, adding that in the past he had “only told half the story”.