Most people have no idea how good AI is – and they’ll be the first to go
A viral post from an AI investor warns most people have no idea how good AI is, or how quickly it will come for their jobs. Will it come for mine? Maybe.
A couple of weeks ago I looked at the essay written by Anthropic’s co-founder in which he expressed his fears that humanity won’t be able to handle the coming wave of powerful AI. Dario Amodei said that humanity needs to wake up – and his concerns and observations were deeply philosophical and indeed hypothetical.
It was a compelling essay precisely because it presented future scenarios and forced us to confront major changes to our economy, our democracies and our way of life.
Yesterday, another missive by an AI insider went viral and this time the warnings were more immediate, more urgent.
Matt Shumer set out his own experience as an AI and software engineer, noting – with some alarm – that AI is now better at his job than he is. He has become superfluous. He said the AI labs – OpenAI, Anthropic – made a deliberate choice: they “focused on making AI great at writing code first… because building AI requires a lot of code.”
He explained that “If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That’s why they did it first.”
Everyone’s about to feel like a displaced coder
And now, Shumer says, “the experience that tech workers have had over the past year, of watching AI go from ‘helpful tool’ to ‘it does my job better than I do’, is the experience everyone else is about to have.”
Shumer points to the 5th of February as the day the scales fell from his eyes. Why? Because on that day OpenAI released GPT-5.3 Codex, the latest and most powerful version. OpenAI said:
“GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
In other words, the AI built itself. He calls this the “intelligence explosion” – something Anthropic’s co-founder has also been candid about; each new iteration of AI will help to construct the next, better version until the improvements are basically autonomous. That’s the theory, anyway.
So what does this have to do with jobs? And which jobs?
Well, buckle up, because Shumer says based on what he’s seen and is seeing, “If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it.”
Legal services, data management, writing…
He cites a non-exhaustive list of roles and sectors that he says are about to be ripped apart: legal services, financial analysis, software engineering, data management, customer service, medical analysis, marketing, writing and journalism.
But he isn’t saying that anyone working in these roles is about to be replaced by an AI agent; that’s a blunt proposition that misses the nuance, the variables, the dynamics of employment and the wider labour market. What he’s saying is that if you’re in these roles and you’re not using AI to its full potential then you are asking for trouble.
He offers some practical advice which is actually incredibly useful: start using AI right now. And really use it – don’t just use the free version of ChatGPT as an alternative to Google. That’s just dipping your toes in; he says you need to dive in.
Most people have no idea what’s happening
Pay for it – $20 a month in the case of ChatGPT – and really experiment with it. Set it complex tasks. Learn how to use it. Read about it. Engage with what’s happening. Use different platforms and models. Feed it legal contracts, datasets, company data and business plans; interrogate it. See how good it is, and then ask yourself what you might get for the $200-a-month service.
If you’re watching this and you’re thinking “Jesus, Christian, keep up – I’ve been doing that for months, I’ve been vibe coding, I’ve been deep in Claude”, then fair enough. But I belong to the group of people who right now only scratch the surface of AI’s capabilities – and that is most people.
Most people do not understand how good AI is right now. Shumer sets it out:
“In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54. By 2023, it could pass professional exams. By 2024, it could write working software and explain graduate-level science. By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI. On February 5th, 2026, new models arrived that made everything before them feel like a different era. If you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”
The most intense disruption we’ve ever seen
That’s why the share prices of major listed companies in sectors such as insurance, wealth management and data services have been so jumpy in recent weeks: new tools and products are emerging on a daily basis that pose fundamental questions for incumbents. We are, without a doubt, entering the most intense period of creative disruption that modern capitalism has ever experienced.
I think that’s exhilarating and I also think it’s unsettling. But pretending it isn’t happening is not an option.
Shumer’s advice is: do not hide or conceal your use of AI. He says, “The person who walks into a meeting and says ‘I used AI to do this analysis in an hour instead of three days’ is going to be the most valuable person in the room. Whereas the person who still spends three days on it when it could be done in an hour is vulnerable. The people who will struggle most are the ones who refuse to engage.”
So should I retrain as a plumber?
So, what about me – a journalist, an editor? Should I retrain as a plumber? Shumer acknowledges that “a lot of people find comfort in the idea that certain things are safe” – we think, “sure, AI is helpful, but it won’t replace my creativity or my judgment.” To which a lot of people respond: yes, it will.
I’m not so sure. Shumer would look at me and say “Christian’s woefully naive” – but when I think about what my job entails – what journalism today entails – I still see the value in my own physical presence: at events, on the radio, in the newsroom. And while my sector will be challenged and empowered by AI in equal measure, just as yours will be, I think that humanity, trust, credibility and experience are all going to surge in value precisely because of the surge in AI’s capabilities.
Am I right or am I wrong? Think on it.