Should we be worried that AI’s referees are leaving the pitch?
Zoe Hitzig had one of the more unorthodox jobs Silicon Valley has to offer. She worked on the ethics and policy questions around artificial intelligence at OpenAI – the uncomfortable bits about how these systems are built, used, accessed and paid for.
A couple of weeks ago, she quit. Writing in the New York Times, Hitzig explained why: the company is experimenting with advertising inside its flagship chatbot, ChatGPT.
On paper the idea is simple enough, and hardly seems cause for concern. Running systems like ChatGPT costs monstrous sums, and advertising has funded much of the modern internet.
But Hitzig’s concern lies elsewhere. “For several years, ChatGPT users have generated an archive of human candour that has no precedent”, she wrote. “People confide things to their bots they might never have typed into a search bar – anxieties on health, relationships, mental health, work, money… the list goes on.”
Add advertising to that environment and the questions multiply. OpenAI has publicly pledged that any ads would be clearly labelled as such, and that advertisers would by no means have access to anyone’s private ‘conversations’.
Still, these systems are becoming commercial infrastructure. Far from the research projects they began as, the pressure to monetise them has reached fever pitch.
AI safety researchers sound the alarm
Hitzig’s exit isn’t an anomaly. In recent months, several safety researchers have walked away from the very labs that created the technology.
Mrinank Sharma, who led safeguards research at Anthropic, recently announced he was leaving after years focused on the risks presented by increasingly capable AI models.
In a note explaining his departure, Sharma revealed he had “repeatedly seen how hard it is to truly let our values govern our actions”.
Other high-profile departures have followed, each one another canary in the coal mine.
Over the past year, senior figures working on alignment and safety have left the industry’s biggest labs: OpenAI, Google DeepMind and Elon Musk’s xAI.
Ethics has not always been cited as the sole reason for leaving. But several researchers have hinted at disagreements over how quickly firms should deploy new models.
The pressure to build, build, build
Since the release of ChatGPT in late 2022, the world’s largest tech giants have rallied behind an industry that started running before it had learned to walk.
The likes of Microsoft, Alphabet and Amazon have raced against one another to integrate AI into search, software, cloud and consumer products.
Meanwhile, a generation of rapidly growing start-ups, from Anthropic to OpenAI, entered the ring, competing with increasingly powerful models of their own.
The funding landscape behind these models has been brutal. Training and operating large models requires huge data centres packed with expensive, energy-hungry chips, often costing billions of dollars per system.
In turn, that has ramped up the pressure to turn experimental tools into profitable products.
OpenAI is exploring enterprise products and advertising as potential revenue streams.
Others are selling access to models through various cloud platforms, or licensing their systems to firms building AI-powered software.
The people hired to anticipate and manage the long-term risks of these systems have therefore often found themselves working in organisations running at breakneck speed.
The ethics question
Safety researchers have to worry about everything: misinformation, bias, copyright, AI fraud, cybercrime.
Some fear that the increasingly capable systems could also be used to automate decisions in more sensitive sectors like finance or healthcare, without sufficient oversight.
Bots have also garnered critiques about the data used to train them. Many creatives, for example, say their work has been scraped unlawfully from the internet and subsequently used to train AI systems that now compete with them.
In other cases, concerns have circled around transparency, or rather the lack of it. Modern AI models are notoriously opaque, making it hard even for their own creators to explain the how and why behind particular answers.
Firms say they’re investing heavily in safety research and governance to combat that. Anthropic, for example, has created teams focused on ‘constitutional AI’, an approach designed to guide how its bot Claude responds to sensitive questions.
But critics argue the industry’s commercial incentives could clash with those public goals.
UK AI governance
The UK has worn its safety badge proudly when it comes to AI development. Home to the AI Safety Institute, the government claims to have long studied the risks posed by AI systems in order to help shape global standards.
At the same time, the government is trying to attract investment and talent into the sector, hoping AI can drive economic growth.
But that balancing act is only getting harder as the technology advances; investors are pouring capital into AI start-ups by the bucketload, while regulators scramble to understand systems that are evolving faster than they can keep up with.
This frenzy puts huge pressure on roles like the one Zoe Hitzig once held, created specifically to ask difficult questions about technologies advancing at dizzying speed.
Shouldn’t we be worried that some of them are already leaving the room?