OpenAI’s security crackdown signals new AI battle
Silicon Valley’s $300bn crown jewel, OpenAI, is shifting focus from building artificial intelligence to protecting it.
OpenAI, the maker of ChatGPT and arguably the most scrutinised name in AI, has imposed a sweeping internal lockdown.
From biometric fingerprint scans to offline ‘tented’ R&D zones, the San Francisco-based firm is tightening security across the board amid growing fears of foreign espionage and tech theft.
The clampdown has reportedly been months in the making, but escalated after Chinese AI startup DeepSeek released a rival model in January.
OpenAI claimed the Chinese firm used “distillation” techniques to mimic its models, a charge DeepSeek has yet to address publicly.
Silicon Valley’s new arms race
The move comes amid heightened competition in the global AI sector, as governments and Big Tech firms alike prioritise securing access to cutting-edge technology.
Sam Altman’s firm has been “aggressively expanding” its security operations. That includes hiring Dane Stuckey, former cyber chief at Palantir, and bringing retired US Army general Paul Nakasone onto its board.
Internally, OpenAI’s approach is now defined by what’s being called ‘information tenting’, a highly compartmentalised system in which even senior staff are walled off from one another depending on the project.
Only those formally ‘read in’ to a project can access or discuss it, a system that, while arguably effective, has frustrated some teams and slowed collaboration.
Sensitive projects – including last year’s ‘o1’ model, codenamed ‘Strawberry’ – are now developed in isolated, offline environments.
Espionage, paranoia, or both?
Some will say OpenAI’s instincts are justified. The US government has warned repeatedly that foreign adversaries, including China, are ramping up efforts to access America’s most sensitive intellectual property.
And when models like GPT-4 or its successors could underpin everything from autonomous weapons to economic modelling, the stakes are high.
But there’s a fine line between vigilance and paranoia. The firm’s security shift comes amid rising concerns about xenophobia in Big Tech, particularly in high-trust roles.
A large proportion of Silicon Valley’s technical talent is of Asian descent, something that’s prompted awkward conversations in boardrooms as fears of espionage rise.
There’s also a practical cost. Highly siloed systems can stifle innovation – the very currency AI labs trade on. The more locked down a model is, the fewer minds can work on improving it.
For a company like OpenAI, which pitches itself as an industry leader, collaboration has long been its key selling point.
A market watching closely
From the City’s perspective, the OpenAI clampdown is just as much about protecting shareholder value as it is about guarding state secrets.
With a private-market valuation nearing $300bn (£232bn) and Microsoft’s backing in hand, the stakes have never been higher. If an IP leak were to devalue OpenAI’s moat – its model weights, architecture, or training data – it wouldn’t just be a cybersecurity failure. It would be a business disaster for the firm and for its biggest backer, Microsoft, a member of the so-called Magnificent Seven.
OpenAI, for its part, insists this isn’t a response to any specific breach. “We’re investing heavily in our security and privacy programs as we seek to lead the industry,” a spokesperson told the Financial Times.
Still, investors and rivals alike will be reading between the lines. With whispers of internal leaks and increased friction among staff, it’s clear the firm isn’t just fending off foreign adversaries; it may be grappling with internal growing pains, too.