Hey Siri: Can ChatGPT save Apple’s AI woes?

After years of insisting it could do AI its own way, Apple is now reconsidering that approach – and the implications could reshape its core product experience.
The iPhone maker is in talks with Anthropic and OpenAI about powering a revamped Siri, Bloomberg reported, potentially swapping out its own foundation models for one of their large language models (LLMs).
If it proceeds, this would mark a sharp pivot away from Apple’s tightly held, in-house strategy – and a clear admission that, in the generative AI race, it’s not leading.
An AI shift that’s hard to overstate
Apple’s internal models, overseen by its foundation models team, have been powering early iterations of Apple Intelligence, the company’s answer to ChatGPT-style assistants.
Siri itself, however, remains a laggard in the eyes of both consumers and Apple’s own executives.
Despite plans to release a next-gen Siri powered by its own models in 2026, the company is now openly testing alternatives.
The deliberations began under the watch of Mike Rockwell, who took over Siri engineering in March after leading the Vision Pro project.
Alongside Craig Federighi, Apple’s head of software engineering, Rockwell reportedly launched a comprehensive review of how Siri performed with various models – including those from OpenAI, Google, and Anthropic.
That triggered a series of exploratory talks, led by Apple’s head of corporate development, Adrian Perica.
Apple asked both OpenAI and Anthropic to customise versions of their models to run on Apple’s Private Cloud Compute infrastructure – secure servers running on its own chips – to preserve privacy and performance.
Why Apple’s AI strategy is faltering
The backdrop to all this is increasing instability within Apple’s AI ranks.
John Giannandrea, brought in from Google in 2018 to lead Apple’s AI efforts, has steadily seen his portfolio shrink.
The Siri project has been taken out of his hands, and several major engineering groups – including those responsible for Core ML and App Intents – have been reassigned to Federighi’s domain.
Meanwhile, internal morale is under pressure.
A key departure came this month when Tom Gunter, one of Apple’s most senior LLM researchers, exited after eight years.
Others have reportedly fielded offers from Meta and OpenAI for multimillion-dollar packages – some in the $10m to $40m annual range.
Apple, on the other hand, often pays just a fraction of that.
Bloomberg reported that the team behind MLX, a key framework for running AI models on Apple silicon, also nearly left en masse before receiving last-minute retention offers.
The fear among some inside Apple is that shifting to external models for Siri now could signal a wider move away from internally developed models altogether – making their work seem expendable.
What’s driving the change?
By most accounts, Apple’s internal models haven’t yet hit the bar.
A major Siri update was quietly delayed earlier this year, and at WWDC 2025, Apple executives admitted certain features simply “didn’t meet our quality standard.”
The company is still planning to ship an enhanced Siri next spring, but its own models won’t be ready to support the kind of contextual intelligence and app control Apple has been promising.
Third-party partnerships could accelerate that timeline. And they wouldn’t be without precedent: Samsung’s Galaxy AI, for instance, uses Google’s Gemini for many of its AI features, despite being branded as a Samsung experience.
Amazon also relies on Claude to help power the new Alexa+.
However, that doesn’t mean Apple is abandoning in-house development altogether.
The foundation models team is still working on tools like on-device summarisation, Genmoji creation, and the developer-facing models that will roll out later this year.
But for Siri – the user-facing centrepiece of its AI strategy – Apple appears increasingly open to outsourcing.
A costly, but strategic partnership
A deal with Anthropic or OpenAI would be a costly one. Bloomberg reported that Anthropic is seeking a multibillion-dollar annual licensing fee, with a steep year-on-year increase.
That kind of pricing would make even Apple wince, especially given its focus on margins.
But the company knows it risks falling further behind if it doesn’t act fast.
A key consideration is that Apple wants any external model to run on its own cloud infrastructure to maintain control over data privacy.
That means Anthropic or OpenAI would need to customise their models – a technically feasible, but non-trivial task.
There is also a branding question: Apple’s approach to privacy and user trust has long been a differentiator.
Even if it can sandbox external models within its cloud, users may question the idea that Siri is “powered by OpenAI”.
Apple will likely seek to downplay or hide any visible third-party attribution – just as Samsung minimises Google’s presence in Galaxy AI.
Long-term outlook
Ultimately, Apple seems to be pursuing a hybrid strategy. In the near term, a licensing deal could bridge the gap while its own models mature.
In the long term, many still believe that Apple wants to retain full ownership of the intelligence inside its product offerings.
That includes new categories reportedly in development, like AR glasses, that will rely heavily on advanced AI.
For those, Apple is unlikely to settle for a dependency on external providers.
Still, the Siri deliberations mark a big shift. Apple has rarely shown public vulnerability in its strategic capabilities, but by evaluating external AI partners, it’s acknowledging that speed and quality are more important than doing it alone.