Meta’s Llama: the strategic play behind the promise to democratise AI
Meta has long promised to democratise AI, with Mark Zuckerberg frequently doubling down on his vision to make it “open and accessible” so that “everyone in the world benefits”.
Yet instead, it’s been executing one of the most strategic power plays in Silicon Valley – wrapping its race for AI supremacy in the language of openness while laying the groundwork for monetisation and market dominance.
Llama, Meta’s much-touted open-source large language model (LLM), was never really about giving AI to the people.
Many felt it was instead about buying time, attracting top-tier talent, and leapfrogging rivals like OpenAI and Google.
With $65bn earmarked for AI infrastructure this year alone, Zuckerberg’s open-source experiment is now shifting into its next phase, as the focus turns to generating returns.
From ‘free for all’ to ‘fee for all’
Zuckerberg has always played the long game, having waited nearly a decade before monetising WhatsApp.
Now, after a multi-year AI spending spree, Wall Street is getting restless, and signs are pointing to the end of the open-source honeymoon.
Llama, Meta’s flagship family of AI models, was initially released under a restrictive licence that allowed tinkering – but barred its use in building competing services.
Meta branded it as “open-source”, but the licence fell short of the Open Source Initiative’s definition of the term. It was more marketing than movement.
“Open source” gave the Big Tech behemoth access to a free R&D army: developers across the globe refining Llama’s performance, contributing to fine-tuning, and expanding its use cases – all without Meta footing the bill.
Now, with Llama 4 outperforming peers in benchmarks and its new model in the pipeline, Meta has what it needs to commercialise.
Zuckerberg hinted as much in an interview last year, saying on the Dwarkesh Patel podcast that if it became irresponsible to give AI away in the future, “then we won’t”.
Talent magnetism
Llama’s open-source allure was a recruitment magnet, and it’s worked. Meta has poached engineers from OpenAI, DeepMind, Anthropic, and most recently, Apple.
Its latest coup was Ruoming Pang, Apple’s head of AI models, who was reportedly signed in a deal worth tens of millions per year.
Pang led the team behind Apple Intelligence and the upcoming Siri overhaul. His exit was seen as a blow to Apple’s on-device AI ambitions – and a strategic win for Meta.
Meta has also onboarded OpenAI’s Yuanzhi Li and Anthropic’s Anton Bakhtin in the last couple of weeks.
It’s part of a high-stakes play to staff its ‘superintelligence lab’ – a stealth unit tasked with building AI that can outperform humans at nearly every task.
Zuckerberg has been personally pitching roles to candidates, reportedly offering up to $100m signing bonuses.
As Miriam Bruce of Mayer Brown noted: “With Meta reportedly offering $100m signing bonuses in a bid to attract OpenAI staff to jump ship, tech companies in both the UK and the US are beginning to wake up to a new AI challenge”.
Meta’s hiring spree is also part of a broader trend, where companies absorb key teams and tech from startups, without buying the company outright.
Amazon’s acquisition of Adept’s leadership team and datasets is a textbook case. Microsoft pulled a similar move with Inflection AI too, absorbing co-founder Mustafa Suleyman and key engineers.
Regulators are taking notice, with US senator Ron Wyden calling for closer scrutiny of these tactics, citing concerns over market concentration.
“A few companies control a major portion of the market… trying to buy out everybody else’s talent”, he said.
Infrastructure and Scale AI
Underpinning Meta’s strategy is a huge investment in infrastructure.
Its $14.3bn deal with Scale AI has supercharged its ability to label, refine and train LLMs at scale.
With over 1.5 million data annotators and proprietary tooling, Scale offers Meta a critical edge over competitors still reliant on third-party datasets.
And Meta isn’t stopping there. Its in-house Meta Training and Inference Accelerator (MTIA) chips, built specifically for AI workloads, are intended to drive down compute costs.
The upcoming ‘Llama 4 Behemoth’ already rivals GPT-4 in performance, but costs Meta significantly less to train thanks to co-distillation – a technique in which knowledge is transferred between models during training, rather than each model being trained from scratch.
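Meta has not published its exact co-distillation recipe, but the core mechanism of any distillation setup is a loss that pulls one model’s output distribution towards another’s. The sketch below is a generic, illustrative implementation of that soft-target loss in plain NumPy – the function names and the temperature value are illustrative assumptions, not Meta’s actual training code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    A higher temperature exposes more of the teacher's 'dark knowledge'
    (relative probabilities of wrong answers); the T^2 factor is the
    conventional scaling from Hinton et al.'s distillation formulation.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)),
        axis=-1,
    )
    return (temperature ** 2) * kl.mean()

# Toy example: a student matching the teacher incurs ~zero loss,
# while a student that disagrees is penalised.
t = np.array([[2.0, 1.0, 0.1]])       # teacher logits for one example
s_same = np.array([[2.0, 1.0, 0.1]])  # student agrees
s_diff = np.array([[0.1, 1.0, 2.0]])  # student disagrees
```

In a full training loop this term is typically mixed with the ordinary next-token cross-entropy loss, so the smaller model learns from both the data and the other model's predictions.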
Reputation and regulation
By releasing Llama as “open source”, Meta didn’t just gain developers and engineers – it also gained public approval.
It positioned itself almost as an anti-OpenAI alternative: transparent, democratic, and developer-first.
But critics have long argued the strategy was “open source washing”. Llama’s license was restrictive, its benchmarks allegedly manipulated using unpublished variants, and its distribution tightly controlled.
Still, the optics worked – until now. With Meta hinting that future models won’t be open source due to safety concerns around ‘superintelligence’, the facade is slipping.
It seems that Meta’s AI strategy was never just about competing; it was about shaping the rules of the game.
Now, with Wall Street watching closely and rivals like OpenAI locking down their IP, Zuckerberg’s move toward commercialisation is inevitable.
Meta says it wants to make AI “for everyone”, but in practice, it has used openness as a Trojan horse – a clever entry strategy used not to democratise AI, but to dominate it.
Meta has been approached for comment.