Privacy doesn’t matter in Silicon Valley – that’s why AI is failing
Data privacy is an essential component of solving “context” — the key to unlocking full-scale and safe AI adoption for both businesses and consumers, says Lewis Liu
I was hanging out with a bunch of 20-something “in the flow” Bay Area founders last week when the conversation turned to Openclaw and Moltbook.
If you haven’t been following: Openclaw is an AI agent with full autonomy over your computer, systems and files. Moltbook is a social network built exclusively for AI agents. These AIs supposedly created their own religion, porn site, philosophical manifestos and as of this week, started threatening humans. Much of this turned out to be performative theatre, but what remained captured the imagination: AI that can act rather than just chat, and AI that can coordinate and share intelligence at scale.
Hanging out with these founders, I raised the point that whilst Openclaw and Moltbook are inspiring and whilst I am building similar systems for businesses, security and privacy remain huge unsolved challenges. One of them cut me off: “Lewis, you’re so old school! Privacy doesn’t matter in Silicon Valley!”
I thought about this for a few days. It stuck with me. Then it hit me: this is the gap everyone’s talking about, the one behind MIT’s headline-grabbing finding that 95 per cent of AI projects fail.
See, I spend my time immersed in two camps. One is the “move even faster and break even more things” crowd, the accelerate-AI-by-any-means-necessary types. My co-founder and CTO works with our core engineering team out of his garage in San Mateo, California, no kidding. The other camp is central bank governors, law firm managing partners, bank CEOs and hedge fund portfolio managers across London, New York, Basel and Singapore.
The massive software sell-off over the last few months, with the S&P Software & Services Index down roughly 30 per cent since late October, is driven by one assumption: AI systems will fundamentally take over the software ecosystem. The latest automated coding agents like Claude Code, OpenAI Codex, or their immediate successors will commoditize traditional enterprise software.
Today’s autonomous AI agents (Openclaw, Claude, and others) work by ingesting massive amounts of unfettered context to execute end-to-end workflows. Context here means social context (who sent the email, who approved this), temporal context (the timeline of tasks and workstreams), and detailed informational context (write-ups, presentations, emails). About half the time, if you give these systems unfettered access, something magical happens. You feel like you’re catching a glimpse of the Singularity.
The other half of the time, you get utter garbage.
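It is worth spelling out what “unfettered context” actually looks like. Here is a minimal sketch in Python, with entirely hypothetical names (this is not Openclaw’s or any vendor’s actual API), of the kind of bundle such an agent ingests before it acts:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical illustration: the three kinds of context an autonomous agent ingests.
@dataclass
class SocialContext:
    sender: str                # who sent the email
    approver: str | None       # who signed off on the task, if anyone

@dataclass
class TemporalContext:
    deadline: datetime
    prior_steps: list[str] = field(default_factory=list)   # timeline of the workstream

@dataclass
class InformationalContext:
    documents: list[str] = field(default_factory=list)     # write-ups, presentations, emails

@dataclass
class ContextBundle:
    social: SocialContext
    temporal: TemporalContext
    informational: InformationalContext

# "Unfettered access" means the agent is simply handed the whole bundle,
# with nothing standing between it and the model.
```

The point of the sketch is only that all three layers travel together, and that today there is usually no filter in between.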
The mundane reality is that LLMs aren’t stable enough, even with existing frameworks, to predictably complete complex end-to-end workflows. But let’s assume we solve that, which I think we will in the coming quarters or years. With this, an even more mundane but deeper problem emerges: privacy.
Solving for context
In my conversations with the second camp (the “old school sectors”), the response is unanimous: “Hell no” to giving AI agents unfettered access to personal or corporate context. “What about my clients who explicitly forbade me from running their docs through an LLM?” “What about my conversations with HR about an underperforming associate?” “What about compliance walls between departments or competing clients?” The list goes on.
To solve full-scale AI adoption, whether for enterprise or consumers, you need to solve context, which I’ve written about before. But the realization is that in order to solve context, you need to solve privacy. Privacy and context are two sides of the same coin.
Sadly, this is not the focus of the “move even faster, break even more things” ethos right now. Not a week goes by without someone from the second camp (institutional leaders, real-economy executives) telling me a horror story. Microsoft Copilot inadvertently revealing a CEO’s bonus plans at an accounting firm. An enterprise search agent product from a Silicon Valley unicorn exposing embarrassing company gossip and confidential pricing intelligence at a financial services firm. Both real stories.
Individuals and organizations need to control what goes into an AI system, just like they control which human being gets what information and how. This is hard because controlling how your context gets shared depends on hard rules (e.g., ethical walls for law firms, compliance checks for banks), soft rules (e.g., sharing project concepts but anonymized for consulting firms), and pure social context (e.g., what I share with my boss versus my direct report versus my political rival).
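To make that concrete, here is a minimal sketch in Python, again with entirely hypothetical names (it is an illustration of the shape of the problem, not anyone’s actual product), of a gate that sits between corporate context and an AI agent: hard rules block outright, soft rules transform what passes through.

```python
# Hypothetical sketch of a privacy gate between corporate context and an AI agent.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    client: str
    department: str

def violates_hard_rules(doc: Document, requesting_dept: str, walled_clients: set[str]) -> bool:
    """Hard rules: e.g. ethical walls -- some clients' documents may never cross departments."""
    return doc.client in walled_clients and doc.department != requesting_dept

def apply_soft_rules(doc: Document) -> Document:
    """Soft rules: e.g. share the project concept but anonymize the client's name."""
    return Document(text=doc.text.replace(doc.client, "[CLIENT]"),
                    client="[CLIENT]", department=doc.department)

def context_for_agent(docs: list[Document], requesting_dept: str,
                      walled_clients: set[str]) -> list[Document]:
    """Only what passes the hard rules, transformed by the soft rules, reaches the agent."""
    return [apply_soft_rules(d) for d in docs
            if not violates_hard_rules(d, requesting_dept, walled_clients)]
```

Hard and soft rules can at least be codified; the purely social layer is the part that still resists simple code.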
It’s a hard problem, but not insurmountable, especially if we leverage AI itself. Solving privacy for context will unlock far larger-scale AI adoption. Foundation Capital calls context the next trillion-dollar opportunity. Perhaps, but only if there are some “old schoolers” to build what is necessary around privacy.
As for me, I already have a digital AI twin of myself prompting my AI coding agents to build my new product. I’m excited for the future, but only if it’s accessible and safe for all. And for that to happen, we need to move fast to fix a few things first.