AI governance for boards: A short practical guide
The boards hoping AI won’t change their world are already failing their fiduciary duty. So here’s what they need to do, according to Lewis Liu
When I was building Eigen, my previous AI company, I used to roll my eyes (privately) whenever someone asked me about AI bias. Eigen digitised complex financial and legal documents using AI trained on those documents themselves. Explicit input, explicit output, no room for political, gender or racial bias to creep in. Simple.
Those days are gone.
AI bias is now just one of the minefields a board must navigate as it scales up adoption. And the question I get asked most often by investors, board directors and politicians is the same one: how do we actually hold companies accountable for how they use AI?
It’s a loaded question. Regulation varies enormously by use case and jurisdiction. But here is my framework: survival, data and decisions.
Survival
The first fiduciary duty of any board or executive team is brutally simple: can my business survive the AI revolution?
In the US, AI is rapidly commoditising white-collar work, from law and accounting to marketing and software. Many software companies have already lost close to half their market capitalisation on the fear alone. The rule of thumb I give boards is this: if your core differentiator is processing strings of text (words, code, contracts), AI will have a massive impact on your business model. This is because LLMs are, at their foundation, word-token machines. Law firms, software companies, marketing agencies, call centres: all squarely in the blast zone.
On the other end of the spectrum, businesses differentiated purely by physical, person-to-person interaction face fewer (though not zero) disruptive routes from AI.
One caveat: this is a Western lens. Factor in China’s growing dominance in physical AI (robotics, logistics, manufacturing) and even that assumption starts to look shaky. Boards need to be stress-testing both vectors.
Data
The second part of the framework is data. AI needs data and context to perform well, and how that data is leveraged, shared and protected is something every board needs to understand, not just the CTO.
Start with your assets. If you are a manufacturer, your most valuable data sits in your process databases. If you are a financial institution, it’s your transactions and deal decisions. If you are a law firm or consulting firm, it’s almost certainly the inboxes of your senior partners. Understanding what you have and how to feed it into your AI transformation is the first step.
But handing raw private data to AI agents roaming your organisation is dangerous. As agents get more complex, how they store and use information becomes increasingly opaque. Most current AI agents lack adequate privacy protocols and routinely fall foul of GDPR or the California Consumer Privacy Act. The governance question of what gets shared with AI systems, under what conditions and with what controls, is not an IT problem. It is a board-level problem.
The good news is that a new category of tooling is emerging precisely to solve this: data governance layers that sit between your private data and your AI agents, controlling what gets shared, with whom, and under what permissions. This is still nascent, but boards should be demanding to know whether their organisations have thought about it.
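In practice, such a layer can be as simple as a policy that redacts fields before anything reaches an agent. Here is a minimal sketch in Python; the roles, field names and `Policy` class are all illustrative assumptions, not a reference to any particular product:

```python
# Minimal sketch of a data-governance layer sitting between private
# records and AI agents. Real products in this category offer far
# richer policy engines; this only shows the shape of the control.
from dataclasses import dataclass, field

@dataclass
class Policy:
    # For each agent role, the set of fields it is permitted to see.
    allowed_fields: dict[str, set[str]] = field(default_factory=dict)

    def redact(self, role: str, record: dict) -> dict:
        """Return only the fields this role may share with an agent."""
        visible = self.allowed_fields.get(role, set())
        return {k: v for k, v in record.items() if k in visible}

policy = Policy(allowed_fields={
    "claims_agent": {"claim_id", "policy_type", "claim_amount"},
    "hr_agent": {"employee_id", "department"},
})

record = {
    "claim_id": "C-1042",
    "policy_type": "motor",
    "claim_amount": 1800,
    "claimant_name": "J. Smith",      # personal data: never shared
    "claimant_address": "redacted",   # personal data: never shared
}

shared = policy.redact("claims_agent", record)
assert "claimant_name" not in shared
```

The design point is that the agent never sees the raw record at all; an unknown role gets nothing, which is the safe default a board should expect.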
Decisions
The third and most complex part of the framework is decisions: specifically, how AI is making them inside your organisation, and whether anyone is actually accountable for the outcomes.
Three distinct problems need to be on every board’s radar.
The first is pre-existing model bias in decision making. This isn’t about what data your enterprise feeds in; it’s baked into the foundation models themselves from their original internet-scale training. Bloomberg published a study showing GPT consistently ranking equally qualified candidates unequally across demographic groups, with Hispanic and Asian women ranked most favourably and Asian and white men ranked least favourably for the same HR role. This isn’t a bug. It’s how these models work. And this kind of decision bias arrives in your organisation the moment you deploy them, before a single line of your own data touches the system.
The second, related problem is semantic leakage, and it runs deep. Strip out all identifiable demographic data before inputting a file and the LLM will still reconstruct proxies from vocabulary, geography and names. When I fed the profile of my co-founder Huiting into even the latest Claude Opus model, and then repeated the test with my own Chinese name, Ziruo, without specifying gender in either case, both of us were defaulted to “she” and the model subtly adjusted its assessment of our seniority and technical credibility accordingly. No demographic data was provided. The bias was introduced at the model level, invisibly. Now imagine that happening inside your insurance claims or credit approval process at scale.
The third is downstream feedback loops. If AI outputs feed back into your knowledge base (and increasingly they do), biased outputs compound across cycles. I’ve written about this as knowledge collapse; it is one of the least discussed but most dangerous long-term risks in enterprise AI.
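To see why compounding matters, consider a toy model (the numbers here are purely illustrative, not measurements): if each write-back cycle amplifies the existing skew by a small constant factor, the skew grows exponentially rather than linearly.

```python
# Toy illustration (not a measurement) of how a small per-cycle bias
# compounds once model outputs are written back into the knowledge base.
def compound_bias(initial_skew: float, per_cycle_drift: float, cycles: int) -> float:
    """Fraction of the knowledge base skewed after N write-back cycles."""
    skew = initial_skew
    for _ in range(cycles):
        # Each cycle, outputs inherit the current skew plus fresh drift,
        # then rejoin the base that future outputs draw on.
        skew = min(1.0, skew * (1 + per_cycle_drift))
    return skew

# A 2% drift per cycle roughly doubles a 5% starting skew in ~35 cycles.
skew_after = compound_bias(0.05, 0.02, 35)   # ≈ 0.1
```

A drift nobody would notice in any single cycle becomes a structural distortion over a year of daily runs; that is the shape of knowledge collapse.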
These are by no means the only problems lurking in AI-assisted decisions, but they are the ones I’ve observed most directly. A few guardrails are worth implementing now. Use the right tool: not everything needs an LLM, so keep arithmetic and rules-based decisions deterministic. Avoid using LLMs to make decisions where identity information is involved. Structure your outputs: “does this contain exclusion trigger X, yes or no” is auditable; “should we pay this claim” is not. And mandate human sign-off on anything material. The AI analyses. The human decides. That is both the legally defensible and the ethically correct position.
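Those guardrails can be sketched concretely. In the Python below, the LLM call is mocked with a keyword match so the example runs offline, and every name is an illustrative assumption; the point is the shape of the pipeline: narrow yes/no questions, deterministic thresholds, and a human sign-off flag on anything material.

```python
# Sketch of the "structure your outputs" guardrail: the model answers
# only narrow yes/no questions; deterministic rules combine them; a
# human owns the final material decision.

def llm_check_exclusion(claim_text: str, trigger: str) -> bool:
    """Placeholder for an LLM call constrained to a yes/no answer.
    Mocked with a keyword match so this sketch runs offline."""
    return trigger in claim_text.lower()

def triage_claim(claim_text: str, amount: float) -> dict:
    # Auditable yes/no flags; never a free-form "should we pay?".
    flags = {
        "flood_exclusion": llm_check_exclusion(claim_text, "flood"),
        "pre_existing": llm_check_exclusion(claim_text, "pre-existing"),
    }
    # Arithmetic stays deterministic: no LLM in the threshold check.
    material = amount > 10_000
    return {
        "flags": flags,
        "recommendation": "review" if any(flags.values()) else "proceed",
        "requires_human_signoff": material or any(flags.values()),
    }

result = triage_claim("Water damage from flood in basement", 15_000)
```

Every flag in the output can be logged, audited and challenged individually, which is exactly what a regulator, or your own board, will ask for.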
Regulations will evolve. Shareholder expectations will increase. Consumer trust, once lost, is almost impossible to rebuild. The boards hoping AI won’t change their world are already failing their fiduciary duty. Those waiting for regulation to catch up before acting aren’t being prudent; they’re being negligent in their duty of care. Start building the muscle now.