Can AI replace the boardroom?

AI is quietly taking over strategic decisions, and that’s a mistake. If strategy is going to stay human, it has to stay messy, says Paul Armstrong
Automation used to mean invoices, scheduling and sorting PDFs. Now it means suggesting layoffs, flagging underperforming units, reshaping go-to-market strategy, and proposing M&A targets. AI isn’t just in the back office anymore; it’s creeping into boardrooms around the world, sometimes quietly, sometimes invisibly, starting to do the one thing it wasn’t supposed to: think.
Generative AI and large language models (LLMs) have moved beyond tactical support and are being trained on financials, market signals, competitive intelligence, ESG performance and internal strategy decks. Tools like Salesforce Einstein, Palantir Foundry, and Microsoft Copilot are already being embedded into executive workflows, surfacing recommendations that look less like analytics and more like direction.
McKinsey’s internal AI platform, Lilli, is an early example. Consultants use it to pull insights from tens of thousands of case studies, internal documents and industry data points. It doesn’t just summarise; it proposes answers. In client sessions, Lilli can now offer strategy recommendations, reducing the time required for discovery phases and even influencing the scope of consulting engagements.
At everyone’s favourite cardboard abuser, Amazon, AI plays a role in shaping logistics and infrastructure investment decisions. Palantir has claimed its models helped advise UK health officials on vaccine distribution strategy. JPMorgan is reportedly experimenting with AI tools to scan analyst calls, forecast market sentiment, and model risk exposure. None of this is theoretical. AI is already advising the people who advise.
A chatbot CEO?
No one’s handing over the CEO job to a chatbot anytime soon, but the edges are eroding. As AI’s predictive capabilities improve, it’s becoming easier to ask whether the board needs to be replaced at all, or simply reduced. Strategic planning is increasingly being fed by live data and model-generated forecasts. That changes the psychology of leadership. When the system says, “based on all available information, this is the optimal move,” disagreeing with it starts to feel risky. If a human executive backs their own intuition and it goes badly, that’s a career-ending miss. If they follow the model and it goes badly, it’s easier to say the data was flawed.
The shift from decision-making to decision-validation is subtle but powerful. Boardrooms don’t need to be replaced wholesale; they just need to be nudged into over-relying on AI outputs. Over time, strategy starts sounding more like compliance: rational, defensible, bland. Bias doesn’t disappear; it tends to move upstream. The datasets these models train on often reflect historical decisions, many of which were shaped by the same biases and issues companies claim to be avoiding. AI might suggest promoting a “high performer” without acknowledging how biased the performance metrics were in the first place. The output may be a recommendation to cut a region or product line because historical data undervalued long-term growth or failed to measure brand equity. In short, the algorithms need oversight, now and likely in the future too, though perhaps not for as long as we might think, or be comfortable admitting.
Trust in AI is rising, but understanding? Less so. Many executives now operate with tools that offer insights they cannot fully interrogate. Ask how the model got to its conclusion, and often the response is wrapped in technical abstraction. If the tool gives a compelling recommendation, it’s tempting to nod along. If it’s presented in a boardroom deck, complete with visualisations, benchmarks, and NLP-generated summaries, it becomes even harder to push back.
Accountability gets murky. If the board makes a bad call, who owns it? The chair? The CFO who signed off? The platform that synthesised the risk analysis? As models start shaping real outcomes, the chain of command starts to tangle. Regulated industries will feel this pressure first. Enterprise software companies are already pitching AI as a strategic partner, not just a dashboard. Oracle’s AI modules propose cost-saving initiatives. SAP’s systems recommend supply chain reconfigurations. These aren’t insights for your team. They’re decisions presented as options, increasingly frictionless to implement. The human oversight that was supposed to be the buffer is quietly being skipped in favour of speed.
How to respond
Boards need to respond with clarity. Visibility is the first step. Executives must know where AI-generated recommendations are being used, how often, and in what domains. Lines must be drawn between insight, advice and action. Any recommendation made by a model should come with a human-in-the-loop checkpoint. Not to slow things down, but to ensure accountability and maintain strategic diversity.
Redundancy is the next move. If everyone uses the same models trained on the same public data, strategy becomes commoditised. Competitive edge shifts to those who fine-tune models with proprietary insight, nuance and local context. Companies that treat AI as a strategic intern – fast, smart, tireless but needing oversight – will outperform those that treat it as a board replacement. Culture has to shift too. Executives need to develop AI literacy, not to become engineers, but to understand what a system can and can’t do, and where its blind spots might be. Knowing when to challenge the model becomes as important as knowing when to trust it.
No one really wants to admit they’re handing over strategic thinking to software. Many already are, slowly, incrementally, unintentionally. The slide from insight to action is smooth, especially when the interface is polished, the recommendation is confident, and the human brain is tired.
What used to be strategy is starting to look a lot like optimisation. The longer AI stays in the room, the more human decisions start sounding like they were made by committee – and the committee is a model trained on last year’s data. Will the next major company moves be driven by maverick management, or by an LLM designed to play to the middle? Who decides the level of risk the model gets to take?
Boards don’t need to be replaced to become irrelevant. Following the script is enough. Strategy, if it’s going to stay human, needs to stay messy. Otherwise, it becomes another product: polished, logical, risk-managed and instantly forgettable.
Paul Armstrong is founder of TBD Group (including TBD+), and author of Disruptive Technologies