Should you build an AI clone of your CEO? No!
As Meta launches an AI avatar of Mark Zuckerberg, Paul Armstrong explains why cloning your boss means cloning mistakes
Mark Zuckerberg is building an AI version of himself so employees can interact with “him” at scale. What a delight for everyone. The pitch sounds efficient: tens of thousands of staff, direct access to the leadership voice, fewer bottlenecks. A neat solution to the pesky problem of one executive not being able to give personal time to 50,000 staff. The only trouble is that this isn’t leadership at scale; it’s judgement being handed to a system that can’t be held responsible for what it says or does.
Judgement is being outsourced, not scaled
Meta’s internal clone of Mark Zuckerberg is trained on his tone, views and (god help us) past decisions, designed to answer questions and guide employees without requiring his time. If that sounds dystopian, it gets worse. Framed one way, that looks like access. Framed properly, that looks like thousands of new decision points being created without any corresponding increase in accountability.
Claude, ChatGPT and co generate outputs based on probability, not understanding, and their corporate cousins are no different. Confident answers can still be wrong, fabricated or inconsistent depending on prompt and context, a limitation explored in research on hallucinations in large language models.
Vendors often acknowledge this behaviour as an unresolved constraint rather than a solved problem, and a whole new industry of GEO (generative engine optimisation) experts is trying to sell companies on the idea that the black boxes can be influenced, when really everyone is just scrabbling for Altman and co to throw them a sign they’re not going to be clobbered next.
Businesses are already deploying clones and proxy systems far beyond executive chat. Hiring teams are already using awful AI avatar interviewers to screen candidates. HR departments are generating performance reviews and internal feedback using models trained on historical data.
Cloning expertise means cloning mistakes too
Customer service bots negotiate refunds, explain policies and make commitments that bind the company. Each of those systems claims to remove workload, but really they just replace human judgement with outputs that meet an acceptable level of probability.
Think this isn’t going to land you in court? Ask Air Canada, whose chatbot fabricated a refund policy that did not exist and which the airline was then legally forced to honour. A single hallucination turned into a legal obligation. Scale that across thousands of interactions a day and the problem stops looking like a bug and starts looking like structural exposure mixed with a leadership failure to act.
Internal use cases carry the same risk, just less visibly. An AI clone answering employee questions about strategy won’t produce a single consistent view right now. Slight variations in phrasing, context or prompt will generate different answers that all sound authoritative. How employees interpret that information will also vary with their training and their grasp of internal programmes and policies. Alignment doesn’t improve; it fragments. Employees leave those interactions believing they have direction when in reality they have received one of many possible interpretations and no direct human contact, which is, over the long term, demotivating and likely psychologically damaging.
Executives adopting these systems assume more access to the leadership voice improves clarity, and increased output feels like progress. Underneath, a different pattern is emerging: every additional AI interaction increases the number of decisions made inside the organisation, so total error rates rise sharply.
The problem of a synthetic executive voice
Hiring is showing the damage earlier than most functions because outcomes are visible and measurable. AI screening systems routinely filter candidates based on proxies rather than capability, and the results go viral. Strong candidates who don’t match the training-data profile get rejected before a human even sees them, if one ever does. Weak signals become hiring criteria because they are easy to model. Organisations then wonder why performance drops despite more “efficient” processes.
Now add cloned leadership on top of that stack and the problem compounds. A synthetic executive voice reinforces the same patterns at scale, creating a loop where hiring, feedback and internal communication all reflect the biases of the underlying model rather than the intent of the leadership team. Culture stops being built and starts being output.
External signals already point to how quickly these systems drift. Meta has faced scrutiny over AI personas that blur simulation and reality, where interaction is prioritised over accuracy or safety. If users struggle to distinguish between a system and a person, employees interacting with an AI version of the CEO will struggle too. Worse still, that hallowed discretionary effort is about to fly out of the window, because who goes the extra mile for a boss who is a clone their employees feel nothing for? The result is not just internal confusion but inconsistent decisions, duplicated work and a slow erosion of trust.
As productivity rises, accountability is disappearing
The usual incentives are at work here: AI = productivity, or at least that’s what many are shilling without really asking what should be altered, not just what can be AI-ified. Boards see activity and assume improvement. Few organisations track how often AI-generated outputs conflict with each other or with stated strategy. Boards need to be careful: a lot is being assumed here that they should take control of.
A deeper issue sits underneath all of this: leadership isn’t just communication, it’s constraint, motivation and awareness. Decisions carry weight because someone owns them, stands behind them and accepts the consequences. Cloning a voice removes the constraint while keeping an appearance of authority. Thousands of decisions get made faster, and fewer decisions get owned by anyone. A recipe for disaster, as when a deepfake persuaded an employee to hand over $24m.
One incorrect answer usually won’t break a company, but thousands of acceptable-looking answers that drift away from reality over time will. AI-cloned employees are risky. Hiring pipelines degrade, and word gets around that you are a nightmare to deal with. Not sure? Check Glassdoor and Reddit, all of which the LLMs are lapping up. Ultimately, ask yourself whether you are building loyalty inside and outside the company, because while none of this looks catastrophic in isolation, it will compound faster than you think.
Companies must draw boundaries with AI
Companies aren’t building tools that help people decide; they’re building systems that decide in their place. That difference may sound subtle and easy to deal with, but the consequences of getting those processes wrong aren’t.
Practical steps start with drawing a hard line on what AI is allowed to do. Gathering information, summarising documents, surfacing options are all fine. But the moment an output touches a person’s job, the company’s money or its direction, a human owns it before it goes anywhere. Not in theory. With their name on it.
Every decision needs a person attached, not a system. If nobody can say who is responsible for an outcome, the process is already broken before anything goes wrong.
The consistency problem is the one most organisations aren’t tracking. Focus less on whether a single answer looks right and more on whether the same question gets the same answer when someone phrases it differently next week. Drift here is where problems start and where trust quietly disappears.
Leadership teams should treat synthetic voice as a liability surface, not an efficiency layer. Every place where an AI speaks as the company or as an executive needs clear boundaries, defined scope and oversight that matches the scale of use. Most organisations are nowhere near that right now.
Cloning executives doesn’t scale leadership. Cloning executives removes the limits that make leadership meaningful. Organisations moving fast with no one responsible will break things and pay for them later. When error stops being visible because it is distributed throughout a system, it becomes more expensive. Big tech might absorb that cost, but most businesses can’t.
Paul Armstrong is founder of emerging tech advisory, TBD Group, and its intelligence community TBD+