AI’s biggest problem is that it is trained to ‘please you’, warns tech chief
The biggest risk facing AI may be that it becomes too good at telling users what they want to hear.
Anthony Goonetilleke, chief technology officer at software firm Amdocs, told City AM that businesses rushing to deploy AI are vastly underestimating a more subtle and potentially more dangerous problem emerging inside large language models (LLMs).
“There’s another bucket I am more concerned about,” he said. “The bucket of AI wanting to please you.”
“I asked an AI workflow how it gave me an answer when it didn’t actually have the information,” he added. “And it replied: ‘I gave you the answer you probably would want to hear.’ That’s how it’s trained.”
His words come as companies globally pour billions into generative AI systems amid growing pressure to prove the technology can deliver meaningful returns rather than simply automate low-level tasks.
While much of the public conversation around AI risk has centred on hallucinations, C-suites are increasingly confronting a more complicated issue: how AI models behave once integrated into the business.
Research from McKinsey published last month found almost nine in 10 companies now use AI in at least one business function, yet 94 per cent report they are still not seeing “significant” value from those investments.
The consultancy warned many firms remain stuck in an early “productivity phase” where AI speeds up isolated tasks but fails to fundamentally improve organisational performance or profitability.
This disconnect is becoming especially acute as businesses attempt to move AI beyond open-source chatbots and into critical systems.
“There’s definitely value there,” Goonetilleke said. “Where everyone is struggling is converting that into direct ROI.”
The problem, he argues, is that many AI systems are fundamentally optimised for user satisfaction rather than deterministic accuracy.
In consumer settings, that often manifests as conversational fluency or persuasive responses.
But inside enterprises, the implications become more serious, particularly in industries handling sensitive financial or personal data.
“It’s one thing to get ChatGPT to write a LinkedIn post,” he said. “It’s another thing to automate mission-critical workflows.”
Business caution rises around AI adoption
After two years of aggressive experimentation, businesses are increasingly shifting focus from novelty to measurable outcomes.
At Mobile World Congress (MWC) earlier this year, telecoms and enterprise software firms emphasised “agentic AI” systems capable of executing tasks autonomously across workflows.
But executives also stressed the need for safeguards and human supervision.
Amdocs, which provides software and services to many of the world’s largest telecom operators, has spent recent months unveiling AI partnerships with Microsoft, Nvidia, AWS and Google Cloud while rolling out its own ‘agentic operating system’.
The company focuses on combining generative AI with enterprise systems and governance layers, instead of just replacing existing infrastructure outright.
“You still need business rules, governance and policies,” Goonetilleke said. “You can’t just say: ‘Your bill was high this month? Let me give it to you for free.’”
The concerns extend beyond reliability into broader questions of bias and privacy in how data is used.
Goonetilleke warned that AI systems trained on historical datasets can unintentionally perpetuate social and economic biases.
“It’s not that someone trained the model to be bad,” he said. “But if the data reflects inequality, the system can perpetuate it.”
The issue has become increasingly prominent as governments struggle to keep pace with the speed of AI development.
The EU has moved ahead with its AI Act, despite recent changes, while the UK has largely favoured a lighter-touch regulatory framework focused on sector-specific oversight.
But many executives privately acknowledge regulation remains fragmented and behind the curve.
“On one hand, regulators are far behind,” Goonetilleke said. “On the other hand, corporations need to do more.”
As a result, he believes governance will increasingly require hybrid cooperation between governments and major technology firms.
“The world requires a newer model,” he said. “A consortium of public and private sectors.”