Law isn’t like other professions; misusing AI may put lawyers in jail

The legal sector has been confronted with the reality that when AI goes wrong, lawyers will be liable for it, and not just financially or reputationally: misuse could also cost them their freedom.
Last Friday two High Court judges explicitly warned lawyers they may face severe criminal sanctions if they misuse or fail to verify AI-generated legal material provided to the court.
The court had to review two cases where lawyers relied on citations and quotations generated by AI tools—some of which were entirely fictitious.
In one of the cases, a lawyer submitted a witness statement with 45 legal citations, of which 18 were deemed completely fictional. The lawyer in question said they had relied on their client, who used AI tools for legal research, and they had failed to verify the citations.
While the court did not pursue contempt or criminal action against the lawyers in question, it did impose serious professional repercussions on them, including referring them to the legal regulator.
The senior judges took the opportunity to tell lawyers, “The court has a range of powers to ensure that lawyers comply with their duties to the court”, including referring matters to regulators, imposing wasted costs, and contempt or even criminal proceedings.
“This High Court judgment lays bare the dangers of using AI in legal work. We need to ensure the risks are properly addressed,” stated Ian Jeffery, CEO of the Law Society.
It is ‘lazy lawyering’
The judgment made clear that the use of AI does not diminish or dilute a legal professional's obligations of honesty, integrity, and competence.
As Daniel Astaire, managing partner at Grosvenor Law, said: “Just as one would always check work product from a junior, so too must AI work be checked if used as a reference source, and not be relied upon as an end product. That is lazy lawyering.”
“Before AI came along, lawyers were responsible for the end product they produced. Now that AI is here, they still are,” added Natalia Chumak, partner at Signature Litigation.
Like most businesses, the legal industry has faced significant pressure to adopt AI, with announcement after announcement of new tech tools being implemented.
Driven by the desire for cost and time savings, mixed with a fear of being left behind, the industry has made AI its main talking point this past year. It was even in the spotlight at the SXSW Conference and London Tech Week.
AI does have its benefits. As Colin Witcher, barrister at Church Court Chambers, put it: “AI may, when properly used, have a purpose in litigation, such as case management, key word searches, and the mirroring of documents.”
However, this judgment is a stark reminder that lawyers can’t hide behind an ‘it was the AI tool’s fault, not mine’ defence.
“If you’re signing off on hallucinated case law without checking your work, the problem isn’t AI. If AI is the scalpel, it’s the surgeon we should be scrutinising,” explained Richard Cannon, partner at Stokoe Partnership Solicitors.
The rush to implement this new technology is revealing numerous issues, as evidenced by a recent incident. Just a few days ago, Sky’s deputy political editor, Sam Coates, revealed that ChatGPT had lied to him.
However, lawyers carry more obligations than most professionals, including owing duties to the court. “For lawyers, the danger is not just embarrassment in open court; it’s regulatory censure, reputational damage, and the erosion of trust in the legal profession itself,” Cannon added.
But lawyers can’t say they haven’t been warned, as the judiciary has now spelt it out to them: if you sign off on ‘hallucinated’ case law without checking, you will be in big trouble.
Eyes on the Law is a weekly column by Maria Ward-Brennan focused on the legal sector.