
Managing AI hallucinations

Jan Van Hoecke at iManage describes how AI hallucination can go from crisis to manageable problem

It will surprise absolutely no one to learn that AI hallucinates. This unfortunate aspect of AI has already produced some real whoppers – look no further than the New York law firm that used AI to draft a legal filing riddled with citations to cases that did not exist, or the 2025 report Deloitte prepared for the Australian government that conjured expert citations out of thin air. In fact, an international database now tracks AI hallucinations in legal documents and, as of 2025, has identified 402 cases in US legal decisions.

In 2026, the AI hallucination crisis will reach a critical juncture as, on the one hand, large language model (LLM) technology exits the pilot phase and moves into key legal processes in many organisations, and on the other hand, hallucinations are proving to be a fundamental “feature” of LLM technology.

Fortunately, this state of coexistence doesn’t mean throwing your hands up in resignation and giving up on AI. It means taking matters into your own hands, making some proactive moves, and shifting AI hallucination from a ‘crisis’ to a problem that can be safely and effectively managed.

The problem runs deep

It has now formally been shown that hallucinations are a fundamental aspect of LLM-powered AI. This is not helped by the fact that, by default, the AI comes across as confident and knowledgeable, even when it’s presenting faulty answers. If you correct it, it will sheepishly admit, “You’re right! That was a mistake. Good catch!”

It’s not just “innocent” mistakes that organisations need to be concerned about, either. As LLMs get smarter and advance on the path to artificial general intelligence, controlling them becomes harder for both the model builders and the users. The AI tends to seek the path of least resistance, which means it will often cheat, pretending to run through all the necessary steps to give you an accurate answer when it is actually taking shortcuts.

Given how deep these problems run in current foundational LLM technology, there is no quick fix. Major model builders acknowledge that new technology is needed – but that a technological breakthrough is years away. So, what to do in the meantime?

Pragmatic safeguards keep risk in check

There are several practical risk mitigation strategies that organisations can adopt to make AI hallucinations a manageable problem.

Some approaches are decidedly “old school” and process-driven, such as always keeping a human (or even two humans) in the loop to double- or triple-check any outputs the AI has produced to make sure that they are accurate.

Another risk mitigation strategy is simply deciding which aspects of your workflow you’ll allow AI to assist with, and which aspects should be an “AI-free” zone. The idea here is to triage your workflows from “low stakes” to “high stakes” to get a sense of how much risk is involved if AI is brought in. Using AI to draft a company memo or email, for example, is potentially much less damaging than using AI to draft a vendor contract – there’s much less downside if something goes wrong.
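
To make the idea concrete, here is a minimal sketch of what such triage could look like if it were encoded as a simple policy table. The workflow names, risk tiers and policy labels are illustrative assumptions only, not a description of any particular product.

```python
# Hypothetical sketch of workflow triage: low-stakes tasks may use AI,
# medium-stakes tasks require a human reviewer, and high-stakes work
# stays in an "AI-free" zone. All names below are illustrative.

RISK_TIERS = {
    "internal_memo": "low",
    "marketing_email": "low",
    "client_summary": "medium",
    "vendor_contract": "high",
    "court_filing": "high",
}

AI_POLICY = {
    "low": "ai_assist_allowed",
    "medium": "ai_assist_with_human_review",
    "high": "ai_free_zone",
}


def route(workflow: str) -> str:
    """Return the AI policy for a workflow; unknown workflows default to high stakes."""
    return AI_POLICY[RISK_TIERS.get(workflow, "high")]


if __name__ == "__main__":
    for task in ("internal_memo", "vendor_contract", "new_workflow"):
        print(task, "->", route(task))
```

Defaulting any unknown workflow to the highest-risk tier keeps the policy conservative by design.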

To further mitigate risk, organisations can even look to insurance policies that cover AI missteps such as hallucinations. Of course, an ounce of prevention is worth a pound of cure – so one of the best steps organisations can take is to reduce the potential for hallucinations to appear in the first place, by grounding their AI in vetted, high-quality data sources.

Trusted data boosts AI accuracy

To fully benefit from this approach, enterprises will need to prioritise data quality and their information architecture to ensure their AI-powered solutions are grounded in accurate, up-to-date, and reliable information. Simply put, you don’t want AI pulling from random information sources when it’s serving up answers to various members of the organisation.

Enterprises should begin by identifying trusted data sets, with something like a document management system serving as a central knowledge repository. Assigning someone to regularly curate this information ensures AI uses high-quality content. Simple process adjustments – for example, marking when contracts are “final” – help direct valuable documents to accessible locations for AI, preventing important content from being overlooked.
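
As a rough illustration of that grounding, the sketch below restricts an assistant to answering from curated, “final”-marked documents and declines to answer when no vetted source is found. The Document model and the search_curated_repository and ask_llm helpers are hypothetical placeholders, not a real document management or LLM API.

```python
# Minimal sketch of grounding answers in a curated repository.
# The Document model and both helper functions are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str
    status: str  # e.g. "draft" or "final"


def search_curated_repository(question: str, documents: list[Document]) -> list[Document]:
    """Toy retrieval: return only vetted 'final' documents sharing a term with the question."""
    terms = set(question.lower().split())
    return [
        doc for doc in documents
        if doc.status == "final" and terms & set(doc.text.lower().split())
    ]


def ask_llm(prompt: str) -> str:
    """Stand-in for whatever LLM the organisation actually calls."""
    return "[grounded answer based only on the supplied sources]"


def answer_with_grounding(question: str, documents: list[Document]) -> str:
    sources = search_curated_repository(question, documents)
    if not sources:
        # Refusing to answer without a vetted source is one way to curb hallucination.
        return "No vetted source found - escalate to a human."
    context = "\n\n".join(f"[{doc.title}]\n{doc.text}" for doc in sources)
    return ask_llm(
        "Answer using ONLY the sources below. If they do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The important part is the constraint rather than the toy retrieval: the model only ever sees content that a human has already vetted as final.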

Note that the burden of getting data in order doesn’t have to fall entirely on the humans. Increasingly, AI can parse unstructured data, analyse it, tag it, and even automatically judge the importance and value of documents – turning years of neglected files into organised categories such as employment agreements, share purchase agreements, or customer leases that AI can draw upon for its responses.
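
As a toy stand-in for that kind of automated tagging, the sketch below sorts untagged files into the categories mentioned above. A real system would use an LLM or a trained classifier; the keyword rules here are purely illustrative.

```python
# Toy stand-in for AI-driven document tagging. A production system would use
# an LLM or trained classifier; these keyword rules are purely illustrative.

CATEGORY_KEYWORDS = {
    "employment_agreement": ["employment", "employee", "salary", "notice period"],
    "share_purchase_agreement": ["share purchase", "shareholder", "completion"],
    "customer_lease": ["lease", "tenant", "premises", "rent"],
}


def tag_document(text: str) -> str:
    """Assign the category whose keywords appear most often, else flag for human review."""
    lowered = text.lower()
    scores = {
        category: sum(lowered.count(keyword) for keyword in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    best_category, best_score = max(scores.items(), key=lambda item: item[1])
    return best_category if best_score > 0 else "needs_human_review"


if __name__ == "__main__":
    print(tag_document("This lease sets out the rent payable by the tenant for the premises."))
    # prints: customer_lease
```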

The bottom line? Structured data is essential for effective AI, yielding higher quality and fewer hallucinations – and AI itself provides a way to achieve this at scale. It feels entirely fitting, if a bit ironic, that AI can actually be used to mitigate some of the very risk that it creates in its present form.

The path from crisis to confidence

AI hallucinations may never fully disappear, but they don’t have to remain a crisis. By combining human oversight, disciplined workflows, insurance safeguards, and structured data, organisations can turn a potential liability into a manageable risk. In this way, organisations can confidently unlock the promise of AI and the better business outcomes it can generate, while protecting themselves against the thorny downside that hallucinations present.

Jan Van Hoecke is VP AI Services at iManage

Main image courtesy of iStockPhoto.com and patpitchaya
