
The all-importance of explainability in AI

Jeremy Swinfen Green explains why explainability in AI is now a leadership issue

 


Artificial intelligence is already embedded in many everyday business decisions: which customers receive credit; which prices are offered; which CVs are shortlisted; and which products are recommended. Often, these systems operate quietly in the background, producing recommendations and other outputs that humans can accept or act upon, without any great risk to the organisation.

 

Increasingly, though, AI is being applied to situations that may be business-critical or even life-or-death. It can be used to develop new drugs, provide proximity detection for heavy industrial machinery, highlight when an aircraft engine needs maintenance, or diagnose diseases. In this way, AI is becoming more influential and more important.

 

As this happens, a new question is rising to the top of boardroom discussions: when an AI system produces an answer, how well do we understand it, and how much should we trust it? This question sits at the heart of explainability, and it is fast becoming a leadership responsibility rather than a purely technical one.

 

Explainability is not all or nothing

 

Explainability is often spoken about as if it were a single feature that a system either has or does not have. In practice, it is far more useful to think of it as a spectrum.

 

At the most basic level lies transparency. This simply means knowing that AI is involved in a system. Many organisations fail even at this first step. Decisions arrive in dashboards or reports without any clear indication that they are machine‑generated. Visibility matters because hidden automation creates surprise and distrust, especially when outcomes are unexpected or controversial.

 

The next step is traceability. This involves being able to identify which system produced an output, when it was generated and what data was used. Traceability supports accountability: if something goes wrong, leaders need to be able to establish which system, which data and which people were involved, and who is accountable for the failure.
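To make this concrete, here is a minimal sketch in Python of what a traceability record might capture. The field names and the hashing approach are illustrative assumptions rather than an established standard; a real system would add durable storage and access controls.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PredictionRecord:
    """One audit-trail entry per model output (fields are illustrative)."""
    model_id: str       # which system produced the output
    model_version: str  # which build or weights were live at the time
    timestamp: str      # when the output was generated
    input_hash: str     # fingerprint of the data that was used
    output: str         # the decision or recommendation itself

def log_prediction(model_id: str, model_version: str,
                   features: dict, output: str) -> PredictionRecord:
    # Hash the input so the exact data can be matched later without
    # storing sensitive values in the log itself.
    digest = hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()
    ).hexdigest()
    return PredictionRecord(
        model_id=model_id,
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_hash=digest,
        output=output,
    )

record = log_prediction("credit-scoring", "2.4.1",
                        {"income": 48000, "months_at_address": 30},
                        "declined")
print(asdict(record))
```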

 

Beyond this sits interpretability. This is where the system can give a high‑level explanation of which factors mattered most in its decision. For example, an AI model might indicate that recent customer behaviour was more influential than long‑term history. These explanations are often simplified, but they help professionals sense‑check results rather than accepting them blindly.
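As an illustration, the sketch below uses permutation importance, one common interpretability technique, on synthetic data: each feature is shuffled in turn to see how much the model's accuracy suffers without it. The feature names echo the example above and are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic data: column 0 stands in for recent customer behaviour,
# column 1 for long-term history. The label depends mostly on the
# first column, by construction.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure
# how much the model's accuracy drops as a result.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["recent_behaviour", "long_term_history"],
                       result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```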

 

At the far end of the spectrum is understandability. This is the ability to explain, in human terms, how different inputs were weighted and combined to produce a specific outcome. It may include showing how a small change in circumstances would have led to a different decision. This level of explanation provides the strongest foundation for trust. It is also the hardest to achieve.
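For simple models, this kind of counterfactual can sometimes be read off directly. The sketch below, on made-up credit data, solves a linear model's decision boundary for the income that would have flipped a declined application; real systems are rarely this tidy, so treat it as an illustration of the idea rather than a recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up credit data: columns are [income, existing debt], both in £k.
X = np.array([[20, 15], [55, 5], [30, 12], [70, 3],
              [25, 14], [60, 4], [35, 10], [80, 2]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = approved

model = LogisticRegression().fit(X, y)

applicant = np.array([[32.0, 11.0]])
decision = model.predict(applicant)[0]
print("Decision:", "approved" if decision else "declined")

# With a linear model the counterfactual can be read straight off the
# decision boundary: solve w_income*income + w_debt*debt + b = 0 for
# the income that would flip the outcome, holding debt fixed.
w_income, w_debt = model.coef_[0]
b = model.intercept_[0]
income_needed = -(w_debt * applicant[0, 1] + b) / w_income
print(f"Approval threshold at this debt level: about £{income_needed:.0f}k income")
```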

 

Crucially, not every system needs to reach the same point on this spectrum. What matters is matching the level of explainability to the level of risk and impact involved.

 

Why explainability matters

 

Building trust and fairness

 

Explainability is at the heart of an ethical approach to AI. Decisions influenced by AI increasingly affect people in important ways, from the credit they receive to the jobs they are shortlisted for. When customers or employees are subjected to decisions that are not explained, those decisions may feel arbitrary. And if those decisions are in some way unwelcome, then trust can be badly damaged. This is not merely a technical concern; it is about legitimacy and faith in organisational leadership.

 

Explainability supports openness and fairness. It allows people to understand how decisions are made and, where appropriate, to question or challenge them. In an era when public confidence in institutions is fragile, opaque decision‑making systems can undermine credibility and damage reputations.

 

 

Knowing when to trust

 

It is not sufficient simply to build trust in AI-powered systems. The goal should be the right amount of trust: confidence that is proportionate to how reliable a system actually is.

 

Professionals need to know when an AI recommendation is reliable, when it is uncertain, and when (and why) human judgment should override it. Without explanation, AI outputs can appear authoritative simply because they are precise and neatly presented. This can lead to over‑reliance, where flawed outputs are followed without question.

 

Explainability helps the people who use AI systems to understand what a system is good at, where it is fragile and how confident they should be in its conclusions.

 

Managing uncertainty and risk

 

Any business activity is likely to carry some risk. Activities that involve AI may well carry a higher, or perhaps simply a different, degree of risk. Often, this risk stems from the reduced control that humans may have over an AI-powered system.

 

From a governance perspective, explainability is an essential part of operational risk management. When AI systems behave unexpectedly, it’s important to be able to understand why. Was the data flawed? Has the operating environment changed? Is the system being used outside the purpose for which it was designed?

 

Without explainability, errors and biases are harder to detect and address. Problems can persist unnoticed until they escalate into operational, regulatory or reputational crises. Explainability equips organisations to manage risk proactively rather than reactively.

 

Why explainability is hard to achieve

 

Explainability is challenging precisely because AI systems have become so powerful. Many modern systems rely on techniques that identify subtle patterns across vast amounts of data. These patterns do not always align neatly with human reasoning. Even the engineers who build such systems may struggle to fully explain why a particular outcome occurred in a specific case.

 

To make things more complicated, AI systems that use machine learning (and most of today’s AI systems do) can be very hard to explain. When a model has been retrained or refined several times, it may be very difficult to reconstruct how a particular decision was reached: which data was used, what weights were applied to that data, and what calculations combined them.

 

Machine learning is complex, and there is frequently a trade-off between performance and explainability. Simpler models are easier to explain but may not perform as well in complex environments. More advanced models often deliver better results but operate in ways that are difficult to unpack in plain language. Leaders are therefore faced with choices that balance accuracy and speed against explainability.
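A small experiment makes the trade-off visible. In the sketch below, on a synthetic task, a three-level decision tree produces rules a person can read, while a gradient-boosted ensemble typically scores higher but offers no comparably readable rule set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# A synthetic classification task with some genuinely complex structure.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Simple model: a three-level decision tree whose rules can be
# printed and read by a person.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Complex model: hundreds of small trees combined; usually more
# accurate, but with no single human-readable rule set.
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print(f"Shallow tree accuracy:      {tree.score(X_te, y_te):.3f}")
print(f"Gradient boosting accuracy: {boost.score(X_te, y_te):.3f}")
print(export_text(tree, max_depth=2))  # the explainable side of the trade-off
```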

 

How organisations can improve explainability

 

The most important step towards improving explainability is to treat it as a design requirement, not an afterthought. Asking early questions about who needs to understand a system, what information they need and why they need it will shape better outcomes for all stakeholders later.

 

Organisations can also benefit from adopting explanation methods that are appropriate to their context. In some cases, broad explanations of system behaviour are sufficient. In others, individuals affected by a decision (such as a rejected loan application) may need clear, case‑specific reasons.

 

Human oversight remains critical. Systems should be designed to invite review and learning. Critically, they should allow stakeholders to challenge decisions, and those challenges, which reveal where people struggle to understand or trust outcomes, should be used to improve future explanations.

 

Where explainability is difficult to achieve, it may be appropriate for human judgment to play a greater role than it would otherwise. While there will undoubtedly be cases where AI does a better job of identifying threats or managing risks than humans can, this doesn’t mean that humans should be discounted. People affected by a decision may well feel more comfortable if a human is the final decision maker – or at least appears to be.

 

A leadership responsibility

 

Explainability is not about turning AI into something simple or fully predictable. Nor is it about eliminating uncertainty. It is about ensuring that systems are transparent enough for their purpose, risks and impact.

 

As AI takes on a larger role in shaping decisions, leaders will increasingly be judged not just on what outcomes their systems produce, but on whether those outcomes can be understood, justified and trusted. In that sense, explainability is becoming one of the defining leadership challenges of the AI age.

 
