Decision Intelligence is the natural next step for AI in regulated environments, argues Jamie Hutton at Quantexa

Boards have stopped debating whether to invest in AI. They have instead started asking why they cannot point to the technology’s value.
It is a question that is proving harder to answer than expected. After years of experimentation with copilots, text generation and conversational agents, most enterprises do have something to show in terms of AI activity. What they increasingly struggle to demonstrate is accountable impact. Which decisions have improved? How are those decisions being overseen? And what, precisely, is the return?
This accountability gap is not a technology problem. It is a decision architecture problem.
Deploying AI models has become routine. Connecting those models to governed, measurable business decisions has not. Gartner has put a figure on the cost of that gap: by 2027, 25% of ungoverned decisions made using large language models will result in financial or reputational harm. The drivers include embedded model bias, excessive automation reliance, and AI sycophancy, where systems produce responses designed to satisfy rather than to be correct.
The solution lies in a more deliberate framework for how decisions are made, monitored, and owned.
The scale problem no one talks about enough
Ask most technology leaders whether AI is a priority, and the answer is yes. Ask whether they have achieved value at scale, and the picture changes. More than 80% of UK CEOs identify AI as a top investment priority, yet fewer than 10% believe they have realised meaningful returns at scale. That gap is where the real story of enterprise AI is playing out.
The difficulty is structural. Pilots are designed to succeed; they run on clean data, involve small teams, and sidestep the messy governance questions that real production environments demand. When organisations try to move beyond proof-of-concept, they run into fragmented data infrastructure, unclear accountability structures, and inconsistent standards that no pilot ever had to contend with. Around a third of AI pilots are discontinued as a result, most commonly due to data quality failures or an inability to demonstrate business value.
Agentic AI, systems capable of acting autonomously, is intensifying the pressure. As these tools begin to initiate decisions rather than merely inform them, alignment with data protection frameworks such as the UK Data (Use and Access) Act 2025 becomes a genuine operational challenge. Fewer than one in five organisations currently have comprehensive AI governance frameworks in place, creating significant exposure.
The missing ingredient, consistently, is traceability. If an organisation cannot follow the path from input data to model output to final decision, and identify who is accountable at each stage, it has no foundation for governing AI at scale. And without that foundation, scale becomes a liability rather than an asset.
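What that traceability looks like in practice will vary, but the shape of it is simple enough to sketch. Below is a minimal, hypothetical decision record in Python; the field names and structure are illustrative assumptions rather than any particular product's schema. The point is only that each decision carries its inputs, model version, rationale, and accountable owner with it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration: one auditable record per AI-assisted decision.
# Field names are assumptions for this sketch, not a product schema.
@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str       # unique identifier for the business decision
    input_refs: list[str]  # pointers to the source data that was used
    model_version: str     # the exact model and version that produced the output
    model_output: str      # what the model said
    rationale: str         # human-readable explanation of the output
    decided_by: str        # accountable owner who approved or overrode it
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A record like this makes the path from data to decision reconstructable:
record = DecisionRecord(
    decision_id="case-4711",
    input_refs=["crm://customer/88231", "txn://2025-06-01/batch-17"],
    model_version="risk-scorer-2.3.1",
    model_output="flag: elevated risk",
    rationale="Transaction pattern matches a known layering typology",
    decided_by="analyst.j.smith",
)
```

Nothing here is technically exotic. The discipline lies in insisting that no decision exists without such a record attached to it.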
Speed is not the same as progress
Part of what makes this difficult is that AI genuinely is getting faster and more capable. In the UK, nearly 40% of businesses now use AI in some capacity, with adoption in data-intensive sectors exceeding 80%. Interfaces are more intuitive, responses more immediate, and the case for embedding AI into day-to-day operations is increasingly compelling.
But the very quality that makes modern AI attractive is also what makes governance urgent: speed.
When decisions are made faster, the window for human scrutiny narrows. Research consistently shows that frequent AI users are more likely to defer to the tool than to interrogate it, a pattern that becomes particularly consequential in high-stakes operational contexts. Across legal, financial, strategic, and operational functions, the risk is the same: that execution velocity quietly outpaces human judgment.
Regulatory frameworks are responding to exactly this concern. The EU AI Act now requires organisations to demonstrate how automated decisions are made, monitored, and controlled. The UK government’s AI Skills Boost initiative, targeting 10 million trained workers by 2030, reflects a parallel acknowledgement that human capability needs to grow alongside machine capability.
The implication is straightforward: AI effectiveness can no longer be measured by how fast systems respond. It must be measured by the quality and accountability of the decisions they support.
Making explainability operational
Explainability has spent too long being treated as nothing more than a compliance requirement. In practice, it is one of the most operationally valuable capabilities an AI system can have.
Consider what organisations gain when they can trace the full journey from raw data through to a final decision: regulatory defensibility, faster and more confident audits, higher trust among the people acting on AI outputs, and the feedback loops necessary for continuous improvement. These are not peripheral benefits; they are core to whether AI delivers durable value or simply generates activity.
The "mastery gap" captures the problem well. When professionals cannot understand why a system has produced a particular output, they tend to disregard it. Expertise and AI operate in parallel rather than in combination, and the value of the system erodes. When AI can explain its reasoning in terms that are meaningful to the people using it, that gap closes and measurable performance tends to follow.
This also has direct implications for how ROI is calculated. Volume metrics are not evidence of impact. A system that produces more flags may simply be producing more noise. Meaningful measurement asks different questions: are decisions being made with greater accuracy and confidence? Is operational risk decreasing? Are compliance issues being identified earlier? Are investigators or analysts closing cases faster?
ROI in AI should be denominated in decision outcomes, not system statistics.
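To make the contrast concrete, here is a rough Python sketch of what outcome-denominated measurement might look like. The function, its parameters, and the figures in the example are all invented for illustration.

```python
# Hypothetical illustration of outcome-denominated measurement.
# All numbers are invented; the point is the ratios, not the data.

def decision_outcome_metrics(flags_raised: int,
                             flags_confirmed: int,
                             avg_close_days_before: float,
                             avg_close_days_after: float) -> dict:
    """Measure AI impact by decision quality, not activity volume."""
    return {
        # Volume metric: says nothing about quality on its own.
        "flags_raised": flags_raised,
        # Outcome metrics: how good were the decisions those flags drove?
        "flag_precision": flags_confirmed / flags_raised,
        "case_closure_speedup": avg_close_days_before / avg_close_days_after,
    }

print(decision_outcome_metrics(
    flags_raised=1200, flags_confirmed=420,
    avg_close_days_before=14.0, avg_close_days_after=9.0,
))
```

On this kind of measure, a system raising fewer flags with higher precision can be worth more than one raising many, because more flags may simply be more noise.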
Decision intelligence as the next phase of enterprise AI
What ties all of this together is decision intelligence: an approach that integrates decision modelling, analytics, and AI into a single governed architecture. Rather than treating AI as a standalone capability that generates outputs, decision intelligence makes decisions themselves the unit of design, execution, monitoring, and accountability.
The distinction matters. AI predicts. Decision intelligence governs.
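A minimal sketch can make that distinction tangible. In the hypothetical Python fragment below, the model contributes only a score; the governing layer applies the policy, escalates uncertain cases to an accountable human, and records the result for monitoring. The threshold, the stand-in model, and the field names are all assumptions for the example.

```python
# Hypothetical sketch of the distinction: the model only predicts;
# the governed decision layer decides, escalates, and records.

from typing import Callable

def governed_decision(predict: Callable[[dict], float],
                      case: dict,
                      auto_threshold: float = 0.9) -> dict:
    """Wrap a raw prediction in a governance step.

    `predict`, `case`, and the threshold are illustrative assumptions.
    """
    score = predict(case)  # the AI's contribution ends here
    if score >= auto_threshold:
        outcome = "auto-approve"
    else:
        outcome = "escalate-to-human"  # human accountability in the grey zone
    # Monitoring hook: every decision is recorded, not just the risky ones.
    return {"case_id": case["id"], "score": score, "outcome": outcome}

# Example with a stand-in model:
decision = governed_decision(lambda c: 0.72, {"id": "case-4711"})
print(decision)
# {'case_id': 'case-4711', 'score': 0.72, 'outcome': 'escalate-to-human'}
```

The design choice worth noticing is that the model never owns the outcome: every path, automated or escalated, ends in a recorded, attributable decision.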
In competitive, regulated, and rapidly shifting markets, that governance capability is what determines whether AI investment compounds over time or plateaus. Organisations with a decision-centric architecture can anticipate disruption rather than absorb it, manage risk at scale rather than firefight it, and build systematically on what they learn. Those without one tend to find that AI delivers well in controlled conditions and inconsistently in everything else.
As systems become more autonomous, this becomes less of a strategic preference and more of a baseline requirement. Governance cannot be added retrospectively. It has to be embedded in the decision fabric before deployment, not after it.
The enterprises that get the most from AI will not necessarily be those with the most sophisticated models. They will be the ones who have built the clearest accountability around how decisions get made.
In the end, that is what creates enterprise value, not the algorithms, but the decisions they support.
Jamie Hutton is CTO at Quantexa