
Generative AI’s hype cycle

Many AI projects are failing to deliver business value, despite the hype – but organisations can get ahead by setting more realistic expectations for generative AI adoption

 

Despite the excitement, many enterprise AI initiatives have disappointed. Companies often mis-scope their projects, applying advanced models where simpler analytics would have sufficed, or rolling out systems without the governance needed to handle drift, bias and error. Poor data pipelines, lack of telemetry and inadequate human oversight compound the problem. Above all, inflated expectations have led some business leaders to imagine “digital workers” instead of carefully designed systems for well-defined outcomes.

 

Understanding the landscape

 

A useful distinction is emerging between advisory and executory AI. Advisory systems suggest, summarise and surface information. They keep a human in the loop and carry relatively low risk. Executory systems take direct action: modifying records, sending communications, triggering purchases. Both can use generative AI under the hood, but the governance requirements are fundamentally different.

 

Advisory agents resemble a skilled analyst sitting beside you. Executory agents resemble an employee with access to your bank account. The latter require very careful value and goal alignment to ensure that systems do what we want of them, not simply what we tell them to do. Without that alignment, executory systems may “work to rule”, take dangerous shortcuts or railroad others and violate boundaries for the sake of expediency. The choice between advisory and executory deployment should follow from an organisation’s risk tolerance and the maturity of its oversight systems, not from enthusiasm about what the technology can theoretically do.
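
In code terms, the difference is a gate. The minimal sketch below (the names, and the deny-by-default rule, are illustrative assumptions rather than any specific product) shows advisory output flowing to a human as a suggestion, while an executory action cannot run without a named approver:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    ADVISORY = auto()   # suggests, summarises, surfaces; human stays in the loop
    EXECUTORY = auto()  # acts directly; requires explicit, audited approval

@dataclass
class Action:
    description: str
    mode: Mode

@dataclass
class Approver:
    name: str

    def approves(self, action: Action) -> bool:
        # Stand-in for a real review step (a ticket, a dual sign-off, etc.).
        print(f"{self.name} asked to approve: {action.description}")
        return False  # deny by default: no executory action runs silently

def dispatch(action: Action, approver: Approver) -> str:
    """Route an AI-proposed action according to its risk class."""
    if action.mode is Mode.ADVISORY:
        # Advisory output is only ever a recommendation.
        return f"SUGGESTION for human review: {action.description}"
    # Executory actions never run without a named human approver.
    if approver.approves(action):
        return f"EXECUTED (approved by {approver.name}): {action.description}"
    return f"BLOCKED pending approval: {action.description}"
```

Run against a sample refund request, the advisory path returns a suggestion while the executory path blocks until someone with authority signs off. The point is that the gate, not the model, defines the risk class.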

 

Why scepticism is healthy

 

Scepticism about AI is growing, and that is a sign of a maturing market. Part of this correction is physical. The infrastructure demands of large-scale AI are immense, and the hardware depreciates far faster than traditional data centre assets. Chips that cost millions become obsolete within three to six years, faster than the financing terms that paid for them. Energy requirements are climbing, grid capacity is constrained and the buildout timelines are measured in years, not quarters.

 

These constraints impose a natural discipline. Organisations that planned around unlimited scaling are discovering that compute is finite, expensive and slow to provision. That reality will filter out shallow deployments and reward companies that build for durability rather than spectacle.

 

Buyers are beginning to demand evidence of control and accountability rather than accepting hype. This will slow the spread of premature deployments but encourage more durable, trustworthy systems.

 

What is actually working?

 

Some use cases are already proving resilient. Customer service copilots, limited to well-defined actions and narrow domains, can reduce handling time without major risk. Document intelligence, especially retrieval-augmented systems, is delivering clear returns in compliance, procurement and research. Workflow orchestration for scheduling, triage and data quality checks benefits from bounded autonomy, where every step is auditable.

 

The common thread is constraint. These are not open-ended systems left to improvise. They are carefully scoped, monitored and designed to escalate when they encounter ambiguity. Resilient AI is boring AI. It does a defined job, does it consistently and knows when to ask for help.
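
A minimal sketch of that escalate-on-ambiguity pattern follows; the intent names and the confidence threshold are illustrative assumptions, not recommended values:

```python
# Bounded autonomy: handle only a whitelist of well-defined intents,
# and treat everything else, including ambiguity, as a reason to escalate.
ALLOWED_INTENTS = {"check_order_status", "reset_password", "update_address"}

def handle(intent: str, confidence: float, threshold: float = 0.85) -> str:
    if intent not in ALLOWED_INTENTS:
        return f"ESCALATE: '{intent}' is outside the system's defined scope"
    if confidence < threshold:
        # Ambiguity triggers a request for help, not improvisation.
        return f"ESCALATE: low confidence ({confidence:.2f}) on '{intent}'"
    return f"HANDLE: '{intent}' via its audited workflow"
```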

 

Building the institutional immune system

 

Realistic adoption requires a different stance from leadership. Success begins by defining the outcomes sought and the risks to be managed, not by chasing the latest model. Careful phasing, through pilot, measurement and refinement, works better than ambitious leaps. And everything must be instrumented, with metrics for accuracy, escalation and impact.
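
As a hedged illustration of what such instrumentation might record, the sketch below keeps simple counters for the three metrics named above; the class name and the minutes-saved proxy for impact are assumptions:

```python
from collections import Counter

class AgentTelemetry:
    """Illustrative in-memory counters; a production system would export
    these to its monitoring stack rather than hold them in memory."""

    def __init__(self) -> None:
        self.counts = Counter()

    def record(self, correct: bool, escalated: bool,
               minutes_saved: float = 0.0) -> None:
        self.counts["total"] += 1
        self.counts["correct"] += int(correct)
        self.counts["escalated"] += int(escalated)
        self.counts["minutes_saved"] += minutes_saved

    def report(self) -> dict:
        n = self.counts["total"] or 1  # avoid division by zero
        return {
            "accuracy": self.counts["correct"] / n,
            "escalation_rate": self.counts["escalated"] / n,
            "impact_minutes_saved": self.counts["minutes_saved"],
        }
```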

 

The more mature phase of adoption will look less like “magic AI” and more like carefully managed infrastructure: retrieval-based knowledge layers, explicit policy engines, permissioned action routes and human supervisors with the power to intervene or override.

 

This requires building governance as seriously as building capability. Companies will need governance boards with genuine authority, published model and data cards, maintained incident playbooks and regular red teaming. Architecture should decouple knowledge, planning and execution, so that agents operate under least privilege and human accountability remains explicit at every layer.
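
A minimal sketch of that decoupling, with every class and permission name hypothetical: the knowledge layer only retrieves, the planner only proposes, and the executor acts solely within the grants it holds:

```python
class KnowledgeLayer:
    """Read-only retrieval: it can look things up but never act."""
    def retrieve(self, query: str) -> str:
        return f"[documents relevant to: {query}]"

class Planner:
    """Proposes steps from a goal plus retrieved context; holds no permissions."""
    def plan(self, goal: str, context: str) -> list[str]:
        return [f"draft step towards '{goal}' using {context}"]

class Executor:
    """The only component that acts, and only within explicit grants."""
    def __init__(self, granted: set[str], operator: str):
        self.granted = granted    # least-privilege permission grants
        self.operator = operator  # the accountable human at this layer

    def run(self, step: str, requires: str) -> str:
        if requires not in self.granted:
            return f"REFUSED '{step}': no '{requires}' grant"
        return f"DONE '{step}' (accountable operator: {self.operator})"
```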

 

Organisations that treat governance as an afterthought will find themselves managing crises. Those that build it into the foundation will find it becomes a competitive advantage, enabling faster and more confident deployment precisely because the safeguards are in place.

 

For employees, the change will be less about replacing roles and more about redesigning them. The shift is toward supervision, exception handling and stewardship of outcomes. Humans will increasingly serve as the judgment layer that AI systems lack: understanding context, weighing competing values and making the calls that require wisdom rather than computation.

 

That is the process by which empty, wavering hype matures into sustained value. The dream of AI will be achieved not through bigger models or bolder promises, but through the patient, unglamorous work of building systems and standards that deserve the trust we place in them.

 

Eleanor ‘Nell’ Watson, senior IEEE member, AI ethics engineer and AI faculty, Singularity University