Business Reporter

Redrawing enterprise blueprints: AI agents mean new standards

Oana Beattie at Kyndryl explores the realities and best practices behind managing teams of agents alongside human workers

As AI agents move from pilots into scaled live operations, the real challenge is no longer model experimentation - it is enterprise redesign. Organisations will have to rethink how work is delegated, how decisions are governed, how systems are orchestrated and where accountability sits when autonomous systems begin to act at scale.

 

Kyndryl’s Readiness Report found that 87% of business leaders expect AI to completely transform roles and responsibilities, yet only 29% feel their workforce is ready to use AI effectively, and 62% say they are still in the experimentation phase of AI deployment.

 

This shows that the gap between ambition and readiness is becoming harder to ignore, and that success with agents depends less on raw model capability and more on enterprise alignment: alignment between business intent, decision rights, data access, sovereignty, workflow design, governance and human oversight. Without it, agents will be limited in their ability to scale value, and they will introduce new risks.

 

Let’s be specific about what we mean by agentic AI and operational complexity, and how this shows up in practice. Agentic AI is not simply generative AI with a more complex interface: it is AI that can plan, act, coordinate and adapt across multi-step workflows with limited human intervention. This makes it materially more powerful but also materially harder to govern. Enterprises are using autonomous agents to auto-generate compliance reports, assist DevOps pipelines, optimise supply chains, accelerate forecasting and more.

 

But agentic solutions for small problems tend to use contained data and interact with a limited number of systems. With scale comes real-world complexity: more systems, integration points, operating variance and governance demands. We’ve seen pilots flounder because scaling AI forces enterprises to confront architectural and organisational debt.

 

This is why the enterprise conversation is shifting. The issue is no longer whether AI can generate useful outputs but whether organisations are ready to operationalise systems that can act, trigger downstream consequences and interact across fragmented estates of data, platforms and controls.

 

 

The scaling challenge

Scaling AI agents exposes four forms of enterprise strain. First, data strain, as sensitive and unstructured information becomes more broadly accessible, reusable and vulnerable. Second, integration strain, as every new agent increases dependency on existing platforms, processes and system interfaces. Third, operational strain: as more autonomous components interact, complexity compounds and failure modes multiply. Fourth, governance strain, where oversight models built for static systems struggle to keep pace with dynamic, adaptive behaviour.

 

Successful management comes from simple best practice processes:

  • Design explicit decision boundaries: define what agents can decide, what must be escalated and what remains firmly human led.
  • Engineer orchestration at scale: multi-agent systems require coordinated workflows, shared context and control points to avoid drift, duplication and compliance gaps.
  • Build intervention into operating models by design: supervisory control should not be a reactive, emergency-style response. It should be designed from the start through thresholds, alerts, approvals and kill-switch and roll-back mechanisms.
  • Anchor accountability in named roles and systems of record: if a decision cannot be traced, challenged and defended, it is not production ready.

Such accountability models help avoid gaps in responsibility, ensuring that every agent has a defined, responsible owner behind the automation. Human fallback accountability specifies when humans must take back control or validate agent decisions, based on confidence scores or on ethical or reputational risk.
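The confidence- and risk-based human fallback described above can be sketched in a few lines. Everything here is illustrative: the `AgentDecision` fields, the 0.85 confidence floor and the risk tags are hypothetical assumptions for the sketch, not Kyndryl’s implementation.

```python
from dataclasses import dataclass

# Hypothetical governance thresholds; real values would come from policy.
CONFIDENCE_FLOOR = 0.85
ESCALATION_RISK_TAGS = {"ethical", "reputational"}

@dataclass
class AgentDecision:
    agent_id: str         # traceable identity of the acting agent
    owner: str            # named human role accountable for this agent
    action: str
    confidence: float     # model-reported confidence, 0.0 to 1.0
    risk_tags: frozenset  # e.g. frozenset({"reputational"})

def requires_human(decision: AgentDecision) -> bool:
    """True when a human must validate the decision or take back control."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return True
    return bool(decision.risk_tags & ESCALATION_RISK_TAGS)
```

Under these assumed rules, a reputationally sensitive action escalates even when confidence is high, while a routine, low-risk action proceeds autonomously only if confidence clears the floor.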

 

 

Observability and orchestration are non-negotiable

For enterprises, telemetry, orchestration, real-time monitoring and AIOps are now baseline operational requirements. As agentic AI becomes embedded in workflows, rigorous monitoring is essential.

 

Telemetry must move beyond uptime and response rates. Enterprises need visibility into behaviour, intent alignment, workflow dependencies, exception patterns and policy-as-code adherence. In other words, leaders need to know not only whether an agent is working, but whether it is working as intended, within agreed limits and controls, and within the agreed risk appetite.
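As a concrete illustration of telemetry that goes beyond uptime, an agent event record might carry intent and policy fields alongside the usual latency numbers. The field names here are assumptions for the sketch, not a standard schema.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AgentTelemetryEvent:
    agent_id: str
    workflow: str
    intent: str                # the goal the agent was asked to pursue
    action_taken: str          # what it actually did
    policies_evaluated: list   # policy-as-code rules checked
    policy_violations: list    # rules breached; ideally empty
    escalated_to_human: bool
    latency_ms: float          # the traditional signal, still recorded
    timestamp: float = field(default_factory=time.time)

def working_as_intended(event: AgentTelemetryEvent) -> bool:
    """Operational health alone is not enough: the event must
    also show no policy breaches to sit within risk appetite."""
    return not event.policy_violations
```

The point of the sketch is the shape of the record: an agent can be fast and available yet still outside its agreed limits, and only fields like `policy_violations` and `escalated_to_human` make that visible.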

 

This creates a new leadership balancing act. Too much autonomy without control introduces unmanaged risk. Too much control without autonomy slows value realisation. The goal is not unrestricted freedom or strict lockdown, but bounded autonomy: agents operate at speed within clearly enforced policies and controls, with escalation paths and explainability requirements built in.

 

Strategic partnerships are becoming part of the architecture. No single organisation can independently solve for orchestration, integration, governance, platform interoperability and operating model redesign at the pace that the market now demands. The most effective partnerships need to be co-engineered around shared accountability for outcomes, resilience and speed to value.

 

Crucially, organisations must create time for alignment. That means early collaboration in design cycles, governance forums and cross-functional roadmapping. Internal and external stakeholders must be engaged from the outset: architects to validate the stack, engineers to scale it, and risk leaders to ensure compliance. Co-ordinated input reduces the chance that agentic initiatives stall, regardless of how advanced the technology may be.

 

 

Pull together to put it together

The promise of AI agents will not be realised through experimentation alone but by organisations willing to redesign the systems around them, including the governance, architecture, operating model and leadership disciplines that turn autonomy into enterprise value.

 

Pacesetter organisations recognise that to realise value at scale, they need to start building enterprises that are structurally ready to run AI-native operations.

 


 

Oana Beattie is VP Data & AI UK & Ireland at Kyndryl

 

Main image courtesy of iStockPhoto.com and MF3d


© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543