
When AI acts, who is accountable? Agentic systems and the new layers of risk

Sponsored by AtData

The problem with shared authority is knowing who to trust


When AI acts, who’s held accountable?


It’s easy to treat this as a philosophical question, but it’s a practical one. Increasingly, online interactions are not performed directly by a person in a single moment. A transaction might be initiated by one system, completed on your behalf and validated by another system entirely. Everything works as expected. But if you pause and ask who actually acted, and under what authority, the answer is less straightforward than it once was.


That’s because the action may belong to a system, while the intent belongs to a user, and the authority connecting the two may have been granted, delegated, inherited or potentially compromised somewhere along the way. Those layers don’t always stay aligned, and when they drift, the interaction itself doesn’t necessarily reflect it.


For a long time, this wasn’t a problem most systems were designed to solve. Digital interactions were still closely tied to individuals, and even when automation was involved, it functioned more as a tool than as an independent actor. If something looked legitimate, it was usually safe to assume it reflected a real person acting in real time. That assumption held because the structure of interaction and the structure of identity were still closely connected.


But that connection is coming untethered. Not in immediately visible ways, but in ways that matter. Actions are being initiated, carried out and even optimised by systems operating with varying levels of autonomy. As a result, what we’re evaluating isn’t just the interaction itself but the chain of relationships and authority behind it.


And once that becomes unclear, so does what we’re actually being asked to trust.


From interaction to authority


Once agentic behaviour becomes common, your model is no longer just distinguishing good users from bad users. It’s distinguishing humans, human-directed agents, benign autonomous software, compromised agents and adversarial automation, all while many of those categories may produce equally polished outputs.

A traditional fraud model might ask, “Does this application look legitimate?”

A better question in the agentic era is, “What chain of authority produced this interaction, and how much continuity can we verify across that chain?”

The value isn’t merely detecting suspicious transactions. It’s separating real continuity from synthetic coherence.
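
To make that question concrete, here is a minimal sketch in Python of what evaluating a chain of authority might look like. The names (Grant, verify_chain) and the data are illustrative assumptions, not a real product API: each link records who delegated what to whom and for how long, and the check fails the moment continuity, scope or validity breaks anywhere along the chain.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Grant:
    grantor: str       # who delegated the authority
    grantee: str       # who received it
    scope: set[str]    # actions the grantee may perform
    expires: datetime  # when the delegation lapses

def verify_chain(chain: list[Grant], root: str, actor: str,
                 action: str, at: datetime) -> bool:
    """True only if an unbroken, in-scope, unexpired chain of delegations
    connects the root authority to the agent performing the action."""
    if not chain or chain[0].grantor != root or chain[-1].grantee != actor:
        return False
    for prev, nxt in zip(chain, chain[1:]):
        # Each link must be granted by the previous link's grantee.
        if nxt.grantor != prev.grantee:
            return False
    # The action must sit inside every grant's scope, and none may have lapsed.
    return all(action in g.scope and at <= g.expires for g in chain)

# A user delegates "pay" to an assistant, which delegates on to a sub-agent.
now = datetime.now(timezone.utc)
later = now.replace(year=now.year + 1)
chain = [
    Grant("alice", "assistant", {"pay", "browse"}, later),
    Grant("assistant", "checkout-agent", {"pay"}, later),
]
print(verify_chain(chain, "alice", "checkout-agent", "pay", now))     # True
print(verify_chain(chain, "alice", "checkout-agent", "browse", now))  # False: scope was narrowed away
```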


Because this is the subtle difference agentic systems expose. Human behaviour is often inconsistent in small, forgivable ways. It pauses, forgets, comes back. It changes devices, but within a pattern. It makes sense on a longer timeline, even when individual moments look messy. Synthetic or agent-driven behaviour can look cleaner than that. But clean isn’t the same as continuous, and coherent isn’t the same as trustworthy.


You can feel these changes reflected in how identity itself is starting to be framed. NIST’s work on agent identity and authorisation points to a growing need to understand not just who or what an agent is, but what authority it holds and how that authority behaves over time. In this context, identity moves away from point-in-time verification and towards something more durable. It’s less about whether an interaction looks valid in a single moment, and more about whether the identity behind it holds together across time, channels and interactions, behaving like a real person rather than simply presenting a plausible event.
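
One way to picture authority that behaves over time is to audit how it is used against what was originally granted, rather than approving each call in isolation. A toy illustration, with assumed action names and a hypothetical authority_drift helper:

```python
def authority_drift(granted_scope: set[str], usage_log: list[str]) -> float:
    """Fraction of logged actions that fall outside the granted scope.
    0.0 means the authority behaved exactly as delegated over the window."""
    if not usage_log:
        return 0.0
    outside = sum(1 for action in usage_log if action not in granted_scope)
    return outside / len(usage_log)

# Each call on its own might pass a point-in-time check; the log tells you
# whether the authority, taken as a whole, stayed inside its grant.
log = ["pay", "pay", "browse", "export_contacts", "pay"]
print(authority_drift({"pay", "browse"}, log))  # 0.2: one action in five was out of scope
```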


Continuity is the new measure of trust


This shift is having practical consequences most fraud systems aren’t yet designed to handle. If an agent completes an action on your behalf, the system has to decide whether to trust the action, the agent, you, or the relationship between them. And those aren’t always the same thing. Authority can be granted, delegated, inherited or compromised. Once the chain breaks, the interaction itself may still look perfectly valid.


Point-in-time verification starts to lose its usefulness. A credential can be correct, a device can be recognised and a session can pass every expected check. Yet none of it confirms that the action aligns with the historical behaviour of the identity behind it. It tells you access is valid in the moment, but not whether the behaviour holds together over time.


And this is where agentic systems introduce new forms of risk. Not necessarily by looking obviously fraudulent, but by operating within what looks acceptable while drifting away from what’s actually consistent. It’s a harder problem to detect because it doesn’t present through obvious anomalies in a single snapshot. Instead, it shows up as divergence over time.
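
As a sketch of what divergence over time might look like in practice: compare an identity’s recent behaviour against its own long-run baseline. In the hypothetical example below, every individual event would pass a snapshot check, yet the overall distribution has drifted; the divergence function is a simple symmetrised, smoothed KL-style score chosen purely for illustration.

```python
import math
from collections import Counter

def divergence(baseline: list[str], recent: list[str]) -> float:
    """Symmetrised KL-style distance between two event-type distributions;
    higher means recent behaviour has drifted from the baseline."""
    kinds = set(baseline) | set(recent)
    p, q = Counter(baseline), Counter(recent)
    eps = 1e-9  # smoothing so unseen event types don't divide by zero
    score = 0.0
    for k in kinds:
        pk = p[k] / len(baseline) + eps
        qk = q[k] / len(recent) + eps
        score += pk * math.log(pk / qk) + qk * math.log(qk / pk)
    return score

baseline = ["login", "browse", "pay", "browse", "login", "browse"]
recent = ["pay", "pay", "pay", "export", "pay"]  # every event looks valid...
print(round(divergence(baseline, recent), 1))    # ...but the pattern has drifted sharply
```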


Which is exactly why identity has to be understood as something observed, not just asserted. The question isn’t whether an identity can present the right signals once. It’s whether it can sustain a believable pattern across interactions. Whether it behaves like a real person not just once, but repeatedly. Whether its history aligns with its present.


That’s where continuity turns into something measurable.


Not through a single identifier, but through layers of signals that reflect how an identity exists and behaves over time. Whether an email address has a history of real engagement or was created minutes ago. Whether it shows signs of ongoing activity across a network or sits idle until the moment it’s needed. Whether the identity behind it can be reached, recognised and observed consistently across interactions.


Signals such as email age, activity recency, engagement patterns and risk scoring start to form a timeline rather than a snapshot. They make it possible to evaluate whether an identity holds together beyond a single event, and whether it reflects a real person with continuity, or something constructed to pass checks in isolation.
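
As a rough sketch of how such signals could roll up into a single continuity view (the field names, thresholds and weights below are illustrative assumptions, not AtData’s actual scoring model):

```python
from dataclasses import dataclass

@dataclass
class EmailSignals:
    age_days: int           # how long the address has been observed
    days_since_active: int  # recency of activity seen across the network
    engagement_events: int  # opens, clicks and logins observed over time
    risk_score: float       # 0.0 (clean) .. 1.0 (high risk)

def continuity_score(s: EmailSignals) -> float:
    """0..1: does this identity hold together over time, or was it
    constructed to pass checks in isolation?"""
    age = min(s.age_days / 365.0, 1.0)                    # established history
    recency = max(0.0, 1.0 - s.days_since_active / 90.0)  # recently alive
    engagement = min(s.engagement_events / 50.0, 1.0)     # signs of real use
    return max(0.0, 0.4 * age + 0.2 * recency + 0.4 * engagement - s.risk_score)

established = EmailSignals(age_days=2100, days_since_active=3,
                           engagement_events=120, risk_score=0.05)
minted = EmailSignals(age_days=1, days_since_active=0,
                      engagement_events=0, risk_score=0.1)
print(round(continuity_score(established), 2))  # ~0.94: a timeline, not a snapshot
print(round(continuity_score(minted), 2))       # ~0.10: plausible in the moment, no history
```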


When “looks real” stops being enough


That’s the difference between access and authenticity.


And it’s also where accountability becomes measurable again. Not by tracing a single event back to a single actor, but by evaluating whether the chain connecting them actually holds together. Whether the authority behind an action behaves in a way that’s consistent, durable and attributable over time.


Because in an environment where AI can act, the risk isn’t just that something fake gets through; it’s that something synthetic becomes indistinguishable from something real. Unless you know where to look.


And where you look isn’t the transaction. It’s the continuity behind it.


AtData helps you measure identity continuity using real, observed signals. Learn how we help verify the identity behind every interaction, not just the moment.

