
Governing AI Agents at Scale

It’s not Moltbook we should fear, argues Philip Miller at Progress Software. It’s ungoverned AI agents.

Moltbook is getting attention because it feels like science fiction made real. It’s a social network where AI agents post, debate and exchange tactics while humans watch from the sidelines.


But it’s worth paying attention to. Not because of what Moltbook is today, but because of what it represents for tomorrow: machine-to-machine interactions happening at societal scale. And that changes the rules.


Over the past decade, we learned that human social networks do not self-govern. They need identity systems, moderation, provenance and clear accountability. Agent networks raise the stakes because agents do not just communicate: they optimise, coordinate and act. Combine a social layer like Moltbook with an automation-capable ecosystem such as OpenClaw, and you get something resembling a digital organism. Feedback loops accelerate, iterations compound, and emergent behaviour becomes inevitable.


According to Gartner, 40% of enterprise applications will feature task-specific AI agents by the end of 2026, up from less than 5% in 2025. Yet Deloitte found that only one in five companies has a mature governance model in place for autonomous AI.


Identity and Provenance

If an agent can publish persuasive content, recruit other agents or influence decisions, we need to know what it is, who deployed it and what it has been authorised to do.


A robust agent ecosystem requires verifiable identity for both agents and operators. Cryptographic signing maintains authenticity, and provenance tracking reveals the data sources, models and tools that shaped a particular outcome. When agent outputs carry structured context about their origins and reasoning paths, organisations can verify claims rather than accept them on faith.
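As a minimal sketch of what that could look like, the Python below bundles an agent’s output with a provenance record and a signature. The field names (agent_id, operator, model, sources) are illustrative assumptions, not a published schema, and the shared HMAC key is a simplification: a production system would more likely use asymmetric signatures (for example, Ed25519) so that verifiers never hold the signing key.

# Illustrative sketch: signing an agent's output together with its
# provenance metadata so downstream consumers can verify both.
# Field names are hypothetical, not an established standard.
import hashlib
import hmac
import json

OPERATOR_KEY = b"replace-with-a-managed-secret"  # e.g. held in a KMS/HSM

def sign_output(content: str, provenance: dict) -> dict:
    """Bundle agent output with provenance and an HMAC signature."""
    record = {
        "content": content,
        "provenance": provenance,  # data sources, model, tools used
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_output(record: dict) -> bool:
    """Recompute the signature over everything except the signature itself."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

post = sign_output(
    "Quarterly risk summary...",
    {
        "agent_id": "agent-7f3a",
        "operator": "acme-corp",
        "model": "example-model-v2",
        "sources": ["crm://accounts", "doc://policy-handbook"],
    },
)
assert verify_output(post)  # tampering with content or provenance fails this check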


Control by Design

Agents should not have authority without constraint; they need clearly defined permissions, scoped access and strict isolation boundaries. This matters even more given emerging threats like prompt injection and indirect instruction attacks, where an agent encounters malicious instructions embedded in external content and follows them unknowingly. If that agent has broad authority, the consequences range from data leaks to operational disruption.


Designing for constraint does not limit capability—it keeps autonomy operating within safe and predictable limits. When an agent needs to decide whether it has permission to act, the answer should come from a governed policy layer, not from the agent itself.
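A minimal sketch of that separation, with hypothetical agent, action and resource names: the agent never evaluates its own permissions; it asks a default-deny policy layer, so a prompt-injected instruction fails closed instead of inheriting the agent’s broader authority.

# Illustrative sketch of a governed policy layer: the agent never
# decides its own permissions; it asks this layer before acting.
# Scope, action and resource names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_actions: set = field(default_factory=set)    # scoped permissions
    allowed_resources: set = field(default_factory=set)  # isolation boundary

class PolicyLayer:
    def __init__(self):
        self._policies: dict[str, AgentPolicy] = {}

    def register(self, policy: AgentPolicy) -> None:
        self._policies[policy.agent_id] = policy

    def authorise(self, agent_id: str, action: str, resource: str) -> bool:
        """Default-deny: unknown agents, actions or resources are refused."""
        policy = self._policies.get(agent_id)
        if policy is None:
            return False
        return action in policy.allowed_actions and resource in policy.allowed_resources

policies = PolicyLayer()
policies.register(AgentPolicy(
    agent_id="agent-7f3a",
    allowed_actions={"read", "summarise"},
    allowed_resources={"crm://accounts"},
))

# An injected instruction to exfiltrate data fails closed:
assert not policies.authorise("agent-7f3a", "export", "crm://accounts")
assert policies.authorise("agent-7f3a", "read", "crm://accounts")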


Auditability and Accountability

Trust scales through repeatable controls: data lineage, policy enforcement and the ability to reproduce decisions when needed. Agent systems should operate with the equivalent of flight recorders: organisations need to understand what an agent observed, how it interpreted that information, the tools it used and what action it took.
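As a sketch of what a single flight-recorder entry might capture, with illustrative field names rather than any established schema, each field answers one accountability question: what the agent observed, how it interpreted it, which tool it used and what it did.

# Illustrative 'flight recorder' entry for one agent decision.
# Field names are hypothetical; real deployments would write to
# tamper-evident, append-only storage rather than a local file.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AgentDecisionRecord:
    agent_id: str
    observed: str        # input/context the agent saw
    interpretation: str  # the agent's stated reasoning summary
    tool_used: str       # tool or API invoked
    action_taken: str    # the resulting action
    timestamp: str

def record_decision(agent_id, observed, interpretation, tool_used, action_taken):
    entry = AgentDecisionRecord(
        agent_id=agent_id,
        observed=observed,
        interpretation=interpretation,
        tool_used=tool_used,
        action_taken=action_taken,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps(asdict(entry)) + "\n")
    return entry

record_decision(
    agent_id="agent-7f3a",
    observed="Customer email requesting refund",
    interpretation="Request matches refund policy section 4.2",
    tool_used="payments.refund",
    action_taken="Issued refund of 49.99 GBP",
)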


Early reports around Moltbook underscore why this matters. Claims of rapid growth sit alongside questions about whether content is truly agent-generated and whether the platform’s security posture keeps pace with its scale. Incidents involving exposed data and misconfigurations reflect a ‘move-fast’ approach that carries far greater risk when software is mediating trust at scale.


If an agent system spreads false information or makes an operational mistake, who is responsible? The platform? The organisation that deployed it? The operator? The EU AI Act’s high-risk system rules become fully applicable in August 2026. So far, over 72 countries have launched more than 1,000 AI policy initiatives. Regulators are not waiting, and neither should enterprises.


Governance Belongs in the Foundation

Identity, permissions, auditability and accountability have become architectural requirements, not features to be bolted on later. The enterprises that get this right will build on a foundation that combines rich contextual data with semantic enrichment, policy enforcement and workflow orchestration—not as separate tools, but as an integrated layer governing how agents observe, decide and act.


If these foundations are established early, agentic systems like Moltbook can evolve into trusted collaborators that complement human capability, rather than systems that operate beyond human oversight.


We are not in an AI capability race anymore. We are in a trust race. And the organisations that win it will not be the ones building the most powerful agents, but the ones building agents that others are willing to depend on.


Philip Miller, AI Strategist, Progress Software


