
Why the AI opportunity is built on a currency of trust

One year on from the UK’s AI Opportunities Action Plan, Alex Laurie at Ping Identity explains why identity security is critical to its success


The 2025 AI Opportunities Action Plan was a deliberate move by the UK government to advance the country’s position as a global leader in AI innovation. One year on, that ambition is beginning to materialise.

 

The plan focuses on three critical enablers: sovereign compute, access to public sector data and a pro-innovation regulatory environment. We have already seen measures put in place that strengthen the UK’s innovation backbone through the expansion of the Sovereign AI Research Resource. The development of the National Data Library has broken down silos across public sector data, and the appointment of the UK’s first Chief AI Officer has added much-needed oversight and leadership.

 

As the UK actively lays the foundations of its AI economy, progress is evident. A year ago, infrastructure posed a significant challenge to scaling the plan; it is no longer the main constraint. Instead, infrastructure without identity has become an open door. While the government has spent 12 months building the UK’s AI future, threat actors have spent that same time perfecting the tools needed to exploit it.

 

As the national plan advances, a clear “Identity Gap” is widening. Without a robust identity framework, AI adoption is creating unmanaged risks that erode digital trust, and the time to address this challenge is now.

 

 

Securing every step 

AI is increasingly being deployed within organisations to drive productivity, becoming a permanent operating layer that boosts efficiency and streamlines operations. 

 

But with these benefits come new, enterprise-wide security threats. Deepfakes and automated social engineering have ended the age of visual trust. Video calls can be convincingly faked, voices can be cloned, and the traditional methods once relied upon to verify a person no longer suffice. As such, conventional perimeter security is obsolete.

 

The AI era is forcing organisations to shift from point-in-time authentication to continuous identity assurance. By analysing AI-driven behavioural signals such as device telemetry, interaction velocity and navigation habits, organisations can verify identity in real time throughout a session, not only at login.

 

The commercial benefit is twofold: friction is reduced for legitimate users, improving their experience, while organisations gain the ability to detect account takeovers that traditional multi-factor authentication (MFA) would miss. 
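To make this concrete, the sketch below shows how a continuous-assurance layer might fold behavioural signals into a running risk score and decide when to step up authentication. The signal names, weights and thresholds are illustrative assumptions, not any vendor’s production logic.

```python
# Minimal illustrative sketch of continuous identity assurance.
# All signal names, weights and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    device_known: bool         # device telemetry: has this device been seen before?
    velocity_anomaly: float    # deviation from the user's usual interaction velocity (0..1)
    navigation_anomaly: float  # deviation from the user's usual navigation habits (0..1)

def risk_score(s: SessionSignals) -> float:
    """Combine behavioural signals into a single 0..1 risk score."""
    score = 0.0
    if not s.device_known:
        score += 0.4
    score += 0.3 * s.velocity_anomaly
    score += 0.3 * s.navigation_anomaly
    return min(score, 1.0)

def assurance_action(s: SessionSignals) -> str:
    """Re-evaluated continuously during the session, not just at login."""
    r = risk_score(s)
    if r < 0.3:
        return "allow"              # low friction for legitimate users
    if r < 0.7:
        return "step_up_mfa"        # challenge only when signals drift
    return "terminate_session"      # likely account takeover

# Example: a familiar device, but behaviour drifts mid-session
print(assurance_action(SessionSignals(True, 0.9, 0.8)))  # -> step_up_mfa
```

In practice the weights would be learned from historical sessions rather than hand-set; the point is that the decision is re-evaluated throughout the session, which is how takeovers invisible to one-time MFA get caught.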

 

 

Governing an agentic workforce  

The challenge goes beyond verifying humans. Since the launch of the Action Plan, the rise of autonomous AI agents has transformed the identity landscape. Organisations must now manage people and their digital proxies. 

 

Acting as a “synthetic workforce”, these non-human identities query databases, execute transactions and operate with the same privileges as human employees. For many UK businesses, this creates a machine-identity blind spot: over-privileged AI agents can exfiltrate data at machine speed, far faster than even the most responsive human security operations centre (SOC) can respond.

 

As such, every AI agent must be treated as a fully managed identity. This requires "Know Your Agent" (KYA) protocols: verifiable cryptographic identities, strictly enforced least-privilege access and a centralised "Identity Kill-Switch" capable of immediately revoking access when anomalous behaviour is detected. 
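As a rough illustration of what those KYA controls imply in practice, the sketch below registers an agent with a credential and least-privilege scopes, checks every action against them, and revokes access instantly via a kill-switch. The registry, agent name and scope labels are hypothetical, not a specific vendor’s API.

```python
# Minimal illustrative sketch of "Know Your Agent" (KYA) controls.
# Registry design, agent name and scope labels are hypothetical.
import secrets

class AgentRegistry:
    def __init__(self):
        self._agents = {}  # agent_id -> {"token": str, "scopes": set, "active": bool}

    def register(self, agent_id: str, scopes: set) -> str:
        """Issue a credential bound to least-privilege scopes."""
        token = secrets.token_hex(16)
        self._agents[agent_id] = {"token": token, "scopes": scopes, "active": True}
        return token

    def authorise(self, agent_id: str, token: str, scope: str) -> bool:
        """Every action is checked against the agent's identity and scopes."""
        a = self._agents.get(agent_id)
        return bool(a and a["active"] and a["token"] == token and scope in a["scopes"])

    def kill_switch(self, agent_id: str) -> None:
        """Centralised, immediate revocation on anomalous behaviour."""
        if agent_id in self._agents:
            self._agents[agent_id]["active"] = False

registry = AgentRegistry()
tok = registry.register("invoice-bot", {"read:invoices"})
assert registry.authorise("invoice-bot", tok, "read:invoices")        # permitted
assert not registry.authorise("invoice-bot", tok, "export:database")  # least privilege holds
registry.kill_switch("invoice-bot")                                   # anomaly detected
assert not registry.authorise("invoice-bot", tok, "read:invoices")    # revoked at once
```

The essential property is that revocation is centralised and immediate: once the kill-switch fires, every subsequent action by the agent fails its identity check, regardless of where the agent is running.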

 

 

New year, new identity priorities 

The need for continuous identity assurance and the challenges around synthetic identities are only set to grow over the next 12 months, and the UK must anticipate them rather than react to them. There is no better time than now for the government to take a proactive stance. To do so, however, it must commit to three fundamental shifts as part of the AI Opportunities Action Plan.

 

First, a consistent, national approach to registering and tracking AI agents must be established. Without a clear view of ownership and responsibility for autonomous processes, accountability can quickly break down.  

 

Second, security models must evolve beyond static, one-time authentication. In an AI-driven environment, identity assurance needs to occur in real time, continuously validating both human and non-human actors using low-friction signals that don’t disrupt productivity.

 

Finally, the only effective way to stop a rogue AI agent or a compromised account is the immediate, automated removal of its digital credentials. Identity is the only remaining off switch in a decentralised AI ecosystem, and its governance must be made a priority.

 

 

Securing trust to unlock AI’s potential

The promise of AI is immense, and while it is encouraging to see the UK take its role in AI innovation and development seriously, the value of this technology will hinge on one critical factor: trust. As the UK’s AI Opportunities Action Plan continues to lay the foundations for global competitiveness, the real differentiator will be organisations embedding resilient, secure identity frameworks into the heart of their AI strategies.

 

Long-term success will not be awarded to the fastest innovator, but to the most trustworthy. It is now time for the UK to act on the recognition that securing identity is not just a technical requirement, but the foundation of competitive advantage in the decades to come.

 


 

Alex Laurie is GTM CTO at Ping Identity

 

Main image courtesy of iStockPhoto.com and imaginima
