Murali Sastry at Skillsoft explores the security and governance imperatives of agentic AI adoption
Agentic AI is transforming how organisations operate, offering hyper-personalised experiences, real-time decision-making, and operational optimisation. Unlike traditional AI systems that execute predefined tasks, agentic AI can autonomously plan, adapt, and act on goals in dynamic environments.
However, as with any emerging technology, agentic AI introduces a new set of risks, particularly around security, governance and organisational control. While conversations about agentic AI have focused on productivity and innovation, a more urgent task lies ahead: ensuring these systems remain secure, compliant and aligned with organisational values.
So, what can organisations do to implement agentic AI securely, and how can they prepare their workforce to work alongside it?
A shift in autonomy
The reality of agentic AI is that it doesn’t just follow rules, it interprets, adapts and acts. This autonomy allows organisations to unleash powerful new capabilities, but it also raises critical questions around who controls what the AI can access, how sensitive data is protected and what governance frameworks are in place to prevent misuse.
Agentic AI systems often require greater access to enterprise data to function effectively, but with that comes the risk of data leakage, misuse or exposure. Without robust data governance, these agents may inadvertently access personal information, intellectual property or regulated data and use it in ways that go against internal policies or external regulations. The potential for harm, whether reputational, financial or legal, makes it vital that organisations establish clear boundaries and controls.
Taking accountability and control
The autonomy of agentic AI also complicates traditional models of accountability. When an AI agent acts independently, such as reallocating resources, generating a customer response or initiating a workflow, it can be difficult to determine who is responsible for the outcome. This blurred line between human oversight and machine agency creates uncertainty and risk.
To address this, organisations must implement clear governance frameworks that define roles and responsibilities for the use of agentic AI. Ethical guardrails must also be embedded into agent behaviour to ensure alignment with organisational values and regulatory requirements.
The human factor
While technical controls and governance frameworks are essential, they are only part of the puzzle. The most overlooked vulnerability in agentic AI adoption is a workforce unprepared to manage these capabilities.
The biggest risk in cyber-security often comes from individuals inside the organisation, and agentic AI is unlikely to be any different. With access to agentic tools that have company-wide reach, team members can inadvertently create exposure at an individual, group or company level. Successful integration of agentic AI therefore hinges not just on technical deployment but on human knowledge and readiness.
To mitigate this, organisations must invest in both technical fluency and power skills. Technical fluency includes prompt engineering, data governance and compliance awareness, skills that ensure employees understand how to use agentic tools safely and effectively.
Alongside this, power skills such as critical thinking, ethical reasoning and digital discernment can help employees evaluate AI outputs, question assumptions and make informed decisions.
These skills are not just complementary to technical controls, they are essential to ensuring that agentic AI is used responsibly, ethically and in alignment with organisational values.
Creating a culture of continuous learning
Organisations must build a culture of continuous learning and development for both leadership teams and employees to successfully deploy agentic AI. This involves prioritising reskilling and upskilling to meet evolving demands, while also committing to the structured development of human-centric capabilities, like communication, adaptability and ethical judgement.
One of the most effective ways to build these capabilities is through realistic, scenario-based workforce learning. These experiences allow leaders and teams to practise critical power skills in a safe and supportive environment, helping them build confidence and apply insights in real-world contexts.
Upskilling is no longer optional: it is the first line of defence. Learning and development (L&D) strategies must evolve to prepare employees for the AI agents they will oversee, ensuring they can orchestrate those agents responsibly, in alignment with company values and regulatory requirements. Agentic AI puts powerful capabilities in the hands of team members, and business executives are eager to capitalise on these opportunities quickly, underlining the importance of proceeding with both speed and caution.
Organisations that prioritise upskilling as part of their adoption will be better positioned to mitigate risks, maintain compliance and unlock the full potential of agentic AI as a secure, strategic asset.
Working with the right frameworks in place
Organisations are eager to harness the full potential of agentic AI, but they must balance the pace of implementation with compliance. Rushing into deployment without a clear governance framework and a prepared workforce can lead to costly mistakes: data breaches, compliance violations and reputational damage.
The organisations that succeed will be those that move quickly, but thoughtfully, building the infrastructure, policies and human readiness needed to support safe and scalable agentic AI adoption.
By embedding governance into the foundation of agentic AI deployments, organisations not only avoid risk but also lead with integrity and vision. The future belongs to enterprises that combine speed, responsibility and continuous learning in their AI journey.
Murali Sastry is SVP and Head of Technology at Skillsoft
Main image courtesy of iStockPhoto and Sansert Sangsakawrat
© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543