Business Reporter

AI power, human responsibility


Dr Megha Kumar at CyXcel explains why AI governance matters

 

As emerging technologies continue to reshape the business landscape, they bring with them unparalleled opportunity but also a growing spectrum of digital risk. Chief among these is Artificial Intelligence (AI). AI is a transformative force that is redefining everything from workplace automation to strategic decision-making. Yet, as organisations harness AI to drive efficiency and innovation, the same technology is being weaponised by threat actors to launch more advanced cyber-threats.

 

In addition to the malicious use of AI to target specific organisations, the supply chain must also be taken into consideration. Cyber-attacks no longer respect organisational boundaries. They creep in through third parties, fourth parties and sometimes even seemingly harmless service providers. With industries now interconnected on a global scale, this is especially dangerous: an AI-enhanced attack on even a minor supplier can expose critical data and disrupt operational continuity.

 

In the race to stay competitive, business leaders must now navigate a delicate balance, leveraging AI’s potential while at the same time reinforcing their defences against its malicious use by developing the right governance practices.

 

 

Organisational readiness is lagging behind

The National Cyber Security Centre (NCSC) issued a stark warning earlier this year: the speed at which cyber-criminals and nation-state adversaries are adopting AI tools and techniques is outpacing the defensive capabilities of most security teams. The imbalance is growing, and with it, a new form of digital divide is emerging, separating those who can harness AI’s full potential while securing their ecosystems from those who will remain exposed to its dangers.

 

Given the pace of AI integration and the evolution of the AI threat and regulatory landscapes, organisations need to ensure they are risk-planning effectively. However, early indicators show that many businesses are lagging behind.

 

Our research reveals that although around a third (30%) of UK firms name AI as among their top three risks, a similar share (31%) has no governance policy in place. A further 29% say they have only just implemented their first AI risk strategy. Nearly a fifth of US and UK firms we polled admit they’re still not prepared for data poisoning attacks, where an adversary compromises a training dataset used by an AI model, or for a deepfake incident.
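To make the data poisoning threat mentioned above concrete, here is a minimal, self-contained sketch (entirely hypothetical data and model, not from the article or CyXcel's research) of how flipping a handful of training labels can degrade a model. A trivial classifier learns a decision threshold from the mean of each class; an adversary with write access to the training set flips two labels, shifting that threshold and causing previously correct predictions to fail:

```python
# Hypothetical illustration of a data poisoning (label-flipping) attack.
# A toy one-feature classifier places its decision boundary at the midpoint
# of the two class means; poisoning the training labels shifts that boundary.

def fit_threshold(data):
    """Learn a decision threshold: midpoint of the per-class feature means."""
    class0 = [x for x, label in data if label == 0]
    class1 = [x for x, label in data if label == 1]
    return (sum(class0) / len(class0) + sum(class1) / len(class1)) / 2

def accuracy(threshold, test):
    """Fraction of test points classified correctly (predict 1 if x > threshold)."""
    return sum((1 if x > threshold else 0) == label for x, label in test) / len(test)

# Clean training data: class 0 clusters near 1.0, class 1 near 5.0.
clean = [(0.8, 0), (1.0, 0), (1.2, 0), (4.8, 1), (5.0, 1), (5.2, 1)]
test = [(2.0, 0), (2.5, 0), (4.0, 1), (4.5, 1)]

# Poisoned copy: the adversary flips the labels of the first two samples.
poisoned = [(x, 1 - y) if i < 2 else (x, y) for i, (x, y) in enumerate(clean)]

clean_acc = accuracy(fit_threshold(clean), test)        # boundary at 3.0
poisoned_acc = accuracy(fit_threshold(poisoned), test)  # boundary shifts to ~2.28
```

With the clean data the learned boundary sits at 3.0 and every test point is classified correctly; the two flipped labels drag the boundary to roughly 2.28, and a legitimate class-0 point at 2.5 is now misclassified. Real attacks on large models are subtler, but the mechanism is the same: corrupt the training data, corrupt the behaviour.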

 

This disconnect between the perceived importance of AI and the actual investment made to manage its risks suggests that many organisations still view AI as a siloed function, confined to innovation teams or IT departments, rather than as a core operational risk. This needs to change, and fast.

 

 

Laying the foundation with governance

To do this, business leaders need a structured and scalable approach to managing AI. At the heart of this should be AI governance. This is not just a box-ticking compliance measure; it should form a living, strategic framework.

 

Effective AI governance must be enterprise-wide in scope. Too often, governance frameworks focus on niche applications – for example, AI ethics guidelines for HR chatbots. Instead, governance should span every function and business unit.

 

A forward-looking lens should also be adopted for effective AI governance, anticipating emerging risks such as synthetic media, autonomous agents and AI-driven market manipulation.

 

Crucially, AI governance must address two risk categories: the risk posed to AI systems, such as model corruption and infrastructure compromise, and the risk posed by AI, such as automated phishing attacks. These risks are not mutually exclusive; rather, they are tightly coupled within today’s interconnected threat landscape.
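One practical way to keep both categories visible in the same governance process is a shared risk register that tags each entry as a risk to AI systems or a risk posed by AI. The sketch below is a hypothetical schema (the field names and example entries are illustrative, not a CyXcel framework):

```python
# Hypothetical AI risk register distinguishing the two categories the
# governance framework must cover: risks TO AI systems and risks POSED BY AI.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    TO_AI = "risk to AI systems"     # e.g. model corruption, infrastructure compromise
    FROM_AI = "risk posed by AI"     # e.g. automated phishing, deepfakes

@dataclass
class Risk:
    name: str
    category: Category
    owner: str  # the accountable business function, not just IT

register = [
    Risk("training-data poisoning", Category.TO_AI, owner="security"),
    Risk("model corruption", Category.TO_AI, owner="ml-platform"),
    Risk("AI-automated phishing", Category.FROM_AI, owner="security-awareness"),
    Risk("deepfake impersonation", Category.FROM_AI, owner="communications"),
]

# Group by category so board reporting covers both sides of the coin.
by_category = {}
for risk in register:
    by_category.setdefault(risk.category, []).append(risk.name)
```

Keeping both categories in one structure, each with a named business owner, reflects the point above: the two risk types are tightly coupled and should be reviewed together rather than split across separate teams.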

 

Moreover, AI and cyber-risks cannot be siloed from broader business risk frameworks. Cyber-security, once seen as the domain of IT, is now firmly a board-level issue. The same is true for AI. As the technology becomes embedded into core systems, supply chains, CRM platforms, decision engines and data analytics, its failure or exploitation becomes a strategic threat to the entire enterprise.

 

 

The human factor

Organisations also need education and awareness. Risk managers, C-suite executives, security teams and all employees must be trained to recognise AI risks. Board and executive-level training should provide insight into the strategic and legal significance of AI risk, as well as decision-making frameworks for weighing outcomes such as reputational damage from deepfakes or breaches.

 

Technical teams will need specialised training in secure model deployment and the overlap between cyber-security and AI, while employees need insight into how to recognise AI-powered phishing, deepfake videos and related social engineering threats.

 

 

The future isn’t fearing AI; it’s securing it

The AI revolution has the potential to unlock immense value. Despite the risks, AI remains one of the most powerful enablers of innovation in modern history. From automating tedious workflows to discovering breakthrough insights from data, the productivity benefits are real and significant. Yet, without the appropriate controls, these same tools can be turned against the very institutions that adopted them in good faith.

 

However, the future isn’t about fearing AI; it’s about securing it. Success lies in recognising that AI’s benefits and risks are two sides of the same coin. Forward-thinking organisations must invest in both innovation and governance. They need to ensure that as they embrace the opportunities AI offers, they are also protected against its misuse. This isn’t just a matter of competitive advantage; it is a question of survival.

 


 

Dr. Megha Kumar is Chief Product Officer and Head of Geopolitical Risk at CyXcel

 

Main image courtesy of iStockPhoto.com and cagkansayin


© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543