Business Reporter

From shadow AI to strategic advantage

Hande Sahin-Bahceci and Simone Larsson at Lenovo Infrastructure Solutions (ISG) explore how shadow AI can expose organisations to data leakage and cyber-attacks, and explain how this problem can be managed

Over the last few years, AI has become woven into the fabric of everyday business, but as the initial excitement settles, organisations are confronting the realities of putting it to work effectively. Among the most pressing issues is the rise of shadow AI - employees independently adopting AI tools without the knowledge or approval of IT. What often begins as an attempt to move faster or work smarter can quickly introduce significant risk. Unapproved tools may expose sensitive data, fall short of regulatory requirements, or rely on platforms that lack proper security safeguards.


Although IT and security leaders are increasingly aware that this phenomenon exists, many still fail to grasp how widespread and complex it has become. AI capabilities are rapidly being integrated into common workplace software and development environments, making informal usage harder to detect and control. Addressing this challenge requires more than tighter monitoring. It calls for coordinated leadership across departments, clear governance frameworks, and a thoughtful balance between enabling experimentation and protecting the enterprise.


Grasping the nature of the risk

Shadow AI is no longer just a fringe concern; the threat it represents runs deeper and broader than is often understood. Businesses tend to think first of tools such as ChatGPT being used by employees outside of IT’s purview, but this is just scratching the surface. In reality, the threat encompasses a far wider range of risks.


Many enterprise software platforms now include embedded AI features that operate unnoticed, introducing hidden risks. Internal developers may deploy unverified, unsupported open-source AI solutions without adequate testing or governance. Third-party or partner AI models can be integrated into systems without proper vetting, creating additional exposure. Even when AI tools are developed in-house, poorly managed lifecycles can lead to issues such as model drift or data poisoning - a type of cyber-attack that manipulates or corrupts the data sets used by AI models. Such problems often go undetected until they cause significant harm.
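
The lifecycle failures mentioned above can be caught early with basic statistical monitoring. The sketch below is a simplified illustration rather than a production approach: it flags input drift by measuring how far live data has shifted from the training baseline (all names and the threshold are assumptions):

```python
# Illustrative input-drift check: compare live feature values against
# the training baseline. Names and the threshold are assumptions; real
# systems would monitor many features with purpose-built tooling.
from statistics import mean, stdev

def drift_score(baseline: list, live: list) -> float:
    """Standardised shift of the live mean relative to the baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(live) - mu) / sigma if sigma else 0.0

def check_drift(baseline: list, live: list, threshold: float = 3.0) -> dict:
    score = drift_score(baseline, live)
    return {"score": round(score, 2), "drifted": score > threshold}

baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]    # training-time feature values
print(check_drift(baseline, [10.0, 10.2, 9.9]))  # small shift: not flagged
print(check_drift(baseline, [14.9, 15.2, 15.1])) # large shift: flagged
```

The principle, not the statistics, is the point: an unmonitored model fails silently, while a monitored one raises an alert.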


The magnitude of the threat landscape demonstrates that shadow AI isn’t simply an end-user issue. It represents a deeper integration of ungoverned AI throughout the enterprise tech stack, which is more complex and harder to manage.


Taming shadow AI takes a team

Responsibility for protecting a business against the threats of shadow AI must not rest solely with IT. Many enterprises and SMBs lack the technical AI workforce to monitor every implementation or use case, so multiple departments and personnel must be prepared to share that responsibility. From CSOs to compliance teams, every business function should form part of an integrated governance framework. This federated model ensures that governance is embedded where AI decisions are actually made, from app development and vendor procurement to marketing analytics and HR automation.


The next step is for businesses to recognise that human oversight can only go so far. AI’s speed and scale demand automated governance mechanisms. For instance, AI agents can be deployed internally to monitor compliance, enforce policy and even educate employees in real time. AI governance tools must become part of the enterprise stack to scale oversight effectively.
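
As a hedged illustration of what such automated policy enforcement might look like, the sketch below screens outbound prompts for sensitive patterns before they reach an external AI service. The patterns and the block/allow policy are illustrative assumptions, not a complete data-loss-prevention rule set:

```python
# Hedged sketch of an automated governance check: screen outbound
# prompts for sensitive patterns before they reach an external AI
# service. Patterns and policy are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),  # hypothetical key format
}

def screen_prompt(prompt: str) -> dict:
    """Return whether a prompt may leave the organisation, and why not."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return {"allowed": not hits, "violations": hits}

print(screen_prompt("Summarise our Q3 roadmap"))             # allowed
print(screen_prompt("Email jane.doe@example.com the deck"))  # blocked: email
```

A real deployment would sit in the network path or in an API gateway and would pair blocking with real-time user education, as described above.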


Applying the same lessons

In many respects, AI should not be treated as fundamentally different from traditional applications or software. That said, while much of the conversation around shadow AI focuses on employees using tools without IT oversight, shadow AI is a broader concept than traditional ‘shadow IT’, largely because of the AI capabilities now embedded into enterprise software platforms themselves.


Mitigating these threats can be a challenge, but businesses would do well to apply the same best practices that have long governed DevOps: validation, testing, version control and ongoing monitoring. Establishing clear processes and best practices for IT and development teams is essential.
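
Applied to AI, that DevOps discipline can be as simple as a release gate in the CI pipeline: no model ships unless it clears a fixed check against a held-out data set. A minimal sketch, in which the data and threshold are assumptions:

```python
# Sketch of a CI-style release gate for an AI model, mirroring DevOps
# practice: no model ships unless it clears a fixed accuracy check on
# a held-out set. Data and threshold are illustrative assumptions.

def validate_release(predictions: list, labels: list,
                     min_accuracy: float = 0.9) -> bool:
    """True only if held-out accuracy meets the release threshold."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels) >= min_accuracy

# Example: 9 of 10 held-out predictions correct -> accuracy 0.9, gate passes.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(validate_release(preds, labels))  # True
```

Version-controlling the model, the data and this gate together gives the audit trail that ungoverned AI lacks.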


This is also where external partners can add significant value, offering capabilities such as solution validation, advisory services, automated management tools and access to proven governance frameworks that help integrate AI safely and sustainably across the enterprise. By applying software engineering rigour to AI development, enterprises can mitigate risks without stifling innovation.


Keeping control with private infrastructure

A clear distinction must also be made between public and private AI. When employees input confidential data into public AI tools, the risks are comparable to publishing that information openly on the internet. These public models often operate as black boxes, with limited visibility into how data is stored, used, or potentially retained for future model training. This can create significant concerns around data privacy and security.


To address this, enterprises should consider investing in private AI infrastructure. With private, domain-specific AI models, organisations can maintain full control over their data, training processes and system behaviour. These models can be continuously updated and managed within a secure environment, ensuring compliance with internal policies and regulatory requirements. Ultimately, controlling shadow AI requires the ability to govern the underlying systems themselves.
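
In practice, keeping control often comes down to a routing decision: requests involving classified data go only to internally hosted models. A minimal sketch, with hypothetical endpoints and labels standing in for real services:

```python
# Minimal sketch of classification-based routing: confidential data is
# only ever sent to a privately hosted model. Endpoint URLs and labels
# are hypothetical, not real services.

PRIVATE_ENDPOINT = "https://ai.internal.example.com/v1"  # assumed internal host
PUBLIC_ENDPOINT = "https://api.public-llm.example/v1"    # assumed approved tool

def route_request(classification: str) -> str:
    """Pick the AI endpoint based on the data's classification label."""
    if classification in {"confidential", "restricted"}:
        return PRIVATE_ENDPOINT
    return PUBLIC_ENDPOINT

print(route_request("confidential"))  # routed to private infrastructure
print(route_request("public"))        # routed to the approved public tool
```

The design choice matters more than the code: once routing is explicit and enforced, governing the underlying systems becomes possible.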


Creating a safe space to harness AI

When organisations create space to test and explore AI within clear guardrails, they unlock innovation without opening the door to unnecessary exposure. The objective isn’t to limit AI use; rather, it is to identify where vulnerabilities lie and respond with intelligent, proportionate controls.


That requires building an ecosystem where AI systems are visible, traceable and supported by robust operational foundations. With the right platforms in place, automation embedded into workflows, DevOps disciplines applied to AI lifecycles, and firm boundaries drawn between open and closed environments, companies can innovate with confidence. Those that strike this balance will be the ones that turn AI from a risk into a competitive advantage.



Hande Sahin-Bahceci is Infrastructure Solutions & AI Marketing Manager, and Simone Larsson is Head of Enterprise AI, EMEA at Lenovo Infrastructure Solutions (ISG)


Main image courtesy of iStockPhoto.com and barisonal


© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543