
The impact of open-source AI on the enterprise

Joe Logan at iManage explores the benefits and perils of using open-source AI models

There are many legitimate drivers behind enterprise adoption of open-source AI, from lower costs to greater agility and faster innovation. However, enterprises must weigh these anticipated benefits against the potential risks when choosing between open-source and commercial models, carefully considering factors such as security, intellectual property, and liability. How best can those risks be managed?

Don’t be the first to jump…

Open-source AI models offer rapid, iterative updates thanks to their crowdsourced development. However, as a basic security measure – and simply good practice – enterprises should avoid automatically adopting the latest release of an open-source AI model the moment it becomes available.

Instead, they should allow the community time to vet new versions before “upgrading” to a new release. In doing so, enterprises reduce the risk of inadvertently introducing security vulnerabilities and privacy concerns into the organisation. This approach balances speed with safety, enabling organisations to benefit from innovation without sacrificing security or reliability.
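
For teams that deploy models programmatically, one practical way to enforce this discipline is to pin deployments to a specific, vetted model revision rather than tracking the latest release. Below is a minimal sketch using the Hugging Face transformers library; the model ID and commit hash are placeholders for whatever your team has actually reviewed, not real artefacts.

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/example-model"  # hypothetical model repository
VETTED_REVISION = "1a2b3c4d"            # hypothetical commit hash your team has vetted

# Pinning `revision` to a specific commit prevents a silent upgrade to
# whatever the repository's default branch currently points at.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=VETTED_REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=VETTED_REVISION)

Moving to a newer revision then becomes a deliberate act: someone reviews the release, updates the pinned hash, and the change is visible in version control.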

…but don’t be a laggard, either

Just as enterprises shouldn’t rush to adopt every new open‑source model release, they also shouldn’t fall too far behind. When newer versions address known vulnerabilities – whether related to model integrity, hallucination rates, prompt‑injection defences, bias, or other reliability issues – staying on an outdated version can introduce unnecessary risk.

The goal is to find the right pace: not upgrading reflexively, but not allowing known weaknesses to linger. Organisations should evaluate each new release through a risk‑based lens, weighing what is known about the previous version’s shortcomings against what the updated version resolves. In some cases, moving forward quickly reduces exposure; in others, a more deliberate approach is warranted.

Ultimately, enterprises need a structured process for determining when to advance to a newer model version – one that aligns upgrade decisions with the specific risks, use cases, and operational requirements at hand.

Test for efficacy

Finding the right upgrade cadence is only part of handling open-source AI responsibly. Organisations must also rigorously validate whether a model is fit for purpose. Put simply: is it actually doing what it's supposed to do, and doing it to the standard required?

Structured testing is essential to accomplishing this. That starts with a clear, well-defined use case for the model – whether a document-comparison exercise or a financial-analysis task – and then stress-testing the model against that use case.
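
As an illustration, a fitness-for-purpose check can be as simple as a scripted evaluation: a curated set of test cases for the defined task, run against the model, with an acceptance threshold agreed in advance. In the Python sketch below, the cases and threshold are illustrative, and run_model() is a stand-in for however your stack actually invokes the model, not any particular product's API.

# Illustrative fitness-for-purpose test for a document-comparison use case.
TEST_CASES = [
    # (document_a, document_b, expected_verdict)
    ("Payment is due within 30 days.", "Payment is due within 30 days.", "identical"),
    ("Payment is due within 30 days.", "Payment is due within 60 days.", "different"),
]

ACCEPTANCE_THRESHOLD = 0.95  # agreed with the business owner of the use case


def run_model(doc_a: str, doc_b: str) -> str:
    """Stand-in for the real model call (API request or local inference)."""
    return "identical" if doc_a == doc_b else "different"


def evaluate() -> float:
    """Return the fraction of test cases the model answers correctly."""
    passed = sum(run_model(a, b) == want for a, b, want in TEST_CASES)
    return passed / len(TEST_CASES)


if __name__ == "__main__":
    accuracy = evaluate()
    print(f"Accuracy on defined use case: {accuracy:.0%}")
    if accuracy < ACCEPTANCE_THRESHOLD:
        raise SystemExit("Model is not fit for purpose; do not promote this version.")

Running the same script against every candidate release turns "is it performing as intended?" from a judgement call into a repeatable gate.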

Critically, there should be transparency into the model's reasoning: how did it arrive at a particular answer? The reasoning path should be visible and auditable, allowing human reviewers to follow the logic, scrutinise the outputs, and independently verify the results without relying on the AI itself. That's the only way to build sufficient comfort with the model's capabilities.
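
In practice, auditability starts with logging every interaction with enough context that a reviewer can later retrace and independently check the result: the model version, the prompt, the output, and whatever reasoning trace the model exposes. The sketch below assumes a simple JSON-lines log; the field names and file path are illustrative, not a standard.

import json
from datetime import datetime, timezone

AUDIT_LOG = "model_audit.jsonl"  # illustrative path


def record_interaction(model_id: str, revision: str, prompt: str,
                       output: str, reasoning: str) -> None:
    """Append one auditable record per model call, as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "revision": revision,    # the pinned version, so results are reproducible
        "prompt": prompt,
        "output": output,
        "reasoning": reasoning,  # whatever trace the model exposes, if any
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")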

Avoid any liability landmines

Liability is another critical consideration. Open‑source models vary widely in how they are trained, what data they rely on, and what rights accompany that data. Without clarity on those factors, organisations risk inheriting copyright exposure or other legal obligations they did not anticipate.

Enterprises should ensure that any open-source model they adopt comes with transparent documentation of its training data, licensing terms, and permitted uses. Even in open-source ecosystems, models are typically governed by terms and conditions that define what the developers have the right to train on – and what downstream users are allowed to do with the outputs. Those terms need to be reviewed carefully to understand where liability sits and how it may affect the organisation's risk posture.

This becomes especially important when model outputs contribute to proprietary work. If an AI system plays a role in developing an organisation’s intellectual property, leaders must understand whether the model’s licensing terms could create claims on that IP or introduce restrictions on how it can be used. Clear governance around training data rights, model licensing, and downstream use is essential to ensuring that open‑source AI strengthens – rather than complicates – an organisation’s legal position.

Balancing innovation with responsibility

While the downside risks of open-source AI can be managed, there will be cases where enterprises determine that a commercial, licensed model is the more appropriate choice. Closed‑model tools can offer clearer guardrails, stronger assurances around data handling, and a more predictable liability profile – all of which may be essential in highly regulated or high‑stakes environments.

But risk management should not come at the expense of progress. Organisations still need room to experiment, iterate, and explore new capabilities. The key is to pair innovation with discipline: understand where open-source models create opportunity, where they introduce exposure, and where a different approach is called for. The art of the possible is not only about what these systems can do – it's also about anticipating where they may fall short and having a well-thought-out plan for moving forward safely, responsibly, and with confidence.

Joe Logan is CIO of iManage

Main image courtesy of iStockPhoto.com and BlackJack3D
