Kevin Cochrane at Vultr argues that businesses don’t need trillion-parameter AI models trained on the entire internet. They need AI that fits their needs.
When it comes to AI, sheer size isn’t strategy. The prevailing narrative, amplified by hyperscalers and echoed across headlines, tells enterprises they need the biggest model on the market to stay competitive. This obsession with scale distracts from what really matters: control, efficiency, and performance. Businesses don’t need trillion-parameter models trained on the entire internet. They need AI that fits their needs.
At the heart of the hype is a disingenuous bending of the axiom “innovate or fall behind.” The false dichotomy is that to innovate, you must adopt a general-purpose Large Language Model (LLM). But what’s being sold as cutting-edge is impractical. Frontier models might impress at the bleeding edge, but they were never designed to tackle everyday business problems or operate efficiently within real-world infrastructure.
We’re seeing real innovation in Small Language Models (SLMs) built from the ground up for critical tasks, where precision, governance, and deployment efficiency matter more than sheer scale.
From supersize to smart size
SLMs offer a more practical way for businesses of all sizes to use AI. These models are smaller by design, faster to fine-tune, and easier to audit. Unlike black-box APIs, SLMs can be deployed on-premises, in the cloud, or in hybrid environments — giving enterprises full control over performance, privacy, and compliance.
And they scale. Not through parameter count, but through modular design. Enterprises can deploy a network of specialised models, each tuned to a particular function such as processing claims, drafting compliance documentation, or triaging customer requests, without trying to stretch a single model across unrelated use cases.
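This modular pattern can be sketched in a few lines: a registry of task-specific model endpoints with a thin router in front. All names below are hypothetical, and each handler is a stand-in; in a real deployment it would call the API of a fine-tuned SLM running on-premises or in the cloud.

```python
# A minimal, hypothetical sketch of a "network of specialised models":
# one small model per business function, with a router dispatching to them.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SLMEndpoint:
    """Stand-in for a deployed small model (on-prem, cloud, or hybrid)."""
    name: str
    handler: Callable[[str], str]  # in practice: an HTTP call to the model

def make_registry() -> Dict[str, SLMEndpoint]:
    # Each model is tuned to one narrow function, not stretched across all of them.
    return {
        "claims": SLMEndpoint("claims-slm", lambda t: f"[claims-slm] processed: {t}"),
        "compliance": SLMEndpoint("compliance-slm", lambda t: f"[compliance-slm] drafted: {t}"),
        "triage": SLMEndpoint("triage-slm", lambda t: f"[triage-slm] routed: {t}"),
    }

def route(task_type: str, text: str, registry: Dict[str, SLMEndpoint]) -> str:
    """Dispatch a request to the specialised model registered for its task type."""
    endpoint = registry.get(task_type)
    if endpoint is None:
        raise ValueError(f"No specialised model registered for '{task_type}'")
    return endpoint.handler(text)

registry = make_registry()
print(route("claims", "water damage claim", registry))
```

The design point is that scaling happens by adding entries to the registry — another narrow model, another function — rather than by growing any single model's parameter count.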
In sectors like finance, healthcare, and public services, where data sovereignty and explainability are non-negotiable, SLMs offer a viable path to secure, high-impact deployment at scale. Their power lies in being specialised, not supersized.
The infrastructure bottleneck
In the UK, small and medium-sized enterprises (SMEs), few of which can pour billions into their infrastructure, account for 99.8% of the business population. The fallacy that capacity-hungry frontier deployments are essential risks locking those enterprises into inescapably enormous hyperscaler cloud bills.
SLMs open up an entirely new direction of travel. They’re often lean enough to run within existing tech stacks or in a lightweight cloud environment. This minimal footprint enables a move from model-centric AI to infrastructure-aligned AI, where deployment is feasible, scalable, and sustainable.
Closed-source LLMs have a role to play, but they aren’t the only game in town. All too often they impose on enterprises a rigid, opaque framework, creating growing concerns around escalating costs, vendor lock-in, and a troubling lack of transparency. By contrast, open-weight or proprietary SLMs empower businesses with the agility to customise, govern, and future-proof their AI investments. These models can be trained on proprietary data within secure, controlled environments, ensuring accountability and compliance remain front and centre.
This approach is not a concession but rather a hallmark of the maturity that thoughtful AI adoption demands.
Vive la démocratie
Beyond the much-discussed infrastructure bottleneck lies an equally critical challenge: the human capital chasm. The prevailing narrative, once again driven by hyperscalers, implies that meaningful AI development requires an army of PhDs and elite data scientists, a talent pool out of reach for almost everyone else. This creates a dangerous barrier to entry, concentrating AI power within a handful of tech giants and leaving other enterprises as passive consumers of innovation, not active participants.
Small Language Models dismantle this barrier. They democratise AI by design, making innovation more accessible to a wider range of technical staff. Suddenly, a company’s existing skilled developers and IT professionals can become effective AI builders, fine-tuning and deploying models that solve specific business problems.
This fundamentally shifts the dynamic from ‘renting’ innovation through opaque APIs to owning the development process. Instead of competing for scarce and prohibitively expensive specialists, businesses can empower their own teams, bridging the skills gap and fostering a sustainable culture of ground-up innovation. This is how true AI capability is built for the long term, moving past the hype to deliver real value.
The future is specialised
True AI maturity transcends sheer model size; it manifests in tangible business outcomes. Chasing the latest and greatest models often leads to unnecessary investments with limited returns, essentially offering enterprises golden handcuffs. Small Language Models offer a more pragmatic and strategically sound pathway, prioritising precision over brute force, reliable deployment over speculative experimentation, and operational control over opaque abstraction. This is the blueprint for sustainable, impactful AI integration.
The future belongs to AI models tailored to unique business needs, designed to work within existing infrastructure, and focused on delivering consistent, measurable results.
This shift marks a new era: the ‘Intelligent Age’, where AI moves from a distant, speculative frontier to a practical, reliable tool driving real-world innovation and value.
Kevin Cochrane is Chief Marketing Officer at Vultr
Main image courtesy of iStockPhoto.com and BlackJack3D
© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543