Business Reporter

Goodbye, Latency and Centralisation. Hello, Real-Time On-Site Processing

Bruce Kornfeld at StorMagic explains how embracing a decentralised IT model with data processing at the edge can reduce latency and support AI


Having devoted years of effort to building centralised data centres and moving workloads to public clouds, growing numbers of organisations are now reconsidering this one-dimensional approach. They are looking to redistribute computing resources closer to where they are needed most, and today that is often at the edge.

 

Driven initially by the need to improve performance, lower costs or increase uptime, this shift has gained further momentum at small, decentralised sites because of the rapid pace of digital transformation and innovation, and AI is accelerating it further. The movement requires data centre-like performance and reliability in very small form factors.

 

By establishing processing resources close to the data source, on-site virtualisation overcomes the latency inherent in long-distance data transmission and cloud services. This is vital for applications that depend on rapid, real-time intelligent processing of vast quantities of data to deliver instantaneous responses, such as video analysis, equipment monitoring and retail transactions.
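To make the latency point concrete, here is a minimal sketch of a real-time latency budget. The figures are illustrative assumptions, not vendor measurements: a 30 fps video-analytics pipeline must deliver each result within roughly one frame interval, and a wide-area round trip to the cloud can consume that budget on its own.

```python
# Illustrative latency-budget check (hypothetical numbers, not vendor figures).
# A 30 fps video-analytics pipeline must produce a result within one frame
# interval (~33 ms) to stay "real time".

FRAME_INTERVAL_MS = 1000 / 30  # ~33.3 ms per frame at 30 fps

def meets_budget(processing_ms: float, network_rtt_ms: float) -> bool:
    """True if inference time plus any network round trip fits in one frame interval."""
    return processing_ms + network_rtt_ms <= FRAME_INTERVAL_MS

# On-site processing: no WAN round trip, so 20 ms of inference fits the budget.
on_site = meets_budget(processing_ms=20, network_rtt_ms=0)     # True
# Cloud processing: the same inference plus an assumed 80 ms round trip does not.
via_cloud = meets_budget(processing_ms=20, network_rtt_ms=80)  # False
```

The same arithmetic applies to any real-time workload: whatever the frame rate or transaction deadline, a fixed network round trip is subtracted from the processing budget before any useful work is done.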

 

In practice, better performance could speed up retail transactions in remote locations, improving customer satisfaction and retention. It could help maintain the physical security of premises by analysing live camera footage and generating immediate alerts and workflows when suspicious activity is detected, such as an intruder, tampering with alarms or the presence of an unknown vehicle. Another example is real-time processing of sensor data from machinery in remote locations to detect potential maintenance problems before a malfunction or complete breakdown occurs. Such insights, along with the automated initiation of follow-up actions, would minimise repair bills and prevent damaging outages before they could affect operations more severely.
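The predictive-maintenance example above can be sketched with a simple on-site anomaly detector. This is one common technique (a rolling z-score over recent sensor readings), not any particular product's method, and the window size and threshold are illustrative assumptions.

```python
from collections import deque
from statistics import mean, pstdev

# A minimal on-site predictive-maintenance sketch: flag a sensor reading as
# anomalous when it sits more than `threshold` standard deviations from a
# rolling baseline. Window and threshold values are illustrative assumptions.

class VibrationMonitor:
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # rolling baseline of recent values
        self.threshold = threshold

    def check(self, value: float) -> bool:
        """Return True (i.e. raise an alert) if the reading is anomalous."""
        anomaly = False
        if len(self.readings) >= 10:  # wait for a baseline before judging
            mu = mean(self.readings)
            sigma = pstdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomaly = True
        self.readings.append(value)
        return anomaly

monitor = VibrationMonitor()
for v in [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0]:
    monitor.check(v)          # build the baseline; no alerts yet
alert = monitor.check(9.0)    # sudden spike well outside the baseline -> True
```

Because the check runs where the data is generated, the alert (and any automated follow-up action) fires immediately, rather than after a round trip to a distant data centre.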

 

 

Enabling on-site decision-making

In fact, many enterprises depend on making informed decisions where data is generated to ensure efficient and profitable operations, and this is predicted to happen increasingly at the edge as reliance on AI grows. IDC estimates that global spending on edge computing will reach $378 billion by 2028, driven by the appetite for intelligent automation, real-time analytics and improved customer experience.

 

For organisations that have been battling to operate effectively in remote locations, far away from data centres and hampered by the performance limitations of cloud options, this change cannot come soon enough.

 

However, while edge computing champions decentralisation, cloud computing, despite its drawbacks, remains the foundation of many IT infrastructures. Choosing to move away from a cloud-first approach therefore raises understandable concerns and questions for business leaders and IT teams. There is no doubt that it requires a candid assessment of the cloud's current and future role within an IT architecture and, potentially, a bold change of direction.

 

Even if a review suggests that a more decentralised architecture is the way forward, those lacking onsite IT systems, resources, and expertise may not be able to make changes easily, especially if already tied into costly contracts with cloud vendors.

 

In scenarios such as these, incorporating a hyperconverged infrastructure (HCI) can facilitate the transition by creating a hybrid architecture that balances on-site virtualisation with centralised cloud functions.

 

 

Taking care of the latency issue

HCI wraps computing, networking and storage resources into a single, compact system, making it ideal for edge environments where power, space and IT resources are limited. HCI solutions have been honed to meet the demanding performance and availability requirements of distributed environments: each system intelligently assigns resources for optimum efficiency, automatically preventing both over- and under-provisioning.

 

But crucially, unlike a traditional server architecture that runs on specialist hardware and software, HCI uses virtualisation to create a lightweight architecture that doesn't compromise on computing power. In most instances just two servers are needed for high availability and performance, instead of three or more. With the latency problem taken care of, the customer effectively gets an enterprise-class infrastructure without disproportionate costs or excessive overheads.
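Two-node high-availability clusters commonly rely on a lightweight third-party witness to break ties and avoid "split-brain", where both nodes keep writing independently after losing contact with each other. The sketch below illustrates that general quorum technique; it is a simplified assumption-laden model, not any vendor's actual protocol.

```python
# Simplified two-node quorum logic with a witness (an illustration of the
# general split-brain-avoidance technique, not a specific vendor's protocol).
# A node keeps serving storage only while it can see a majority of the three
# voters: itself, its peer node, and the lightweight witness service.

def has_quorum(node_up: bool, peer_reachable: bool, witness_reachable: bool) -> bool:
    """Return True if this node holds 2 of the 3 votes and may keep serving."""
    votes = sum([node_up, peer_reachable, witness_reachable])
    return votes >= 2

# Normal operation: both nodes and the witness are reachable -> keep serving.
normal = has_quorum(True, True, True)           # True
# Peer fails, but the witness still sides with this node -> keep serving.
peer_down = has_quorum(True, False, True)       # True
# This node is fully isolated -> stand down rather than risk split-brain.
isolated = has_quorum(True, False, False)       # False
```

This is why only two servers plus a small witness, rather than a third full server, can be enough for high availability at the edge: the witness casts the deciding vote without the cost, power and space of another node.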

 

Minimising server requirements also keeps upfront investment lower, and less hardware uses less energy, needs fewer spares and requires little on-site maintenance, all of which helps to save costs.

 

 

The sweet spot

Additionally, a modern HCI includes all the functionality needed to run applications locally and connect to the cloud and data centre, even when they have different underlying architectures. Implementation is designed for simplicity, removing the need for specialist expertise: IT generalists can set up new sites or applications in as little as an hour. Meanwhile, cloud services can continue to complement HCI by handling centralised, non-time-sensitive tasks and applications such as batch processing, analytics, backups, development and testing platforms.
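The hybrid split described above reduces to a simple placement rule: latency-sensitive work stays on the edge HCI, everything else can go to the cloud. The sketch below makes that rule explicit; the task names and the 50 ms cut-off are illustrative assumptions, not a recommendation.

```python
# A toy workload-placement rule for the hybrid architecture described above:
# latency-sensitive tasks run on the edge HCI, non-time-sensitive tasks go to
# the cloud. The 50 ms cut-off and task names are illustrative assumptions.

EDGE_LATENCY_BUDGET_MS = 50  # assumed cut-off for "real-time" work

def place(task: str, latency_budget_ms: float) -> str:
    """Return 'edge' for real-time tasks and 'cloud' for everything else."""
    return "edge" if latency_budget_ms <= EDGE_LATENCY_BUDGET_MS else "cloud"

placements = {
    "video-analytics": place("video-analytics", 33),        # 'edge'
    "retail-checkout": place("retail-checkout", 50),        # 'edge'
    "nightly-backup": place("nightly-backup", 3_600_000),   # 'cloud'
}
```

In practice the decision also weighs data volume, bandwidth cost and compliance, but latency budget is the axis this article's examples turn on.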

 

Bringing together the strengths of HCI, cloud and edge computing creates the sweet spot: an architecture that is both powerful and adaptable. By combining the speed and local processing power of on-site HCI with the scalability and intelligence of the cloud, organisations can achieve the balance between performance and flexibility that works best for their own specific environment. And this hybrid approach doesn't just solve today's challenges around latency and data overload; it sets the stage for what's to come as AI continues to march outwards from the data centre to the edge.

 


 

Bruce Kornfeld is Chief Product Officer at StorMagic

 

Main image courtesy of iStockPhoto.com and Eoneren

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543