
How to Create Secure AI Environments

Cory Hawkvelt at NexGen Cloud provides a buyer’s guide to the security, sovereignty and compliance requirements of cloud computing


As the AI industry matures, demand is growing for tools that can operate safely in high-security and tightly regulated environments. Yet many organisations remain hesitant to adopt AI because of concerns around compliance, governance and data protection. Despite mounting evidence that AI can boost productivity, only 14% of firms have formally integrated AI into their corporate strategy, according to a 2024 Consultancy UK survey of financial services leaders.

 

In the age of AI, a secure private cloud means more than protecting systems from hackers. It has three dimensions: preventing data from being accessed by foreign governments, defending against malicious cyberattacks, and ensuring that an organisation operates within the boundaries of the law. Delivering these will require not just technical excellence but also a strategic pivot away from certain jurisdictions in order to re-establish control, instil user confidence and trust, and, ultimately, strengthen security.

 

The problem is that generative AI was not originally designed with these principles in mind. Early models prioritised performance and accessibility over data control, and many lacked the zero-trust architecture that enterprises now expect. The impact of that gap is already being felt: IBM reports that so-called “shadow AI” – staff using unsanctioned AI tools – adds an average of $670K to the cost of a data breach.

 

As experts continue to expose weaknesses in AI systems and regulators work to close the gaps, one thing is clear: building secure AI starts with the right infrastructure. Three pillars form the foundation of this approach: security, sovereignty and compliance.

 

 

Security 

Above all, the rise of AI has transformed how data is used and shared. When organisations interact with large language models, they are not only storing information; they are sharing context, intent and decision logic through every prompt and response. Do governments and enterprises truly want their most sensitive insights processed, and potentially observed, by foreign providers operating under different legal jurisdictions?

 

In the age of AI, access control means understanding who can view your data in motion, including the prompts, logs and interactions that reveal how your organisation actually works. Every exchange with a model leaves a trail that can expose business logic, strategies or decision patterns. Without strict isolation at the infrastructure level, that insight can end up in the hands of the platform provider or under the jurisdiction of another country. Secure-by-design environments prevent that by keeping visibility and control where they belong: with the customer.
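
To make this concrete, the short sketch below shows one way an organisation might keep that trail of prompts and responses entirely on infrastructure it controls when calling a privately hosted model, rather than relying on an external provider's logging. The endpoint URL, model name, response format and log path are illustrative assumptions, not any specific provider's API.

import json
import time
import urllib.request

# Illustrative sketch, not a specific provider's API: the endpoint is a
# privately hosted model inside the customer's own environment, and the
# audit trail of prompts and responses is written to storage that only
# the customer controls.
ENDPOINT = "https://llm.internal.example.com/v1/completions"  # assumed private endpoint
AUDIT_LOG = "/var/log/ai/prompt_audit.jsonl"                  # stays on customer infrastructure

def query_model(prompt, user_id):
    payload = json.dumps({"model": "internal-llm", "prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        completion = json.loads(response.read())["text"]  # assumed response shape

    # Record who asked what, and when, in a log the customer alone can read.
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "timestamp": time.time(),
            "user": user_id,
            "prompt": prompt,
            "response": completion,
        }) + "\n")
    return completion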

 

Standard security should also include regular checks, monitoring for unusual activity, and a straightforward procedure for responding to threats, minimising damage and enabling a quick recovery.
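
As an illustration of the monitoring step, the sketch below scans the kind of prompt audit log described above and flags any user whose activity is far above the norm, the point at which an incident procedure would take over. The file path and threshold are assumptions chosen for the example.

import json
from collections import Counter

AUDIT_LOG = "/var/log/ai/prompt_audit.jsonl"  # assumed location of the prompt audit trail
SPIKE_FACTOR = 5  # illustrative threshold: flag users issuing 5x the average prompt volume

def flag_unusual_activity():
    # Count prompts per user from the audit log.
    counts = Counter()
    with open(AUDIT_LOG, encoding="utf-8") as log:
        for line in log:
            counts[json.loads(line)["user"]] += 1
    if not counts:
        return []
    average = sum(counts.values()) / len(counts)
    # Anyone well above the average prompt volume is flagged for review.
    return [user for user, total in counts.items() if total > SPIKE_FACTOR * average]

if __name__ == "__main__":
    for user in flag_unusual_activity():
        print(f"Unusual prompt volume from {user}: escalate under the incident procedure")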

 

Security at its core rests on three principles: confidentiality, integrity and availability. These are the foundations of any secure AI environment, especially for mission-critical applications. While global standards such as SOC 2 and ISO 27001 provide useful benchmarks, they represent only part of the picture. NexGen Cloud, for example, already operates in line with several of these frameworks but, importantly, the company has concentrated its focus and resources on designing and building private environments tailored to its customers’ needs: meeting clients’ exact regulatory requirements, such as HIPAA, GDPR and region-specific compliance mandates, and ensuring that each deployment is built to the highest security expectations from the outset. As we near 2026, the company will also be building out its information security management system (ISMS) and expects certification to that standard in Q1 of next year, further shielding information and users from security risks.

 

 

Sovereignty 

“Sovereignty” is often taken to mean simply keeping data within geographic borders, but what customers really want is security. Sovereignty is an emerging requirement in the AI industry, driven as much by geopolitical tensions as by technical requirements. It cannot, however, be delivered by a simple geographic fix such as relocating infrastructure. Instead, it needs to be analysed through a legal lens: pivoting away from other jurisdictions, such as the US, can insulate users from legal risk, strengthen trust and enable sovereign control over information.

 

Following the introduction of the US CLOUD Act, many global cloud providers have tried to reassure customers by creating separate regional entities. But a legal restructure doesn’t guarantee sovereignty. If a company is headquartered in the United States, it still falls under US jurisdiction, which means authorities can request access to data stored anywhere in the world. By contrast, regional frameworks such as Europe’s GDPR or the UAE’s PDPL place strict limits on how and when data can be accessed or shared. 

 

Real sovereignty isn’t about the location of the servers; it’s about who has the legal right to reach them. In this sense, not all cloud providers offer the same level of sovereignty. Some keep data within regional borders but still depend on management systems or legal entities based elsewhere, which limits control. Others diversify their digital supply chain through colocation, or by partnering with firms headquartered in different jurisdictions, insulating their services and customers from geopolitical shocks while also reinforcing a legal jurisdictional advantage. For sectors like healthcare, defence, finance, public services and critical infrastructure, these differences matter, and getting it right can be the difference between weak and strong systems. Strict data protection laws mean organisations in these fields need complete visibility and control over how their AI systems are deployed and managed.
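
One simple way to picture this is a pre-deployment check that looks at both questions at once: where the data will sit and which legal entity operates the platform. The region codes and jurisdiction names below are illustrative assumptions rather than any provider's real configuration.

# Illustrative sovereignty check: data residency alone is not enough; the
# legal home of the operator matters as much as the location of the servers.
APPROVED_DATA_REGIONS = {"eu-west", "uk-south", "uae-central"}  # assumed region codes
APPROVED_OPERATOR_JURISDICTIONS = {"UK", "EU", "UAE"}           # assumed legal homes

def sovereignty_check(data_region, operator_jurisdiction):
    if data_region not in APPROVED_DATA_REGIONS:
        raise ValueError(f"Data region {data_region!r} is outside the approved borders")
    if operator_jurisdiction not in APPROVED_OPERATOR_JURISDICTIONS:
        raise ValueError(
            f"Operator based in {operator_jurisdiction!r} could be compelled to disclose "
            "data under foreign law, even if the servers stay local"
        )

sovereignty_check("uk-south", "UK")    # passes: data and legal control both stay in-region
# sovereignty_check("uk-south", "US")  # would fail: the data stays local, but legal reach does not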

 

 

Regulation and Compliance 

Different countries are taking very different approaches to AI regulation. Some are moving quickly to encourage innovation, others are taking a stricter approach, and some are finding middle ground. 

 

Compliance isn’t about company size; it’s about regulation. Organisations in highly regulated sectors, such as finance, healthcare, defence, and public services, must meet strict legal and operational requirements wherever they deploy AI. Cloud providers that incorporate compliance and regional sovereignty into their infrastructure provide customers with the assurance that their data and workloads remain within the bounds of local regulation. This is where specialist infrastructure firms are establishing a clear market advantage. 

 

Regulation is a particularly challenging area for the AI market because legislation on cloud providers and AI developers can interact in unexpected ways. Having experts who understand how these laws impact your business is like having a lawyer on retainer, but at a significantly lower cost.   

 

 

Building the Three-Legged Stool of a Secure Cloud

The three core principles of truly secure AI deployments – security, sovereignty and regulation – are all interlinked and together form the three-legged stool of a secure private cloud. These pillars provide a stable foundation for developing AI tools in the most heavily regulated, highest-security sectors, where AI can have the greatest impact, delivering security, control, trust and legal insulation.

 

Infrastructure is the key bottleneck for most AI developers operating outside the US and China, which together account for around 80% of all AI expenditure. Being able to identify and deliver proper, customised AI solutions is critical to closing this AI gap and democratising AI and its benefits.

 


 

Cory Hawkvelt is CTO of NexGen Cloud, the sustainable AI cloud solutions provider

 

Main image courtesy of iStockPhoto.com and Blue Planet Studio
