Kevin Cochrane at Vultr explores a critical blind spot growing more dangerous by the day: infrastructure decisions

Data Protection Day arrives this year as UK businesses race forward blind. AI adoption is accelerating faster than ever, with organisations increasing AI investments by an average of 40% over the next two years, yet the foundational question of where sensitive data actually lives remains worryingly unanswered.
The conversation around data protection has matured significantly since the UK formally adopted Data Protection Day in 2008. Identity management has become sophisticated. Security protocols are more robust than ever. Yet there’s a critical blind spot growing more dangerous by the day: infrastructure decisions.
You can’t protect what you can’t control, and you can’t control what you can’t find. It’s a simple truth - and one that becomes exponentially complex with agentic AI.
The AI sovereignty gap
Agentic AI represents a fundamental shift in how organisations process data. Unlike traditional applications that respond to explicit commands, these systems autonomously plan and execute tasks, making decisions on proprietary data without constant human oversight. They’re helpful. They’re also creating compliance challenges most organisations haven’t accounted for.
When an AI model trains on sensitive customer data, processes financial records, or makes decisions based on proprietary business intelligence, that data needs to live somewhere. Do you know where? Do you control it?
Recent moves by major AI providers to expand UK data hosting capabilities validate what many CISOs have quietly worried about: cross-border data transfers create compliance gaps that traditional cloud architectures weren’t designed to address. When your data gets replicated across multiple global regions - often without explicit visibility into which specific jurisdictions - you’re not just dealing with operational complexity. You’re accepting regulatory exposure that could fundamentally undermine your business.
GDPR and the Data Protection Act 2018 aren’t vague on this point. Organisations must know where personal data is processed and ensure adequate protections exist. Yet 68% of organisations report ‘shadow AI’ usage within their environments - employees adopting AI tools without IT oversight, often feeding sensitive data into systems where data residency is unknown.
Why retrofitting doesn’t work
Data centres have been recognised as the backbone of the UK’s digital economy, forming essential infrastructure alongside energy, water, and communications. The UK’s National Cyber Security Centre emphasises that protecting this infrastructure requires security built into the architecture itself, not added as an afterthought.
Security and data protection cannot be bolted on after systems are deployed. They must be engineered into the architecture. Fragmented responsibility, opaque control planes, and retrofitted safeguards increase systemic risk.
This principle becomes critical with AI workloads. Traditional applications process data in predictable patterns. You can audit after the fact, implement controls retrospectively, and patch vulnerabilities as they emerge. Agentic AI systems operate differently. While they work "on rails" - following pre-defined policies and workflows within set boundaries - they still make autonomous decisions on proprietary data to achieve specific business objectives. Once a model has trained on sensitive data, that exposure is permanent. The constrained autonomy that makes these systems valuable in regulated environments is precisely what makes data sovereignty non-negotiable.
The "move fast and break things" mentality that defines Silicon Valley innovation is fundamentally incompatible with AI development. Unlike previous technologies, where mistakes could be patched in later updates, AI systems trained on sensitive data create persistent exposure. You can’t unring that bell, and the echo reverberates throughout entire enterprises in ways traditional incident response frameworks weren’t designed to handle.
The compliance architecture question
UK businesses need to ask harder questions about their cloud infrastructure before deploying AI at scale: where is data actually processed, which jurisdictions does it cross, and who ultimately controls the systems it touches?
With 39% of UK businesses already using AI and another 31% seriously considering it, the infrastructure decisions being made today will determine whether organisations can responsibly scale AI or find themselves unable to retroactively fix compliance failures.
Building for what’s next
Data Protection Day 2026 needs to expand beyond perimeter security, identity management, and access controls. Those elements are essential, but they’re insufficient if the underlying infrastructure wasn’t designed with data sovereignty as foundational.
The organisations that will confidently scale AI whilst maintaining regulatory compliance are making infrastructure decisions today with sovereignty in mind. Not as a checkbox exercise, but as an architectural requirement shaping where workloads run, how data flows, and which providers earn their trust.
UK businesses deserve cloud partners who understand that data sovereignty isn’t negotiable. In the AI era, protecting data starts with knowing exactly where it lives.
Kevin Cochrane is Chief Marketing Officer at Vultr
Main image courtesy of iStockPhoto.com and bowie15

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543