Sam Peters at ISMS.online explores a regulatory lesson for tech providers
WhatsApp’s encryption capabilities have been a key part of its meteoric rise, securing nearly three billion users through its promise of a private, secure space for conversations between real people.
The platform’s end-to-end encryption has become synonymous with digital privacy and made it the messaging app of choice for huge numbers of people – from concerned parents to high-profile executives.
But Meta AI’s new chatbot feature is raising uncomfortable questions about how committed the company truly is to protecting user privacy.
The chatbot, whilst technically optional, cannot actually be removed by users. Perhaps more jarring is Meta’s explicit advice that users should not share anything sensitive with the AI assistant – a recommendation that arguably contradicts WhatsApp’s founding promise of security and discretion.
The broader Big Tech trust deficit
This fundamental disconnect is fuelling growing distrust towards a platform that has become essential infrastructure for global communication. But WhatsApp’s privacy contradiction reflects a broader reputational crisis facing Big Tech.
The pattern of pushing innovation without meaningful user consent, once celebrated as Silicon Valley’s ‘move fast and break things’ philosophy, now creates legal and ethical friction that threatens long-term sustainability. When AI features are rolled out not because users demand them, but to pre-emptively secure future market dominance, the trust gap inevitably widens.
This erosion extends beyond Meta. The fundamental issue lies in Big Tech’s approach to innovation, which prioritises competitive advantage over user agency. Companies repeatedly introduce features that serve business interests while sidelining user preferences, creating a pattern of behaviour that undermines the very relationships these platforms depend on for their success.
Apple and Meta’s rejection of the EU’s AI safety pact in September 2024 sent a clear signal about their priorities and willingness to engage with regulatory frameworks.
The EU AI Pact represents more than bureaucratic box-ticking – it embodies a fundamental shift towards accountability in artificial intelligence development. Through promoting AI awareness, identifying high-risk systems and adopting governance strategies, the pact offers a roadmap for responsible innovation.
The refusal to participate suggests a preference for unilateral decision-making over collaborative governance, which is precisely the attitude that has eroded public trust. While over 100 companies, including Amazon, Google, Microsoft and OpenAI, signed the voluntary pledge to develop safe and trustworthy artificial intelligence, other major players such as Anthropic and TikTok also declined to join, highlighting the industry’s divided stance on voluntary regulation.
Microsoft makes a move
Microsoft, at least, appears to recognise this challenge. In response to mounting trade tensions and regulatory pressure from European governments, the company has committed to stronger privacy safeguards for EU-based customers and challenging US government data requests in court when necessary.
These actions follow the Dutch Parliament’s passing of eight motions urging its government to replace US-made technology with local alternatives, where more robust tech ethics and privacy legislation such as GDPR make them the more ethical choice.
Microsoft President Brad Smith’s recent announcement of five digital commitments to Europe goes far beyond defensive positioning – rather, it is a strategic recognition that trust has become a competitive differentiator. The company plans to increase European datacentre capacity by 40% over the next two years, expanding operations across 16 European countries. This investment acknowledges that data sovereignty concerns are fundamental business requirements.
The Microsoft Cloud for Sovereignty package shows how compliance can become a product feature rather than a burden. By offering greater control over data location, encryption and administrative access, Microsoft is turning regulatory requirements into competitive advantages. This shift represents a departure from the traditional Silicon Valley approach of asking for forgiveness rather than permission.
The compliance advantage
Given how fast change continues to unfold, regulatory compliance offers a genuine competitive edge. Companies that proactively embrace frameworks such as GDPR, ISO 27001 for information security management, and the emerging ISO 42001 for AI management systems can position themselves as trustworthy partners in an increasingly sceptical market.
Meanwhile, the shift towards local alternatives is a sign of growing demand for technology built on compliance, governance and trust. European businesses, in particular, are recognising that working with providers that understand and respect regulatory frameworks reduces their own compliance risk. This creates something of a virtuous cycle where compliance becomes a market differentiator rather than a cost centre.
For multinational companies, this regulatory landscape presents both challenges and opportunities. The emergence of different standards across jurisdictions – from the EU’s AI Act to developing frameworks in the UK, Canada and Australia – creates complexity.
However, organisations that adopt internationally recognised standards find themselves better positioned to navigate this patchwork of regulations. This means that even multinational companies operating in deregulated environments often maintain strict governance standards to access global markets, particularly in sectors where security-conscious enterprises enforce compliance throughout their supply chains.
Building credibility through governance
The message for technology providers is clear – trust is no longer automatically granted based on innovation alone.
Users, regulators and enterprise customers increasingly evaluate technology choices through the lens of governance, accountability and ethical considerations. Companies that fail to recognise this shift risk finding themselves excluded from markets where trust and compliance are prerequisites.
This transformation is particularly evident in artificial intelligence, where the stakes are highest. As AI systems become more pervasive and powerful, the potential for harm grows exponentially. Unresolved issues around bias, security vulnerabilities and intellectual property rights make governance frameworks essential for risk mitigation.
Therefore, organisations that implement robust AI governance frameworks are not simply protecting themselves from regulatory action; they are also demonstrating to customers and partners that they take their responsibilities seriously.
And the cost of poor governance extends beyond regulatory fines. Reputational damage, customer churn and exclusion from enterprise contracts can far exceed the investment required for proper compliance frameworks.
Trust as competitive currency
Looking ahead, the technology sector must grapple with a new reality: trust has become the most valuable asset of all.
This transformation requires more than policy documents and privacy statements. It demands a genuine embedding of governance into product development, transparency in business practices, and a willingness to prioritise user rights over short-term commercial interests. Those that make this transition early will shape industry standards rather than simply complying with them.
Microsoft’s European commitments, whilst clearly motivated by commercial interests, point towards a future where compliance and trust drive innovation rather than constrain it. As governments worldwide develop more sophisticated regulatory frameworks, it is vital for enterprises to view governance as a proper foundation for long-term and sustainable growth.
We are working in a global market where trust must be earned through transparency and accountability, and the stakes are particularly high given the current geopolitical tensions around technology governance. With the US administration pushing for deregulation whilst Europe strengthens its regulatory framework, companies face the challenge of navigating divergent approaches to AI and data governance.
However, organisations that commit to high standards regardless of these political shifts are much more likely to position themselves to succeed across every market they operate in.
Sam Peters is Chief Product Officer at ISMS.online
Main image courtesy of iStockPhoto.com and KamiPhotos
© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543