New frameworks around transparency, bias and security will continue to emerge, creating complexity for global businesses, explains João Freitas at PagerDuty

Government regulators across the globe have typically lagged behind technology innovation, and AI has only widened that gap, given the staggering progress made since Google, OpenAI and others brought LLMs into the mainstream.
As we move through 2026, governments will increasingly introduce new AI regulatory frameworks addressing model transparency, bias, fairness and security, pushing organisations to adapt their operations or risk incurring penalties. However, governments are also keen to boost the pace of innovation, and too much regulation in a fast-evolving field may slow it down.
The pace and direction of regulation will vary between regions, and there are marked differences that may fragment global innovation and cause measurable drag. In the EU, there are proposals to slow down elements of the AI Act to allow time for implementation without hampering innovation. On the other hand, the US and China have taken a more permissive approach with regulatory frameworks that encourage competition and speed from the outset, sparking a new AI arms race.
The uneven regulatory landscape makes AI innovation even more complex for cross-border organisations. As regulation continues to evolve in all regions, organisations need to be flexible and balance their need to innovate with the need to maintain compliance. Protecting data, while also making it accessible for AI to work with, becomes a challenge for operational control and ongoing compliance.
For digital operations leaders at the enterprise level, this means regulatory strategy can no longer sit solely within legal teams. AI governance must become an operational discipline. Engineering, security and compliance functions will need shared ownership of how models are selected, used, trained, deployed and monitored.
Rather than building separate systems, leading organisations will embed configurable governance controls (logging, traceability and human oversight thresholds) directly into their AI infrastructure. This will allow them to adapt to local requirements without redesigning core workflows.
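As a rough illustration of what "configurable governance controls" might look like in practice, the sketch below wraps a model response in a per-jurisdiction policy. All names and thresholds here are hypothetical, not PagerDuty's implementation: the point is that the same workflow can log prompts, record model versions and route low-confidence outputs to a human, with only the policy object varying by region.

```python
from dataclasses import dataclass, field

# Hypothetical per-jurisdiction governance settings applied to one shared AI workflow.
@dataclass
class GovernancePolicy:
    jurisdiction: str
    log_prompts: bool = True              # logging control
    record_model_version: bool = True     # traceability control
    human_review_threshold: float = 0.8   # confidence below this routes to a human

@dataclass
class Decision:
    output: str
    confidence: float
    audit_trail: dict = field(default_factory=dict)
    needs_human_review: bool = False

def apply_governance(policy: GovernancePolicy, prompt: str, output: str,
                     confidence: float, model_version: str) -> Decision:
    """Wrap a model response with the controls a local regulator may require."""
    trail = {"jurisdiction": policy.jurisdiction}
    if policy.log_prompts:
        trail["prompt"] = prompt
    if policy.record_model_version:
        trail["model_version"] = model_version
    return Decision(
        output=output,
        confidence=confidence,
        audit_trail=trail,
        needs_human_review=confidence < policy.human_review_threshold,
    )

# A stricter (hypothetical) EU threshold flags the same answer for human review,
# while a more permissive policy would let it through automatically.
eu = GovernancePolicy(jurisdiction="EU", human_review_threshold=0.9)
decision = apply_governance(eu, "Summarise incident #42", "Summary...", 0.85, "model-v3")
print(decision.needs_human_review)  # True under the stricter EU threshold
```

Swapping the policy object, rather than the workflow, is what lets core systems stay unchanged as local requirements diverge.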
Over time, we will see a shift from reactive compliance to regulatory resilience. Instead of responding to each new framework as it emerges, organisations will design AI systems with transparency and accountability built in by default. Clear documentation of model provenance, defined escalation paths for AI decisions and robust testing for bias and performance will become standard practice to enable the right response and the right proof of compliance, at the right time.
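"Transparency and accountability built in by default" can be as simple as generating a provenance record at deployment time rather than reconstructing one under audit. The sketch below is a minimal, assumed structure (field names and values are illustrative, not any standard schema) capturing model provenance, bias test results and an escalation path in one place.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: a provenance record kept alongside every deployed model,
# so compliance evidence exists before a regulator asks for it.
def provenance_record(model_name, version, training_data_sources,
                      bias_test_results, escalation_contact):
    return {
        "model": model_name,
        "version": version,
        "training_data_sources": training_data_sources,  # where the data came from
        "bias_tests": bias_test_results,                 # e.g. per-group metric gaps
        "escalation_contact": escalation_contact,        # who owns AI decision escalations
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    "incident-summariser", "3.1",
    ["internal-tickets-2024"],
    {"demographic_parity_gap": 0.02},
    "ai-governance@example.com",
)
print(json.dumps(record, indent=2))
```

Because the record is created when the model ships, producing "the right proof of compliance at the right time" becomes a lookup rather than a forensic exercise.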
Increasingly pragmatic regulatory environments do not necessarily mean lighter oversight. They are more likely to mean targeted scrutiny of high-risk use cases, combined with greater freedom to experiment in lower-risk domains. Organisations that can clearly demonstrate risk assessment and governance maturity will find regulators more willing to support innovation.
In this environment, competitive advantage will certainly favour those who treat compliance as part of product and operational design, rather than a final checkpoint before deployment. The organisations that succeed from 2026 into 2027 will be the ones that can innovate confidently across multiple jurisdictions without friction.
The future of AI regulation will remain uneven across the geopolitical landscape, but for digital operations teams the immediate priority is clear: build systems that can flex with regulation, rather than betting on any single regulatory direction.
João Freitas is GM & VP of Engineering, AI & Automation at PagerDuty

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543