Zoë Webster at Authentic Innovation argues that businesses cannot make sustainable AI decisions without transparency

Artificial intelligence is moving quickly, but the conversation about its environmental impact is only just beginning.
The number of UK data centres is expected to rise by nearly 20%, from around 475 to almost 575, with most of that growth likely to happen over the next five years. That gives a sense of the pace at which the underlying infrastructure is scaling.
At the same time, policymakers are starting to ask more direct questions about the energy demands behind this expansion and what it could mean for national electricity consumption and emissions. Recent warnings from the UK’s energy regulator suggest that proposed data centre projects could require more power than the country’s current peak demand.
Those questions matter for organisations that are increasingly relying on AI in their day-to-day operations. And these organisations may, at some point, need to consider a delicate balancing act between increasing AI usage and their sustainability commitments.
The reality is that even today, many businesses have very little visibility into the environmental footprint of the systems they are using. That makes it difficult to determine the trade-offs involved or to make informed decisions about how these technologies are deployed.
The infrastructure behind the interface
For many organisations, AI can appear relatively easy to integrate at first. A team might start experimenting with a generative AI tool or adopt an application that now includes AI features.
From a user perspective, the experience can feel almost frictionless, and many don’t necessarily consider the infrastructure behind it. Training and running large models requires vast computing power, specialised hardware and large-scale data centre capacity. That infrastructure consumes significant amounts of electricity and other resources, including water for cooling. Its footprint also extends beyond day-to-day operation, encompassing the resources required to manufacture the hardware itself and to generate energy throughout the lifecycle.
Taken together, the environmental impact spans multiple layers, from manufacturing, energy generation and cooling, through model training and testing, to applying trained models to new data.
What is striking is how little of that reality is visible to the organisations actually using these systems.
In many cases, businesses have no clear data on the energy requirements associated with the models they are deploying, how resource-intensive they are to run, or how those demands change at scale.
For companies trying to align technology investments with sustainability commitments, that lack of visibility can make it difficult to assess trade-offs or ensure that AI adoption is aligned with wider environmental goals.
A growing internal conversation
In many organisations, questions about the environmental impact of AI are not necessarily coming from technical teams. More often, they are being raised by people outside of engineering roles, prompted by what they are seeing and hearing in the media.
The challenge is increasingly surfacing across the wider business, as organisations begin to question how their technology choices align with their environmental goals.
Interestingly, these conversations are not always being driven from the top. There is often a degree of growing scepticism elsewhere in the organisation, while executive teams and boards remain focused on the opportunity side of the equation.
At that level, the emphasis is typically on productivity gains, innovation and competitive advantage. Businesses are under pressure to explore how AI can create value. But there is a need to strike a more deliberate balance between opportunity and risk, including the less visible costs associated with scaling these systems.
As adoption grows, environmental impact is starting to enter the conversation more often. Teams want to understand not only whether a system works, but what it requires in terms of resources to operate, and how that aligns with wider organisational commitments.
A supply chain with limited transparency
Part of the difficulty is that most organisations are several steps removed from the underlying infrastructure. That separation is built into the way most businesses access AI today.
Few companies are training large models themselves. Instead, they rely on a stack of cloud providers, model developers and software vendors who integrate AI capabilities into their products.
Each layer adds convenience, but it also adds distance between the user and the resources powering the system.
This means that even organisations with strong ESG commitments can struggle to assess the environmental impact of their AI usage. There is currently no widely adopted way to compare models based on energy consumption, compute requirements or operational efficiency.
If you are choosing between two AI tools today, there are numerous performance metrics to compare; environmental metrics, by contrast, are far harder to access.
Bigger is not always better
A related challenge is the assumption, often driven by current market narratives, that the most powerful models are always the most appropriate solution. Large generative models have captured public attention and, in many cases, they are genuinely impressive. But they are not the only form of AI available, nor are they always the most suitable for enterprise use cases.
The most successful deployments are rarely about using the biggest possible model. They are about understanding the problem clearly, working with the right data and choosing an approach that fits the task.
In many business contexts, smaller and more specialised models can perform extremely effectively when they are properly integrated into existing processes. They may also require far less computational power.
The current narrative around AI sometimes overlooks this. Scale is often equated with progress, when in practice, efficiency and appropriateness can matter just as much, if not more.
Why transparency matters
AI has massive potential across industries, from improving productivity to enabling entirely new services. But responsible adoption depends on understanding the trade-offs involved.
If organisations are expected to manage their environmental impact, they need the information required to make informed choices. That includes clearer visibility into the energy consumption and resource requirements associated with the AI systems they use.
Without increased openness from those building and supplying AI systems, it remains difficult for businesses to incorporate sustainability considerations into their technology strategies. Greater openness would also encourage healthy competition around efficiency, not only performance.
This early stage presents an opportunity
If transparency around environmental impact becomes a standard expectation now, it can help guide how the technology evolves as adoption accelerates. Efficiency and sustainability could become part of the design criteria for AI systems, rather than an afterthought.
Without that visibility, organisations risk scaling technologies whose resource implications they never fully understood.
AI will undoubtedly play a central role in the future of business. But as we continue to develop and deploy these systems, understanding the infrastructure behind them will be just as important as understanding the algorithms themselves.
Only with that level of transparency can organisations make truly responsible choices about how they use AI and stay on track to meet their sustainability targets.
Zoë Webster is CEO at Authentic Innovation and Vice Chair of Judges for the National AI Awards
Main image courtesy of iStockPhoto.com and Just_Super

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543