

GenAI: Competitive advantage versus environmental cost 

Published 6 December 2024 in Artificial Intelligence • 5 min read

CEOs now expect GenAI to transform their businesses. IMD’s Tomoko Yokoi asks how adoption of the technology could affect carbon reduction and climate goals. 

Amid all the hype around generative artificial intelligence (GenAI), a major downside is usually overlooked: the environment. Yet the environmental impact of GenAI is so great that it threatens to undermine the fight against climate change and derail progress toward decarbonization targets.

The problem lies in the amount of energy required to power the data centers and related infrastructure on which GenAI depends. One recent study estimated that training just a single AI model generates around 285,000 kg of CO2. To put that number into context, getting one GenAI model up and running can produce almost five times the carbon emissions of the average US car over its entire lifetime, fuel included.
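As a rough sanity check on that comparison, the arithmetic can be sketched directly. The figures below are assumptions drawn from the widely cited study behind these numbers, which puts an average US car's lifetime emissions, fuel included, at roughly 57,000 kg of CO2:

```python
# Back-of-the-envelope check of the "almost five times" comparison.
# Assumed figures: ~285,000 kg CO2 to train one large AI model, and
# ~57,000 kg CO2 for an average US car over its lifetime, fuel included.
MODEL_TRAINING_KG = 285_000
CAR_LIFETIME_KG = 57_000

ratio = MODEL_TRAINING_KG / CAR_LIFETIME_KG
print(f"Training one model is roughly {ratio:.1f} car lifetimes of CO2")
```

The exact ratio depends on which model and which car fleet you assume, but the order of magnitude is the point.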


Big Tech under pressure

For the world’s big technology companies, this looks increasingly problematic. The energy demands of GenAI and data centers are threatening their ability to hit their own carbon-reduction targets. 

Google, for example, had promised to cut its emissions by 50% from a 2019 baseline, but its latest annual sustainability report, published in July, discloses that between 2022 and 2023 its overall emissions grew by 13.5%. Microsoft, which had promised to be carbon-negative by 2030, saw its total emissions increase by close to a third between 2020 and 2023. 

Emissions aren’t the only worry when it comes to GenAI, either. The cooling requirements of data centers are a major issue in the context of water scarcity. 

Research published in the science journal Nature warns that the tech sector has dramatically increased its water consumption in recent years. The report predicts AI demand will drive up these companies’ annual water withdrawal from the ground or surface sources to as much as 6.6 billion cubic meters by 2027. That’s about half the total annual water consumed by the UK’s population. 

In the context of its responsibility to society, any organization pinning its hopes of future growth on GenAI tools should find such numbers alarming. Sustainability professionals are growing increasingly worried about organizations’ conflicting sustainability and technological goals. A survey carried out by software company Salesforce found that 81% now believe that shrinking AI’s carbon footprint is vital.


Making (some) progress

In some quarters, efforts are already underway. Amazon Web Services (AWS) has begun looking at emission-free nuclear energy (at least from an operational standpoint) as a source of power for its data centers, encouraged by the rapid development of small modular reactors that can be installed locally. Google has bought a substantial stake in a Taiwanese solar power producer. It also claims its data centers are significantly more energy-efficient than those of its rivals. 

More refined software algorithms will also reduce the energy requirements of large language models (LLMs), and more energy-efficient hardware would help lower consumption further. Promoting energy efficiency in data centers and a general shift to renewable power sources – and perhaps a reconsideration of nuclear energy – will be pivotal. It is also possible to site data centers more smartly, ensuring they’re closer to access points for cleaner energy, for example.

In addition, it’s worth pointing out that many believe GenAI has a significant role to play in powering the calculations and analysis needed to deliver complex decarbonization solutions. For example, Google has developed an AI-powered tool that helps airlines reduce planes’ contrails, which trap large amounts of heat within the Earth’s atmosphere. Trials of the tool have seen a 54% reduction in contrail pollution. 

Other positive developments include advances in measurement tools that give a clearer view of the carbon footprint of GenAI. The Green Software Foundation, for example, has developed the Software Carbon Intensity specification. The Green Algorithms calculator makes it possible to estimate the carbon footprints of specific projects, including AI initiatives and high-performance computing. Stanford University researchers have published a framework for more reliable, straightforward, and accurate reporting of energy usage in machine learning (ML) systems.
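To make the measurement idea concrete, the Green Software Foundation’s Software Carbon Intensity (SCI) specification scores software as (energy consumed × grid carbon intensity, plus embodied hardware emissions) per functional unit. A minimal sketch of that formula, with purely illustrative numbers rather than real measurements:

```python
# Sketch of the Green Software Foundation's Software Carbon Intensity
# formula: SCI = ((E * I) + M) per R, where
#   E = energy consumed by the software (kWh)
#   I = carbon intensity of that energy (gCO2e per kWh)
#   M = embodied hardware emissions allocated to the workload (gCO2e)
#   R = the functional unit (e.g. per request, per training run)

def software_carbon_intensity(energy_kwh: float,
                              grid_intensity_g_per_kwh: float,
                              embodied_g: float,
                              functional_units: int) -> float:
    """Return gCO2e per functional unit."""
    operational = energy_kwh * grid_intensity_g_per_kwh
    return (operational + embodied_g) / functional_units

# Illustrative assumptions: a fine-tuning job drawing 1,200 kWh on a
# 400 gCO2e/kWh grid, with 50 kgCO2e of embodied emissions amortized
# to the job, serving 1,000,000 inference requests.
sci = software_carbon_intensity(1200, 400, 50_000, 1_000_000)
print(f"{sci:.2f} gCO2e per request")
```

The value of a standardized score like this is less the absolute number than the ability to compare the same workload before and after an efficiency intervention.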

Big Tech is also innovating in this area. AWS, Google, and Microsoft Azure have all produced carbon-accounting tools specific to their products and services. These at least provide standardized measures for different projects and initiatives that run on the same cloud service. 

More work to do

Nevertheless, stakeholder pressure on big tech and its customers will undoubtedly increase. Already, groups such as the Climate Action Against Disinformation coalition are issuing vocal warnings about the impact of AI. The campaign group Greenpeace has endorsed attempts to introduce new regulations around the assessment of AI’s environmental footprint.

The read-across for corporations and other organizations is disconcerting. CEOs who have promised to bring down emissions – and even to hit net zero in relatively short timeframes – may find GenAI use compromises their good intentions.

That creates a dilemma: should CEOs pass up the opportunity to exploit a technology they are consistently told offers them a route to competitive advantage or moderate their ambitions to decarbonize at speed?

In truth, neither strategy works. Instead, CEOs must deal with these issues head-on. Step one is to acknowledge the problem – to overtly recognize the potential impact of GenAI on their carbon footprints. Step two is to do something about it. And the good news is that there are possible mitigations.

One imperative is for large corporations to make more use of existing foundation LLMs rather than building their own models from scratch; this requires far less data, and therefore far less energy, reducing emissions accordingly. Equally, bigger is not always better – smaller models trained on carefully curated data may deliver better results while operating far more energy-efficiently.

Location matters too. Edge computing enables businesses to lower their energy use by processing data in more places – perhaps close to the business, reducing the distance data must travel, or in a data center with access to renewable energy. Similarly, having the right infrastructure can make a significant difference – AI models run on processors specifically developed for the purpose will use much less energy.

CEOs will rarely be best equipped to oversee these interventions. But they should be ready to challenge the CTO and the operations team on what they are doing in these areas – and the results they’re achieving. Investment in GenAI may now be non-negotiable for many businesses, but it must not come with a blank cheque for emissions.

Authors

Tomoko Yokoi

Researcher, TONOMUS Global Center for Digital and AI Transformation

Tomoko Yokoi is an IMD researcher and senior business executive with expertise in digital business transformations, women in tech, and digital innovation. With 20 years of experience in B2B and B2C industries, her insights are regularly published in outlets such as Forbes and MIT Sloan Management Review.
