Towards lower emissions
Still, measuring greenhouse gas emissions is only the first hurdle to clear. Industry leaders will also want to take practical but definite steps towards mitigating their climate change impacts, and to be seen to be doing so. That will require them to address AI-related emissions at each stage of the lifecycle of every project, from development and training through to deployment and usage.
There are multiple opportunities to explore. By some estimates, switching to renewable energy grids for the training of LLMs could reduce the emissions from this stage of an AI initiative by a factor of 40. There is also a need for more efficient GPUs, as well as the possibility of buying carbon credits for emissions that can't easily be avoided.
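To illustrate why the choice of grid matters so much, the sketch below applies the simple relationship that operational training emissions are roughly the energy consumed multiplied by the carbon intensity of the electricity used. The energy figure and the grid intensities are hypothetical placeholders chosen only to show the order of magnitude involved, not measurements from any particular model or provider.

```python
# Illustrative back-of-envelope estimate of operational training emissions:
# emissions (kgCO2e) = energy consumed (kWh) x grid carbon intensity (kgCO2e/kWh).
# All numbers below are assumed placeholders, not real measurements.

TRAINING_ENERGY_KWH = 1_200_000  # assumed energy for one large training run

GRID_INTENSITY_KGCO2E_PER_KWH = {
    "coal_heavy_grid": 0.70,   # illustrative fossil-heavy grid
    "average_grid": 0.40,      # illustrative mixed grid
    "hydro_heavy_grid": 0.02,  # illustrative low-carbon grid
}


def training_emissions_kg(energy_kwh: float, intensity: float) -> float:
    """Operational emissions only; excludes embodied hardware emissions."""
    return energy_kwh * intensity


baseline = training_emissions_kg(
    TRAINING_ENERGY_KWH, GRID_INTENSITY_KGCO2E_PER_KWH["coal_heavy_grid"]
)
for grid, intensity in GRID_INTENSITY_KGCO2E_PER_KWH.items():
    emissions = training_emissions_kg(TRAINING_ENERGY_KWH, intensity)
    print(f"{grid}: {emissions / 1000:,.0f} tCO2e "
          f"({baseline / emissions:.0f}x lower than the coal-heavy baseline)")
```

With these assumed figures, the same training run emits roughly 35 times less on the hydro-heavy grid than on the coal-heavy one, which is the kind of gap behind the "factor of 40" estimates cited above.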
There is also scope to reduce the amount of energy that a given AI project actually requires. Computationally efficient algorithms will pay dividends, for example. It should become standard practice for developers to track training time and other relevant parameters as they assess the performance of new initiatives.
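One way to make that routine is to log wall-clock training time and estimated emissions alongside accuracy for every experiment. The minimal sketch below assumes the open-source codecarbon package and a hypothetical train_model() entry point; it is an illustration of the practice, not a recommended toolchain.

```python
# A minimal sketch of logging training time alongside estimated emissions,
# using the open-source codecarbon package (pip install codecarbon).
# train_model() is a hypothetical stand-in for a team's own training code.

import time

from codecarbon import EmissionsTracker


def train_model() -> float:
    """Placeholder for an actual training run; returns a validation metric."""
    time.sleep(1)  # stand-in for real work
    return 0.92


tracker = EmissionsTracker(project_name="example-experiment")
tracker.start()
start = time.perf_counter()
try:
    accuracy = train_model()
finally:
    training_time_s = time.perf_counter() - start
    emissions_kg = tracker.stop()  # estimated kgCO2e for the run

# Report efficiency metrics next to the usual accuracy numbers, so energy
# and time become routine parts of performance assessment.
print(f"accuracy={accuracy:.3f}  "
      f"training_time={training_time_s:.1f}s  "
      f"estimated_emissions={emissions_kg:.4f} kgCO2e")
```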
For maximum effect, it will be necessary to combine multiple strategies. A recent paper from Google and the University of California, Berkeley, recommends an approach centred on the "four Ms": model, machine, mechanisation and mapping. It advocates more refined software algorithms, more energy-efficient computer hardware, greater energy efficiency within the data centre itself, and better siting of data centres for access to cleaner energy.
There is a role for policymakers here too, with approaches that combine carrot and stick. The former might include more use of tax incentives for the development of data centres in locations with good access to hydro and solar power. The latter might include tougher regulation on disclosure: airlines, which are required to make data available that enables analysis of emissions from individual flights, may for example wonder why the technology sector is able to hide behind aggregated numbers.
Get all of this right and it is possible to make a significant difference. The Google-UC Berkeley paper points to the Generalist Language Model (GLaM), launched a few months after the higher-profile debut of GPT-3. By optimising the four Ms, the GLaM project reduced carbon emissions by a factor of 14. The same paper suggests that improvements in each of the four Ms over the four years to 2021 meant the average data centre opened that year would consume so much less energy that it would produce 747 times fewer carbon emissions than the average facility in 2017.
Will this be enough to offset surging demand for GenAI applications, and to counter the growing criticism of the climate change impacts of this rapidly evolving technology? It represents a beginning. Responsible GenAI businesses will build on these foundations as new opportunities for mitigation emerge, but they must make a start now.