The AI Dilemma: Google's Greenhouse Gas Emissions Surge

This month, Google released its latest environmental report, which showed its greenhouse gas emissions are up 48% since 2019. This rise is largely attributed to the company's extensive use of artificial intelligence (AI). While Google has long championed sustainability, the compute-intensive nature of training and running language models is presenting a significant environmental challenge.

The question is: Is AI in its current form just too expensive from an environmental perspective? 

The silver lining for Google is that its business goals and its environmental goals are aligned. The company aims to achieve net-zero emissions by 2030. While Google acknowledges that AI won't make this easy, it has a financial incentive to try to make it work. Delivering a single search result with language models currently costs the company 10 times more than it did before the AI boom. That massive cost increase certainly isn't sustainable from a business perspective. To bring down its own costs, Google has no choice but to be part of the environmental solution.

Companies that build large language models (LLMs) primarily to show what they can do are less concerned with day-to-day operational costs. OpenAI, for example, cares less about product development than about winning the Artificial General Intelligence (AGI) race. Unlike Google, it is not a consumer organization; its leaders are more interested in demonstrating a set of ideas than in building a sustainable product.

When it comes to building real products with LLMs, smaller, more efficient models that can operate on less powerful hardware might be where the real wins are. 

One company that we can look to for efficiency is Apple, which is pioneering a different approach to AI by developing smaller language models that can run on smaller devices, including iPhones. These models are more task-focused and do not require the massive computational power needed to train and run the larger models. By using less raw compute, they present a more sustainable path forward, demonstrating that it is possible to achieve technological advances without exorbitant energy consumption.

It is heartening to see attention being paid to the environmental impact of AI. The hope is that this kind of attention can also be applied to the broader ethical and societal challenges that we are currently facing with AI. It would be great if we worried as much about the human impact of these systems as we do about their environmental impact. Issues of bias and fairness, the proliferation of deepfakes, and the erosion of human decision-making capabilities still need to be addressed.

As we navigate this complex landscape, the hope is that continued vigilance and innovation will lead us toward a more sustainable and equitable future. 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
