Google Defends Gemini AI’s Efficiency, Claims Minimal Power and Water Use Per Prompt


Google claims Gemini AI uses only a few drops of water and minimal electricity per query, but experts remain unconvinced.

Google is making bold claims about the efficiency of its Gemini AI assistant, positioning it as one of the most sustainable large-scale AI systems in operation today. According to a new company study, the median Gemini text prompt consumes just 0.26 millilitres of water (roughly five drops) and about as much electricity as running a television for less than nine seconds.

The company further estimates this level of consumption produces only 0.03 grams of carbon dioxide emissions per query. Google says these numbers are drastically lower than earlier research on generative AI, which suggested that large-scale data centres required significantly higher water and electricity resources.

Highlighting its advances, Google claims to have achieved a 33-fold improvement in energy efficiency per prompt, alongside a 44-fold reduction in carbon footprint over the past year. These figures, it argues, demonstrate that AI can evolve into a far more sustainable technology than critics suggest.

However, independent experts are urging caution before celebrating these numbers. The Verge reports that researchers have raised concerns about Google’s methodology, pointing to gaps that may downplay the true environmental costs. Shaolei Ren, associate professor at the University of California, Riverside, whose work Google cites in its paper, argues that the study "hides the critical information" and "spreads the wrong message to the world."

Ren points out that Google only considers the water used directly for cooling servers. What it leaves out, he explains, is the much larger amount of water needed to generate the electricity powering those servers. Power plants, whether fossil-fuel based or nuclear, rely heavily on water for cooling systems and steam production. With AI-driven electricity demand surging, this indirect water usage represents a critical—yet largely invisible—part of the environmental equation.
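Ren's objection can be illustrated with a back-of-the-envelope calculation. The sketch below uses Google's reported 0.26 ml direct figure, but the per-prompt energy and the water intensity of electricity generation are hypothetical placeholder values chosen only to show how indirect water use can rival or exceed the direct figure:

```python
# Illustrative only: the energy and water-intensity figures below are
# hypothetical assumptions, not numbers from Google's study.

DIRECT_WATER_ML = 0.26            # Google's reported on-site cooling water per prompt
ENERGY_KWH = 0.00024              # assumed energy per prompt (hypothetical)
WATER_INTENSITY_L_PER_KWH = 2.0   # assumed water used per kWh generated (hypothetical)

# Indirect water: water consumed at the power plant to generate the electricity
indirect_water_ml = ENERGY_KWH * WATER_INTENSITY_L_PER_KWH * 1000  # litres -> ml
total_water_ml = DIRECT_WATER_ML + indirect_water_ml

print(f"direct:   {DIRECT_WATER_ML:.2f} ml")
print(f"indirect: {indirect_water_ml:.2f} ml")
print(f"total:    {total_water_ml:.2f} ml")
```

Under these assumed inputs the indirect share is larger than the direct one, which is exactly the "tip of the iceberg" dynamic the researchers describe.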

"You only see the tip of the iceberg," said Alex de Vries-Gao, founder of Digiconomist and a researcher at Vrije Universiteit Amsterdam, stressing that indirect usage must be included for a realistic picture.

The study’s treatment of emissions data has also drawn scrutiny. Google reportedly uses a "market-based" measure that factors in its renewable energy purchases and commitments. Critics argue this can paint an overly optimistic picture, since it overlooks "location-based" emissions—the actual energy mix in the regional grids where Google’s data centres operate. Researchers note that location-based data often reflects a much higher carbon footprint.
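The gap between the two accounting methods is easy to see in a toy calculation. Both carbon-intensity values below are hypothetical illustration numbers, not figures from Google or its critics:

```python
# Toy comparison of the two emissions-accounting methods the critics contrast.
# Both intensity values are hypothetical, for illustration only.

energy_kwh = 1000.0                  # electricity consumed by a data centre (example)
grid_intensity_g_per_kwh = 400.0     # location-based: actual regional grid mix (assumed)
contract_intensity_g_per_kwh = 50.0  # market-based: after renewable purchases (assumed)

location_based_kg = energy_kwh * grid_intensity_g_per_kwh / 1000
market_based_kg = energy_kwh * contract_intensity_g_per_kwh / 1000

print(f"location-based: {location_based_kg:.0f} kg CO2")
print(f"market-based:   {market_based_kg:.0f} kg CO2")
```

With the same electricity use, the market-based figure comes out many times lower, which is the optimistic-picture effect the researchers flag.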

Another contentious point is Google’s reliance on a "median prompt" for its calculations. The company says this approach avoids distortion from unusually large or complex prompts. But academics argue that averages, not medians, are more consistent for comparison, and without transparency around prompt length or token counts, Google’s numbers remain difficult to benchmark against prior studies.
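The median-versus-mean dispute matters because prompt workloads tend to be heavily skewed. A minimal sketch with made-up per-prompt energy values shows how a long tail of large prompts pulls the mean well above the median:

```python
from statistics import mean, median

# Hypothetical per-prompt energy draws (Wh): mostly small prompts plus a
# long tail of large ones. The values are invented for illustration.
prompt_energy_wh = [0.1, 0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 5.0, 12.0, 40.0]

# With skew like this, the median understates the average cost per prompt.
print(f"median: {median(prompt_energy_wh):.2f} Wh")
print(f"mean:   {mean(prompt_energy_wh):.2f} Wh")
```

Reporting only the median in a distribution like this makes the typical prompt look far cheaper than the fleet-wide average, which is why critics want token counts and the full distribution disclosed.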

Previous estimates suggested a single AI prompt could consume as much as 50 millilitres of water. Google’s far lower figure of 0.26 millilitres appears striking, but Ren counters that his work accounted for both direct and indirect consumption, whereas Google’s study does not.

The company has not yet submitted its study for peer review but says it remains open to doing so. In a blog post, Amin Vahdat, VP of AI & Infrastructure at Google Cloud, and Jeff Dean, chief scientist of Google DeepMind, wrote: "While we’re proud of the innovation behind our efficiency gains so far, we’re committed to continuing substantial improvements in the years ahead."
