Dive Brief:
- A server rack running on the latest generation of Nvidia chips “doesn’t need a chiller” to remain cool, Nvidia CEO Jensen Huang said at the Consumer Electronics Show in Las Vegas on Monday.
- Shares of Johnson Controls, Carrier and other HVAC companies banking on robust data center demand fell at market open on Tuesday. Johnson Controls, which has recently shifted focus from residential and light commercial customers to North American data center operators, was particularly hard hit.
- Huang said Nvidia was designing its Vera Rubin computing infrastructure for a “power-constrained world.” The public response to his remarks echoed the market reaction to the debut of the highly efficient DeepSeek large language model early last year, when shares of cooling and power providers sold off amid worries that future AI data centers would consume less energy than expected.
Dive Insight:
Market concerns around DeepSeek proved short-lived as investors, equity analysts and corporate leaders banked on the primacy of the Jevons paradox: the notion that efficiency gains lower costs, stoke demand and ultimately increase total energy consumption over time.
It’s unclear whether the same logic applies to the Vera Rubin architecture, an assembly named for the 20th-century astronomer whose galaxy rotation measurements provided key evidence for the existence of dark matter. With the notable exception of Johnson Controls, HVAC companies with data center exposure largely recovered their initial losses in Wednesday and Thursday trading.
At CES, Huang said Vera Rubin consumes twice as much power overall as its predecessor, Grace Blackwell, while delivering five times the peak inference performance and 3.5 times the peak training performance.
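Taken at face value, and assuming the power and performance multiples scale together at the rack level (a back-of-the-envelope reading, not a reported benchmark), those figures imply a rough per-watt gain:

\[
\frac{5\times \text{ peak inference}}{2\times \text{ power}} = 2.5\times \text{ inference performance per watt},
\qquad
\frac{3.5\times \text{ peak training}}{2\times \text{ power}} = 1.75\times \text{ training performance per watt}
\]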
Huang described Nvidia’s latest stack as a “radical, extreme” design that uses significantly less power to generate each token, the basic unit of output from AI models. With no cables or water pipes, the chassis supporting the system took five minutes to assemble, he said. The Grace Blackwell chassis, by contrast, had 43 cables and six water pipes and took two hours to assemble.
The company’s Grace Blackwell architecture was named for David Blackwell, a game theory pioneer. Grace refers to the company’s CPU technology, itself named for computing pioneer Grace Hopper.
Despite higher overall power use and a denser architecture, Huang said, the new system had similar airflow rates and the same water temperature — 45 degrees Celsius or 113 degrees Fahrenheit — as Grace Blackwell.
“At 45 degrees Celsius, the data center doesn't need a chiller,” he said. “We essentially used hot water to cool this supercomputer, with incredibly high efficiency.”
Huang also said the Vera Rubin rack would be “100% liquid-cooled.” That remark confirmed what data center cooling experts have told Facilities Dive over the past few years — that next-generation server racks would soon throw off too much heat to cool with forced air alone.
One of those experts, LiquidStack CEO Joe Capes, said in an email that Vera Rubin would likely accelerate demand for cooling solutions developed by his company and its competitors.
“Vera Rubin is a clear signal that the industry has crossed a threshold where liquid cooling is purposely integrated with the newest market-leading chips supporting AI deployments,” Capes said. “More efficient chips enable more compute per rack, which concentrates heat and raises the importance of liquid cooling.”
Capes added that chillers could still play a role in data halls running exclusively on Vera Rubin architecture, even with the high temperatures of their secondary water loops. Those higher loop temperatures may also expand opportunities to use dry coolers for free cooling, rejecting heat into cooler outdoor air rather than relying on more energy-intensive mechanical cooling systems, he said.
More free cooling could also reduce the need for evaporative cooling, an efficient but water-intensive process that has raised concerns about the sustainability of data centers’ water usage, Capes said. That was one of several concerns raised by neighbors of a proposed 429-acre data center campus near Shelbyville, Indiana, which local planning officials voted to reject Wednesday.