John Frey of HPE Explains Sustainable Supercomputing

In our Sustainability by Design series, we’re profiling people who have used great design to solve the problem of waste in our environment. We’re asking a variety of designers how they minimize environmental impact through new thinking, new products, and new designs.

Imagine cooling a can of soda by putting it on your dining room table and turning down the air conditioning for the whole house. It sounds like a ludicrous scenario, but according to John Frey, the sustainable-innovation technologist at Hewlett Packard Enterprise (HPE), that’s essentially how we cool IT today: entire rooms are chilled to control the temperature of server equipment. And that becomes something of a Catch-22.

Solutions to the world’s most complex problems require the world’s most complex computing systems, but complex, high-performance computing is an energy-intensive proposition. That makes studying climate change and human impact on global resources a bit ironic: to process the massive amounts of data needed to build accurate models, scientists have to consume significant amounts of energy, much of it just to run those cooling systems.

HPE technologists set out to combat this energy problem by approaching supercomputing from a new angle. Instead of chilling entire rooms and using fans to move air around, they looked to the way cars cool their engines. By harnessing a water-based heat-management system loosely related to a car’s radiator, they were able to cut the energy required to cool a supercomputer by a wide margin.

Fighting Fire with Water

When supercomputers run intense computations, the energy flowing through the processors generates a significant amount of heat. John says, “Often, you can’t run processors at full performance because you don’t have the ability to cool them effectively. And when you exceed a certain temperature, the processors start failing, so you start destroying the system because of your inability to shed heat.”

HPE created a system that’s able to adjust the water flow to increase cooling when it’s needed, and then scale back when the supercomputer is comparatively idle. In addition to increasing efficiency by delivering the cooling directly to the areas that need it, the ability to control the water flow means that system administrators can save significant amounts of energy during slow times.

The heat-carrying capacity of air is fairly low; this is why you don’t suffer third-degree burns from the blast of hot air when you open the door of a 400-degree oven. For that same reason, however, fighting heat with cold air requires massive volumes of very cold air. HPE’s system, on the other hand, absorbs the heat with water, meaning that much less coolant needs to circulate in the system, and that means smaller, more efficient equipment rather than room-sized refrigerators.
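As a rough back-of-the-envelope illustration (using textbook property values at room temperature, not figures from HPE), comparing how much heat a given volume of water can absorb versus the same volume of air shows why a thin water loop can replace room-sized flows of chilled air:

```python
# Rough comparison of how much heat one cubic meter of coolant can
# absorb per degree Celsius of temperature rise.
# Textbook values at roughly 25 C; illustrative only.

air_density = 1.2       # kg/m^3
air_cp = 1005.0         # J/(kg*K), specific heat of air
water_density = 997.0   # kg/m^3
water_cp = 4186.0       # J/(kg*K), specific heat of water

# Volumetric heat capacity = density * specific heat capacity
air_vol_cap = air_density * air_cp        # J/(m^3*K)
water_vol_cap = water_density * water_cp  # J/(m^3*K)

ratio = water_vol_cap / air_vol_cap
print(f"Water absorbs ~{ratio:.0f}x more heat per unit volume than air")
# Roughly 3,500x by this metric -- which is why liquid cooling needs
# far less circulating coolant than air cooling does.
```

Note that this compares heat absorbed per unit volume; other metrics (such as how effectively heat transfers into the coolant) give different ratios, which is likely closer to the “100 times” figure John cites below.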

“Water is 100 times more efficient at cooling than air is,” says John. “At the system level, you save about 28 percent of energy by switching to water. Additionally, water cools much more effectively, so processors can run at a much higher level of performance without the risk of overheating.”

The Power of the Sun

Having dealt with one of the energy consumption aspects of data centers, the HPE team could have patted themselves on the back and stopped. But instead, they decided to tackle another major energy need — the power to run the computer systems themselves — and use solar power to provide energy for the entire unit.

Modern processors are already highly energy efficient, thanks to extensive innovations related to mobile computing. The energy requirements for server farms and supercomputers are extremely high because of the number of processors all working simultaneously, not because of inefficiency in the individual processors. To address the energy consumption needs of a supercomputer, HPE had to think outside the box — in fact, outside the entire building.

John says, “Electricity from solar panels is direct current (DC), and computing equipment also operates on DC. But what typically happens in data centers is you start with the power grid, which is alternating current (AC), and then you go back and forth between AC and DC four to five times in a series of transformers before you actually run it inside to power a server. Every time you convert the power, there is inefficiency.”
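To see why those repeated conversions matter, here is a hypothetical calculation. The 95 percent per-stage efficiency is an illustrative assumption, not an HPE figure, but it shows how even small per-stage losses compound across a conversion chain:

```python
# Illustrative only: assume each AC<->DC conversion stage is 95%
# efficient (a ballpark assumption, not a figure from HPE or TACC).

STAGE_EFFICIENCY = 0.95

def delivered_fraction(num_conversions: int) -> float:
    """Fraction of input power that survives a chain of conversions."""
    return STAGE_EFFICIENCY ** num_conversions

grid_path = delivered_fraction(5)   # grid AC path: up to five conversions
direct_dc = delivered_fraction(1)   # direct DC path: one regulation stage

print(f"Five-conversion grid path delivers {grid_path:.1%} of input power")
print(f"Single-stage DC path delivers {direct_dc:.1%}")
# Under these assumptions, the five-stage chain wastes more than 20%
# of the power before it ever reaches a server.
```

The exact numbers depend on the converters involved, but the compounding effect is the point: eliminating four conversions removes four rounds of loss.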

In partnership with the Texas Advanced Computing Center (TACC), the New Energy and Industrial Technology Development Organization (NEDO), and NTT Facilities, the HPE team designed a supercomputer whose supporting infrastructure all ran on a shared DC network. John says, “One of our design criteria was to start with high-voltage direct current and then leave the power there all the way to powering the supercomputer. So we eliminated four power conversions, which improved efficiency.”

The result was that the group could install a specialized, high-voltage solar system to supply DC power for the whole unit, creating a hyper-efficient supercomputer that runs on the power of the sun — something never done before.

Shaping a Sustainable Future

The HPE team is always looking for new ways to push the envelope, and they are currently researching the use of hydrogen fuel cells with NREL, the National Renewable Energy Laboratory. John’s firm belief is that “no company is going to be able to do this themselves; partnership is leadership.” And that starts by learning from one another. Small steps can make big differences in the effort to save resources and shape a sustainable future, so let’s start by looking for better ways to cool our soda cans.
