Making Big Cuts in Data Center Energy Use

The energy used by our nation’s servers and data centers is significant. In a 2007 report, the Environmental Protection Agency estimated that this sector consumed about 61 billion kilowatt-hours (kWh) in 2006, accounting for 1.5 percent of total U.S. electricity consumption. While that figure was more than double the electricity consumed for this purpose in 2000, recent work by RMI Senior Fellow Jonathan Koomey, a researcher and consulting professor at Stanford University, found that this rapid growth has since slowed because of the economic recession. At the same time, the economic climate has led data center owner/operators to focus on improving the energy efficiency of their existing facilities.

So how much room for improvement is there within this sector? The National Snow and Ice Data Center in Boulder, Colorado, cut its energy use by more than 90 percent in a recent remodel (see the case study below). More broadly, Koomey’s study indicates that the typical data center has a PUE (see sidebar) between 1.83 and 1.92. If all losses were eliminated, the PUE would be 1.0. Impossible to get close to that value, right? A survey following a 2011 conference of information infrastructure professionals asked, “…what data center efficiency level will be considered average over the next five years?”

More than 20 percent of respondents expected the average PUE to fall in the 1.4 to 1.5 range, and 54 percent were optimistic that facility efficiency would improve enough to reach a PUE in the 1.2 to 1.3 range.

Further, consider this: Google’s average PUE across its data centers is only 1.14. Even more impressive, Google’s PUE calculations include transmission and distribution losses from the electric utility. Google has developed its own efficient server designs, optimized its power distribution, and employed many strategies to drastically reduce cooling energy consumption, including a unique approach that uses recycled water to cool a data center in a hot, humid climate.
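
To make those PUE figures concrete: PUE is total facility energy divided by the energy delivered to the IT equipment, so the share of power lost to overhead falls straight out of the ratio. The short sketch below is illustrative only (the numbers are not tied to any of the facilities or studies cited); it compares a typical PUE of 1.92 with Google’s 1.14.

```python
def overhead_fraction(pue):
    """Fraction of total facility energy that goes to non-IT overhead
    (cooling, power distribution losses, lighting) for a given PUE,
    where PUE = total facility energy / IT equipment energy."""
    return (pue - 1.0) / pue

for pue in (1.92, 1.14, 1.0):
    print(f"PUE {pue:.2f}: {overhead_fraction(pue):.0%} of facility energy is overhead")

# PUE 1.92: 48% of facility energy is overhead
# PUE 1.14: 12% of facility energy is overhead
# PUE 1.00: 0% of facility energy is overhead
```

In other words, a typical facility spends nearly half of its electricity on overhead, while a PUE of 1.14 cuts that share to roughly one-eighth.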

So where does the energy go in data centers?

For every unit of power delivered to the IT equipment, additional energy is used to cool and light the rooms that house the servers. Energy is also lost to inefficient power supplies, idling servers, unnecessary processes, and bloatware (pre-installed programs that aren’t needed or wanted). In fact, about 65 percent of the energy used in a data center or server room goes to space cooling and electrical (transformer, UPS, distribution, etc.) losses. Several efficiency strategies can reduce these losses.

For more information on best practices for designing low-energy data centers, refer to this Best Practices Guide from the Federal Energy Management Program.

Reducing Cooling Loads

About half of the energy used in data centers goes to cooling and dehumidification, which presents a huge opportunity for savings. First, focus on reducing the cooling loads in the space. Once the load has been reduced through passive measures and smart design, select the most efficient and appropriate technologies to meet what remains. Reducing loads is often the cheapest and most effective way to save energy, so we focus on those strategies here.

Cooling loads in data centers can be reduced in a number of ways: more efficient servers and power supplies, virtualization, and consolidation of racks into hot and cold aisles. In its simplest form, hot aisle/cold aisle design involves lining up server racks in alternating rows, with cold air intakes facing one way and hot air exhausts facing the other. In more sophisticated designs, a containment system (anything from plastic sheeting to commercial products with variable-speed fans) isolates the aisles and prevents hot and cold air from mixing.

But one of the simplest ways to save energy in a data center is simply to raise the temperature. It’s a myth that data centers must be kept cold for optimum equipment performance. You can raise the cold aisle setpoint of a data center to 80°F or higher, significantly reducing energy use while still conforming to both the American Society of Heating, Refrigerating and Air-Conditioning Engineers’ (ASHRAE) recommendations and most IT equipment manufacturers’ specs. In 2004, ASHRAE Technical Committee 9.9 (TC 9.9) standardized temperature (68 to 77°F) and humidity guidelines for data centers. In 2008, TC 9.9 widened the temperature range (64.4 to 80.6°F), enabling a growing number of locations throughout the world to operate with more hours of economizer use.

For even more energy savings, refer to ASHRAE’s 2011 Thermal Guidelines for Data Processing Environments, which presents an even wider range of allowable temperatures within certain classes of server equipment.
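
As a rough illustration of why the widened guidelines matter, the sketch below counts how many hours in a day outside air alone would satisfy the 2004 range versus the 2008 range. It is deliberately simplified: only dry-bulb temperature is checked, the hourly temperatures are hypothetical, and real economizer controls also account for humidity, dew point, and filtration.

```python
# Simplified airside-economizer check based on dry-bulb temperature alone.
# Real controls also weigh humidity, dew point, filtration, and fan energy.
ASHRAE_2004_RANGE = (68.0, 77.0)    # deg F, TC 9.9 guidelines from 2004
ASHRAE_2008_RANGE = (64.4, 80.6)    # deg F, widened range from 2008

def economizer_hours(outdoor_temps_f, temp_range):
    """Count the hours when outside air alone falls within the given
    temperature range (temperatures sampled once per hour)."""
    low, high = temp_range
    return sum(1 for t in outdoor_temps_f if low <= t <= high)

# Hypothetical hourly temperatures for a mild day (made-up values):
day = [58, 57, 56, 55, 55, 56, 60, 63, 66, 69, 72, 75,
       77, 78, 79, 78, 76, 73, 70, 67, 64, 62, 60, 59]

print(economizer_hours(day, ASHRAE_2004_RANGE))  # 7 hours
print(economizer_hours(day, ASHRAE_2008_RANGE))  # 12 hours
```

Every extra hour inside the allowable range is an hour the compressors can stay off.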

Case Study: NSIDC Green Data Center

Just up the road from RMI’s office in Boulder, the National Snow and Ice Data Center (NSIDC) runs around the clock, serving 120 terabytes of scientific data to researchers across the globe. Cooling the server room used to require over 300,000 kWh of electricity per year, enough to power 34 homes. The data center was recently redesigned, with all major equipment sourced within 20 miles of the site, and the redesign cut the energy used for cooling by more than 90 percent. The new Coolerado system, essentially a superefficient indirect evaporative cooler built around a patented heat and mass exchanger, uses only 2,560 kWh per year.
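
Taking the reported figures at face value, the arithmetic behind that reduction is straightforward; the quick check below uses only the numbers quoted above.

```python
old_cooling_kwh = 300_000   # annual cooling energy before the retrofit (as reported)
new_cooling_kwh = 2_560     # annual energy use of the Coolerado system (as reported)

savings_kwh = old_cooling_kwh - new_cooling_kwh
print(f"Annual savings: {savings_kwh:,} kWh "
      f"({savings_kwh / old_cooling_kwh:.1%} reduction in cooling energy)")
# Annual savings: 297,440 kWh (99.1% reduction in cooling energy)
```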

Before the engineers from RMH Group could use the Coolerado in lieu of compressor-based air conditioning, they had to drastically reduce the cooling loads. They accomplished this with the following strategies:

  • Less stringent temperature and humidity setpoints for the server room—this design meets the ASHRAE Allowable Class 1 Computing Environment setpoints (see Figure 2)
  • Airside economizers (enabled to run far more often within the expanded temperature ranges)
  • Virtualization of servers
  • Rearrangement and consolidation into hot and cold aisles

The remaining energy required for cooling and for powering the servers is offset by the onsite 50.4 kW solar PV system. In addition to the clean energy produced onsite, a battery backup system provides added security in the case of a power outage.
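
For a rough sense of scale, the sketch below estimates the array’s annual output. The 18 percent capacity factor is an assumption (a typical value for fixed-tilt PV in Colorado), not a figure reported for this system.

```python
pv_capacity_kw = 50.4            # onsite PV array size (as reported)
assumed_capacity_factor = 0.18   # ASSUMPTION: typical fixed-tilt PV in Colorado
hours_per_year = 8760

annual_output_kwh = pv_capacity_kw * assumed_capacity_factor * hours_per_year
print(f"Estimated PV output: {annual_output_kwh:,.0f} kWh/year")
# Roughly 79,000 kWh/year under this assumption, far more than the 2,560 kWh/year
# the cooling system needs; the balance offsets server power.
```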

Rick Osbaugh, the lead design engineer from RMH Group, cites three key factors that enabled such huge energy savings:

  • A Neighborly Inspiration: The design process began with a collaboration between NREL and NASA on applying a technology never before used in a data center. The collaboration was born from two neighbors living off the grid in Idaho Springs who also happened to be researchers at NREL and NASA.
  • Motivated Client: The client, along with the entire NSIDC staff, wanted to set an example for the industry and pushed the engineers toward an aggressive low-energy solution. To minimize downtime, NSIDC staff members all pitched in to help ensure the entire retrofit was completed in only 90 hours.
  • Taking Risks: Finally, the right team was assembled to implement a design that pushes the envelope. The owner and engineer were willing to assume risks associated with something never done before.

Case Study: Top 5 Search Engine Company

In 2011, Mortenson Construction completed an 85,000-square-foot data center expansion for a top-five search engine company in Washington state. This scalable, modular system supports a 6 MW critical IT load and has a PUE of only 1.08! That level of efficiency was made possible by a virtual design process that relied on extensive 3D modeling, coupled with an innovative cooling strategy. Referred to as “computing coops,” the pre-engineered metal buildings borrow the free-air cooling concepts of a chicken coop: outside air is drawn in through the sides of the building, passes across the servers, and is exhausted through the cupola, creating a chimney effect.
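
To put that 1.08 in perspective, the overhead it implies for a 6 MW IT load can be worked out directly; the comparison value of 1.9 below is simply the typical PUE cited earlier in this article.

```python
it_load_mw = 6.0   # critical IT load (as reported)

for pue in (1.08, 1.9):   # 1.08 for this facility; 1.9 is a typical value for comparison
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"PUE {pue}: {total_mw:.2f} MW total draw, {overhead_mw:.2f} MW of overhead")

# PUE 1.08: 6.48 MW total draw, 0.48 MW of overhead
# PUE 1.9: 11.40 MW total draw, 5.40 MW of overhead
```

At a typical PUE, the same IT load would drag along more than ten times as much overhead power.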

Despite a tight construction schedule (only eight months), the design team delivered an ultraefficient data center while saving over $5 million compared with the original project budget.

A special thanks to Rick Osbaugh of the RMH Group and Hansel Bradley of Mortenson Construction for contributing content to this article.