Best practices for energy-efficient data centers identified through case studies and demonstration projects

William Tschudi

ABSTRACT

Energy benchmarking is useful in comparing the performance of data center facilities and can be a powerful tool to help identify why certain energy-intensive systems perform better than others. In studies of more than 22 data centers, analysis of how the better-performing systems achieved their efficiency revealed a number of "best practices." Five of the best practices that can have a large impact on overall energy efficiency in a data center are discussed in detail, using benchmark comparisons and examples from case study reports and utility demonstration projects.

Data center infrastructure is characterized by specialized HVAC and electrical distribution systems, which often include redundancy for reliability. How systems are sized, designed, and operated can have a large impact on capital and operating costs. Five of the best practices observed in operating data centers and demonstrated as pilot projects can provide guidance for future new or retrofit design.

INTRODUCTION

As projections for IT equipment heat densities continue to rise, there is renewed interest in finding solutions to minimize data center electrical requirements. Energy benchmarking in data centers is very useful for understanding the operation and energy requirements of the center as a whole and of the individual systems and components that make up the center. High-level information obtained through benchmarking can be helpful in many ways, such as ensuring adequate infrastructure and reliability, planning for future growth, projecting utility costs, and negotiating utility contracts. Then, by "peeling the onion" and examining the end uses within the data center, additional information on each system's energy performance is revealed. Finally, by looking within the systems, the energy performance of key components can be evaluated.

When energy performance is compared at the end-use level, an interesting picture emerges. Not all systems and components are created equal. Large variation in energy use and relative efficiency is evident when infrastructure systems and components are compared with their counterparts in other data centers. Similarly, the energy efficiency of the IT equipment itself varies considerably while performing similar computing work. By examining the better-performing systems and components, certain designs and operating strategies, or "best practices," that lead to more energy-efficient operation become evident. A review of the energy performance of more than 22 data centers helped to identify over a dozen best practices. This paper discusses five of these that contributed to better overall energy efficiency in the data centers that were studied.

AIR MANAGEMENT

Air cooling of electronic (IT) equipment in data centers has been the standard for decades, yet it has taken the rising energy intensities of the past few years to expose the difficulties in providing optimal amounts of cool air. More and more centers are finding that their ability to cool energy-intensive racks of tightly spaced servers is being challenged. In the past, solutions to overheating problems often involved lowering the average supply air temperature (effectively overcooling the entire space when only a local area was exhibiting problems) or adding computer room air conditioners to move more air and provide additional cooling. Frequently, these measures did not solve the problem of localized overheating. Air cooling today is evolving through efforts to provide adequate cooling to the IT equipment. Energy benchmarking and case studies in data centers (Figure 1) have shown that the effectiveness of HVAC systems (of which air delivery is a major component) varies significantly based on several factors, including whether the air is optimally cooled, optimally delivered to the inlet of the IT equipment, and optimally returned to the computer room air conditioner.

In data centers with lower HVAC power consumption, many of the pitfalls in air delivery and return were avoided. The better-performing systems had designed-in or modified configurations such that needed volumes of air were delivered to the IT equipment and then returned to the computer room air conditioners. A number of strategies were at play, including:

* Elimination of air leakage from unwanted areas of raised floors or plenums

* Separation of hot and cold aisles through use of barriers and blanking plates

[FIGURE 1 OMITTED]

[FIGURE 2 OMITTED]

* Avoidance of underfloor blockages in raised floor designs

* Careful placement of floor tiles in underfloor systems or diffusers in overhead systems to avoid short-circuiting back to the computer room air conditioners

* Deployment of rack systems that do not impede airflow yet isolate hot and cold aisles

* Optimization of airflow through use of airflow modeling programs

* Collection of hot air through plenums or high ceilings and direct return of that air to the computer room air conditioners

Case studies, demonstration projects, and research in airflow optimization have all helped to focus the data center community on the need to carefully consider how air is supplied in data centers, especially when dealing with very high power densities. A utility demonstration project (PG&E 2006) demonstrated the cooling benefit and calculated the energy savings made possible by totally enclosing the "cold" aisle in a data center (Figures 2 and 3). The demonstration achieved the desired isolation with inexpensive materials; commercial blanking panels, doors, and similar products are now becoming available for this purpose. In this demonstration, energy savings of approximately 16% to 26% were estimated simply from enclosing the cold aisle. Other computational fluid dynamics (CFD) analyses and testing have confirmed that there are significant benefits to completely separating hot and cold airstreams (Tschudi and Liesman 2003).

Frequently, when raised floors are used for air delivery, congestion occurs in areas where the floor height is inadequate for the network and power cabling and the chilled-water or fire-protection piping that must be routed beneath the floor. Figure 4 illustrates a fairly typical situation in which airflow is substantially blocked by underfloor cabling and piping. Better performance was observed when deeper floors were used and the placement of all equipment under the floor was well coordinated. Network cabling can restrict airflow into IT equipment with either underfloor or overhead air distribution systems (Figures 4 and 5).

[FIGURE 3 OMITTED]

[FIGURE 4 OMITTED]

AIR ECONOMIZERS

Air economizers have a long history of successful deployment in commercial buildings, yet their use in data center facilities is not as prevalent. Case studies of centers that successfully use outside-air economizers consistently showed that they were more energy efficient than their counterparts that did not use economizers. Because data centers typically operate continuously, in many climates there are a significant number of hours per year when outside conditions are sufficient to provide part or all of the cooling. When outside air is 55°F or less, chillers (compressors) may not be required to run at all. Depending on climate, some level of humidification may be required, or the economizer may need to be locked out if excessive humidification would otherwise be needed.

However, this strategy is somewhat controversial, and data center professionals are split in their perception of the risk involved. Some centers routinely use outside air without apparent complications, but others are concerned about contamination and environmental control for the IT equipment in the room. Those who endorse outside-air economizers often point to reliability improvement in addition to energy savings, because potential points of failure (e.g., pumps, chillers) are eliminated or relied on less, and the "closed" air-cooling system remains available as a backup while the economizer is in use. Some code jurisdictions have mandated the use of economizers in data centers.

Several data centers using outside air for cooling were included in the energy benchmarking studies and, as expected, achieved higher efficiency than centers with closed systems. The physical arrangement of each center using air economizers was unique. Some used traditional raised-floor distribution; others distributed air from overhead without raised floors. Some air systems combined traditional computer room air conditioners with "house" systems that could provide more outside air than is typical in closed data centers. These centers were typically housed in commercial buildings.

Conventional data center design paradigms had to be challenged in order to accommodate air economizers. Adequate access to the outdoors had to be provided in the architectural design if outside air was to be used for cooling. Large central air-handling units with roof intakes or sidewall louvers were most commonly used, although some internally located computer room air-conditioning (CRAC) units now offer economizer capability when provided with appropriate ducting to the outside. The use of large air handlers offered other benefits in that they were typically more efficient than smaller computer room air conditioners and had the ability to modulate airflow through the use of variable-speed drives.

[FIGURE 5 OMITTED]

Control strategies to deal with temperature and humidity fluctuations were considered along with adequate filtration to control particulate contamination. Low-pressure-drop filter design was provided to avoid the potential penalty of additional fan energy use. Each of the centers using outside air reported that control of their environmental conditions was not a problem and that IT equipment was not adversely affected.
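To make the control strategy concrete, the following sketch shows, in simplified form, how an air-side economizer decision might be made for a single hour of weather data. It is an illustration only: the function name and all of the setpoints are assumptions (the 55°F full free-cooling threshold comes from the discussion above, but the partial-cooling limit and the dew-point lockout band are placeholder values) and would be chosen for the specific climate and the environmental requirements of the IT equipment.

```python
def economizer_mode(oa_drybulb_f, oa_dewpoint_f,
                    full_free_cooling_max_f=55.0,  # threshold noted in the text
                    partial_cooling_max_f=70.0,    # assumed; site-specific
                    min_dewpoint_f=42.0,           # assumed humidification lockout limit
                    max_dewpoint_f=59.0):          # assumed dehumidification lockout limit
    """Return a suggested economizer mode for one hour of weather data.

    A simplified illustration of the decision logic, not a vendor control sequence.
    """
    # Lock out the economizer if bringing in outside air would force
    # excessive humidification or dehumidification.
    if oa_dewpoint_f < min_dewpoint_f or oa_dewpoint_f > max_dewpoint_f:
        return "locked out (humidity)"
    # Below roughly 55 F, the text notes that chillers may not need to run at all.
    if oa_drybulb_f <= full_free_cooling_max_f:
        return "full free cooling"
    # Between the two thresholds, outside air can offset part of the cooling load.
    if oa_drybulb_f <= partial_cooling_max_f:
        return "partial (integrated) economizer"
    return "mechanical cooling only"
```

In practice, such logic would run continuously in the building control system, with the lockout band tuned to the data center's allowable temperature and humidity envelope.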

CENTRALIZED AIR HANDLING

Better performance was observed in data center air systems that used custom-designed central air-handler systems (Figure 6). Centralized systems exhibited advantages over the traditional multiple distributed-unit systems found in other centers. The centralized systems observed during energy benchmarking used larger motors and fans, which themselves were generally more efficient than traditional computer room systems. They were also able to take advantage of variable-volume operation by using variable-frequency drives. Reduction in airflow and resulting fan-power reduction was possible because the data centers were designed for much larger loads than they were experiencing.

The centralized air-handling systems improved efficiency by taking advantage of surplus and redundant capacity. For example, operating three 30,000 cfm air handlers to provide a total of 60,000 cfm required about half the power of operating just two air handlers for the same 60,000 cfm, and it also improved reliability. The fan-law relationship is not exact, but it is an approximate predictor of actual performance (see the sketch following this paragraph). The centralized systems were also able to distribute the needed volumes of air efficiently through variable-airflow boxes or damper controls. Both overhead and underfloor distribution systems were used with the central air handlers in the benchmarked centers. The most efficient HVAC system observed used an overhead distribution system with no raised floor; however, even the centers that combined underfloor distribution with large central air handlers were more efficient.
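The "about half the power" observation follows from the fan affinity (cube) law, which the short calculation below reproduces. This is a sketch under ideal fan-law assumptions; the 30,000 cfm unit size and 60,000 cfm total are the figures from the example above, and real fans and drives will deviate somewhat from the ideal cube relationship.

```python
# Fan affinity (cube) law: for a fan on a variable-frequency drive in a fixed
# system, power scales roughly with the cube of airflow.
RATED_CFM = 30_000   # per air handler, from the example in the text
TOTAL_CFM = 60_000   # total airflow the space requires

def relative_fan_power(n_units, rated_cfm=RATED_CFM, total_cfm=TOTAL_CFM):
    """Total fan power, relative to one air handler running at full speed."""
    flow_fraction = (total_cfm / n_units) / rated_cfm
    return n_units * flow_fraction ** 3

two_units = relative_fan_power(2)    # 2 units at 100% speed -> 2.00
three_units = relative_fan_power(3)  # 3 units at ~67% speed -> ~0.89
print(f"Three units draw about {three_units / two_units:.0%} "
      f"of the power of two units")  # roughly 44%, i.e., about half
```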

[FIGURE 6 OMITTED]

Ductwork and plenums associated with the centralized systems were typically oversized for current conditions because they were sized for full design-load conditions, which were far above the loads measured during benchmarking. In addition, some systems were designed to achieve a low pressure drop (compared to standard building systems) even under full-load conditions. These factors contributed to energy-efficient operation, because the pressure drop (resistance to airflow) was extremely low at the current loading conditions.
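The benefit of a low pressure drop can be seen from the standard relationship among airflow, pressure, and fan power. The numbers below are purely illustrative assumptions (neither the pressure drops nor the fan efficiency come from the benchmarked centers); the point is simply that, at a fixed airflow, fan power scales directly with the total pressure the fan must overcome.

```python
def fan_power_kw(airflow_cfm, pressure_in_wg, fan_efficiency=0.65):
    """Approximate fan power (kW) for a given airflow and total pressure drop.

    Uses the common inch-pound relation bhp = cfm * in. w.g. / (6356 * efficiency);
    the 0.65 efficiency is an assumed, illustrative value.
    """
    bhp = airflow_cfm * pressure_in_wg / (6356 * fan_efficiency)
    return bhp * 0.746  # brake horsepower to kW

# Illustrative only: assumed pressure drops at a fixed 60,000 cfm
standard_kw = fan_power_kw(60_000, pressure_in_wg=2.0)  # typical fully loaded system
low_drop_kw = fan_power_kw(60_000, pressure_in_wg=0.8)  # oversized ducts, light loading
print(f"{standard_kw:.0f} kW vs. {low_drop_kw:.0f} kW")  # roughly 22 kW vs. 9 kW
```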

The maintenance-saving benefits of central systems are well known. By placing the central air handlers, and the maintenance they require, outside the data center space, the benchmarked centers increased the space available for IT equipment and its associated maintenance. Because the central units shared a common control system, they did not "fight" one another to maintain humidity control; the simultaneous humidification and dehumidification often seen in benchmarked centers with distributed units under independent, uncoordinated controls was eliminated. Another reason that systems using central air handlers were more efficient was that their cooling source was typically a water-cooled chiller plant, which is more efficient than the cooling sources of the other benchmarked systems.

FREE COOLING USING WATER-SIDE ECONOMIZERS

As with air economizers, in many climates there are a significant number of hours per year when part or all of the cooling can be provided without using compressors. Bin analysis using local weather data is required to assess the benefits of economizers, but free cooling generally is best suited for climates that have wet-bulb temperatures lower than 55°F for 3,000 or more hours per year. Water-side economizers can be very effective in chilled-water systems designed for 50°F and above. Several of the better-performing data centers observed during benchmarking were taking advantage of free cooling. Figure 7 shows that there are approximately 3,200 hours (over a third of the year) when free cooling can be provided in San Jose, California.
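A first-cut version of that bin analysis can be done with a few lines of code once hourly weather data for the site are available. The sketch below simply counts hours below the 55°F wet-bulb threshold cited above; the hourly wet-bulb series is assumed to come from TMY or locally measured data and is not included here, and a full assessment would also account for cooling tower and heat-exchanger approach temperatures.

```python
FREE_COOLING_WETBULB_F = 55.0  # wet-bulb threshold cited in the text

def free_cooling_hours(hourly_wetbulb_f, threshold_f=FREE_COOLING_WETBULB_F):
    """Count the hours in a year of hourly data when the wet-bulb is below the threshold.

    hourly_wetbulb_f: an 8,760-element sequence of wet-bulb temperatures (deg F),
    e.g., from TMY weather data for the site (not included here).
    """
    return sum(1 for wb in hourly_wetbulb_f if wb < threshold_f)

# Per the text, a promising site shows roughly 3,000 or more such hours;
# San Jose, CA, is reported at about 3,200 hours, over a third of 8,760.
```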

When operating with free cooling, energy consumption for the chilled-water plant can be reduced by up to 75% while improving reliability. The centers using free cooling also received other benefits. While operating with free cooling, the chillers were available as a redundant cooling source; because they were not required to operate continuously, the centers’ maintenance costs were lower.

Figure 8 shows the plate-and-frame heat exchanger used in one of the benchmarked centers. Heat exchangers are typically used to isolate the chilled-water loop from the open cooling-tower condenser water to prevent fouling of coils. Figure 9 is a schematic diagram showing the heat exchanger configuration in one of the benchmarked centers. This center took advantage of its mild climate to reduce HVAC electrical loads through reduced chiller operation.

Using medium-temperature chilled water in the range of 50°F or higher can maximize the potential savings from a free-cooling system. It is likely that energy-efficient centers have chilled-water systems in this range to avoid dehumidification/rehumidification problems.

EFFICIENT UNINTERRUPTIBLE POWER SUPPLIES

Most data centers rely on uninterruptible power supplies (UPSs) to provide backup power and ensure reliability. A variety of systems are offered, including those based on battery banks, fuel cells, rotary machines, and other technologies. All of the electrical power to the IT equipment in a data center typically flows through one of these devices, and some of that power is lost to device inefficiencies (e.g., conversion losses). Benchmarking these systems made it apparent that the percentage of power lost in a UPS is much greater when it is lightly loaded. Figure 10 illustrates the efficiency drop-off at lower load factors, based on measurements taken during benchmarking.

In one center, benchmarking revealed that losses in the UPS system, which was oversized and operating at a very low load factor, amounted to approximately 50% of the power delivered to the IT equipment. The HVAC system then had to remove all of that heat as well, so HVAC effectiveness in this case was very poor. Once these data were presented to the data center operators, the decision to replace the UPS with a correctly sized unit was easily justified.

[FIGURE 7 OMITTED]

As a result of the initial benchmark findings, a study of the relative efficiency of UPS systems was undertaken. This study confirmed that the efficiency curves drop significantly at low load factors, but it also highlighted that there is a wide range of efficiency among the various UPS systems offered for data center applications. Figure 11 shows the measured performance of the tested UPS systems.

Selecting a more efficient UPS system can yield an efficiency improvement of several percentage points, which is compounded by the corresponding savings in HVAC. Because redundancy for reliability is a key feature of most data centers, individual UPS systems are often loaded at less than 50% of their rated load, which is where efficiency drops off. Different redundancy configurations (e.g., N + 1, 2N, etc.) result in different load factors on the UPS systems. For example, in Figure 12, if a 600 kW design load is backed up in a 2N approach and the real operating load is 50% of the design load, then each UPS carries only 150 kW, a load factor of 25%. For the same total equipment load in the configuration on the right, each UPS would instead carry 100 kW, a load factor of 33%. An efficiency gain of approximately 5% would be realized just from operating a UPS at 33% rather than 25% of full load, and both configurations maintain the same level of redundancy. Additional HVAC savings would also be realized.
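The load-factor arithmetic in that example generalizes easily. The sketch below reproduces the Figure 12 comparison; the 600 kW design load, 50% actual loading, and the two configurations come from the text, while the assumption that the three-module arrangement uses 300 kW modules is inferred from the 100 kW (33%) figure and should be treated as illustrative.

```python
def ups_load_factor(design_load_kw, actual_fraction, units, unit_rating_kw):
    """Per-unit load (kW) and load factor when the actual load is shared by all units."""
    actual_load_kw = design_load_kw * actual_fraction
    per_unit_kw = actual_load_kw / units
    return per_unit_kw, per_unit_kw / unit_rating_kw

# 2N: two UPS modules, each rated for the full 600 kW design load
kw_2n, lf_2n = ups_load_factor(600, 0.5, units=2, unit_rating_kw=600)
# Three-module arrangement (assumed here to use 300 kW modules, per the 100 kW figure)
kw_3m, lf_3m = ups_load_factor(600, 0.5, units=3, unit_rating_kw=300)

print(f"2N:           {kw_2n:.0f} kW per unit, load factor {lf_2n:.0%}")  # 150 kW, 25%
print(f"Three-module: {kw_3m:.0f} kW per unit, load factor {lf_3m:.0%}")  # 100 kW, 33%
```

The corresponding efficiency gain would then be read from a measured curve such as Figure 11 at the two load factors; the approximately 5% figure cited above comes from the paper's measurements, not from this sketch.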

Energy benchmarking of UPS systems led to a realization of several key points:

1. Accurate IT equipment load determination can allow downsizing UPS systems and loading them so they are in a more efficient operating range.

[FIGURE 8 OMITTED]

2. Selection of a UPS system should be based in part on the efficiency in the load range in which it is expected to operate.

3. Life-cycle cost evaluations (total cost of ownership) can easily justify selection of more efficient UPS systems.

[FIGURE 9 OMITTED]

[FIGURE 10 OMITTED]

[FIGURE 11 OMITTED]

4. Redundancy configurations greatly affect energy efficiency.

CONCLUSIONS

Energy benchmarking provides a wealth of information to a data center operator. Uses for the benchmarks include establishing a baseline and tracking performance over time, identifying maintenance problems, comparing performance, and setting operational goals; however, they can also help identify strategies that lead to more efficient performance. The five strategies described in this paper were identified by examining how better-performing data centers achieved their performance. Many of the centers benchmarked used some or all of these strategies. In addition, a number of other areas were noted and design guides were developed based upon the best observed practices.

Because data centers operate continuously, small improvements in efficiency can translate into large annual savings. Efficient operation may also allow for reductions in equipment sizing or allow for future growth.

ACKNOWLEDGMENTS

Special thanks are extended to the California Institute for Energy Efficiency (CIEE), the California Energy Commission Public Interest Energy Research Industrial program, and Pacific Gas and Electric Company for sponsoring work performed by Lawrence Berkeley National Laboratory, Ecos Consulting, EPRI Solutions, EYP Mission Critical Facilities, and Rumsey Engineers.

[FIGURE 12 OMITTED]

REFERENCES

PG&E. 2006. High Tech Buildings Data Center Airflow Project: Emerging Technology Demonstration Final Project Report. Pacific Gas and Electric Company Emerging Technologies Program and EYP Mission Critical Facilities, Inc.

Tschudi, W., and P. Liesman. 2003. New York Data Center Case Studies.

William Tschudi, PE

Member ASHRAE

Stephen Fok

Member ASHRAE

William Tschudi is program manager at Lawrence Berkeley National Laboratory, Berkeley, CA. Stephen Fok is senior program engineer at Pacific Gas and Electric Company, San Francisco, CA.
