Cooling Systems Efficiency in a Data Center

Modern data center equipment racks can produce very concentrated heat loads. In facilities of all sizes, from a small data center supporting an office building to dedicated enterprise or co-location facilities, designing for precise control of the airflow that collects and removes equipment waste heat has a significant impact on energy efficiency and equipment reliability.

Previously, data centers were designed primarily for reliability, not energy efficiency. On average, older data centers use twice as much total energy as is delivered to the computing equipment (a PUE of 2.0, or about 50% operating efficiency). The PUE of some older sites is even worse, at 2.5 – 3.0. In other words, these facilities were using as much or more energy on the power and cooling systems supporting the computing equipment as the IT equipment itself consumed. From these figures, we can see that cooling typically represents the majority of facility overhead energy use and offers the greatest opportunity for improvement. Optimizing energy usage throughout a data center can be a sophisticated exercise, but a few relatively simple steps can deliver big savings. Let us look at the areas of a data center where we can improve cooling efficiency.
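
To make the PUE arithmetic concrete, here is a minimal sketch in Python; the load figures are illustrative assumptions, not measurements from any particular facility.

```python
# Minimal sketch: PUE and infrastructure overhead from facility metering data.
# The load figures below are illustrative assumptions, not measurements.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def overhead_kw(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power consumed by cooling, power distribution, lighting, etc."""
    return total_facility_kw - it_equipment_kw

if __name__ == "__main__":
    it_kw = 500.0             # assumed IT load
    legacy_total_kw = 1000.0  # PUE 2.0: one extra watt of overhead per IT watt
    modern_total_kw = 700.0   # PUE 1.4: typical of newer designs

    print(f"Legacy PUE: {pue(legacy_total_kw, it_kw):.2f} "
          f"(overhead {overhead_kw(legacy_total_kw, it_kw):.0f} kW)")
    print(f"Modern PUE: {pue(modern_total_kw, it_kw):.2f} "
          f"(overhead {overhead_kw(modern_total_kw, it_kw):.0f} kW)")
```

At a PUE of 2.0, every kilowatt of IT load drags along another kilowatt of infrastructure overhead, most of it cooling; at 1.4, that overhead shrinks to 0.4 kW.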

Cooling Plant Optimizations

It is estimated that about 38% of a data center's electricity is consumed by cooling systems, so the central plant offers a number of optimization opportunities, both in design and operation. A medium-temperature chilled water loop design using 55°F chilled water improves chiller efficiency and eliminates uncontrolled phantom dehumidification loads. The condenser loop should also be optimized; a 5-7°F approach cooling tower plant with a condenser water temperature reset pairs nicely with variable-speed (VFD) chillers to offer large energy savings. A primary-only variable volume pumping system is well matched to modern chiller equipment and offers fewer points of failure, lower first cost, and energy savings. Thermal energy storage can be a good option and is particularly suited to critical facilities, where a ready store of cooling can have reliability benefits as well as peak demand savings. Finally, monitoring the efficiency of the chilled water plant is a requirement for optimization, and basic, reliable energy and load monitoring sensors can quickly pay for themselves in energy savings. If efficiency (or at least cooling power use) is not independently measured, achieving it is almost as much a matter of luck as design.
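
As a concrete example of the plant monitoring point above, a common efficiency metric is electrical kW per ton of cooling delivered. The sketch below shows the calculation in Python; the sensor readings are hypothetical and would come from power meters and a BTU (flow and delta-T) meter on the chilled water loop in a real plant.

```python
# Minimal sketch of chilled-water plant efficiency monitoring (kW/ton).
# All input values are hypothetical example readings.

KW_PER_TON = 3.517  # 1 ton of refrigeration = 3.517 kW of heat removal

def plant_kw_per_ton(chiller_kw: float, pump_kw: float, tower_kw: float,
                     cooling_load_kw_thermal: float) -> float:
    """Aggregate plant efficiency: electrical input per ton of cooling delivered."""
    tons = cooling_load_kw_thermal / KW_PER_TON
    return (chiller_kw + pump_kw + tower_kw) / tons

# Example: 600 kW (thermal) of cooling delivered for 140 kW of electrical input.
efficiency = plant_kw_per_ton(chiller_kw=105.0, pump_kw=20.0, tower_kw=15.0,
                              cooling_load_kw_thermal=600.0)
print(f"Plant efficiency: {efficiency:.2f} kW/ton")  # ~0.82 kW/ton
```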

1. Cooling Systems Design

Design is the first and foremost consideration in achieving the most efficient cooling systems for your data center. Air management for data centers entails all the design and configuration details that go into minimizing or eliminating mixing between the cooling air supplied to equipment and the hot air rejected from the equipment. When designed correctly, an air management system can reduce operating costs, reduce first-cost equipment investment, increase the data center’s density (W/sf) capacity, and reduce heat-related processing interruptions or failures. A few key design issues include the location of supplies and returns, the configuration of the equipment’s air intake and heat exhaust ports, and the large-scale airflow patterns in the room. Improved airflow management requires optimal positioning of the data center equipment, proper location and sizing of air openings, and careful design and upkeep of the HVAC system. While the application can vary widely, the overall objective is simple: ‘Remove the heat’. New mainstream data centers being designed and built today typically have much better operating efficiencies, with PUEs in the range of 1.3 – 1.6.

2. Centralized Air Handling

Early in the evolution of data centers, the typical cooling system involved multiple small air-cooled split systems with small vertical air handlers and independent integrated controls that stood in the data center room and provided cooling. Such a system was easy to install in existing buildings that were initially constructed without consideration for the high density sensible heat loads of modern electronic equipment. Now that the loads and conditioning requirements of data centers are relatively well understood, purpose-built central air handler systems can be designed to meet typical data center requirements with greater efficiency than the traditional multiple distributed units design seen in the figure below.

Better performance has been observed in data center air systems that utilize specifically designed central air handler systems. A centralized system offers many advantages over the traditional multiple distributed unit system that evolved as an easy, drop-in computer room cooling appliance. Centralized systems use larger motors and fans and can be more efficient. They are also well suited to variable volume operation through the use of Variable Speed Drives (VSDs, also referred to as Variable Frequency Drives or VFDs). Most data center loads do not vary appreciably over the course of the day, and the cooling system is typically oversized with significant reserve capacity. A centralized air handling system can take advantage of this surplus and redundant capacity to actually improve efficiency. The maintenance benefits of a central system are well known, and the reduced footprint and maintenance traffic in the data center are additional benefits. The typical approach to a centralized system for a data center is to use a single supply air plenum fed by a few large air handlers. Depending on the size of the space and the frequency of data center load changes, variable airflow boxes may be used within the distribution system to actively direct the airflow to the highest load regions. Alternatively, the system may simply be statically balanced after every major change in load. The figure below shows a centralized air handler system where the data center distribution system is fed from above.
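
The reason surplus fan capacity can improve efficiency comes from the fan affinity laws: fan power falls roughly with the cube of airflow. Here is a minimal, idealized sketch; real fans and drives deviate from the cube law, especially at low speeds, so treat the numbers as an upper bound on the effect.

```python
# Minimal sketch of the fan affinity law that makes VFD-equipped central air
# handlers attractive: fan power scales roughly with the cube of airflow.
# Idealized figures; real fans deviate from the cube law at low speeds.

def fan_power_fraction(flow_fraction: float) -> float:
    """Idealized affinity law: P/P_full ~= (Q/Q_full)**3."""
    return flow_fraction ** 3

# One fan at 100% flow vs. two identical fans sharing the same total flow.
single_fan = fan_power_fraction(1.0)            # 1.00 x rated power
two_fans_shared = 2 * fan_power_fraction(0.5)   # 2 x 0.125 = 0.25 x rated power

print(f"Single fan at full flow:    {single_fan:.2f} x rated power")
print(f"Two fans at half flow each: {two_fans_shared:.2f} x rated power")
```

In this idealized model, two fans at half speed move the same total air as one fan at full speed while drawing only a quarter of the power, which is why running redundant units slowly on VFDs can beat running fewer units flat out.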

If you would like to understand more about this approach, do read the article from PGE (page 26), which covers this topic in detail.

3. Air-Side Economizer (Economize with Outside Air)

The purpose of an economizer is to reduce the runtime and energy used for “mechanical cooling” (typically a compressor-based system). These economizer systems can be part of a water-cooled evaporative system, an air-cooled system, or a combination of both.

The concept of a direct “air-side” economizer is simple: bring fresh outside air into the data center to “cool” it whenever outside temperatures are within the range required by the IT equipment, and exhaust the hot air from the IT equipment out of the building. Only when the outside air is too warm is mechanical cooling required. Air-side economizers are a duct and damper arrangement with an automatic control system that together allow a cooling system to supply outdoor air to reduce or eliminate the need for mechanical cooling during mild or cold weather. They serve a dual purpose of saving cooling energy and improving indoor air quality by supplying additional outdoor air. Air-side economizers provide a low-cost solution to this design requirement. By raising supply temperatures in accordance with recent guidelines from ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), data centers can take advantage of the economizer mode that many data center cooling systems now have, which essentially involves using outside air for cooling. The savings derived from using economizer mode can add up quickly; combined with an air-side economizer, air management can reduce data center cooling costs by over 60%. The effectiveness of an economizer depends on the load characteristics of the building, the type of HVAC system, and the local climate. If you would like to know more about air-side economizers, have a look at the animated video from STULZ and the detailed article from IES.
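
The control decision behind an air-side economizer can be summarized in a few lines of logic. The sketch below is a simplified illustration in Python; the setpoints are assumptions chosen for the example, not values from ASHRAE or any specific controller, and a real system would also screen humidity and air quality before opening the dampers.

```python
# Minimal sketch of air-side economizer staging logic.
# Thresholds are assumed example values, not standard setpoints.

def economizer_mode(outdoor_temp_f: float, supply_setpoint_f: float = 65.0,
                    return_temp_f: float = 85.0) -> str:
    """Decide how to meet the supply-air setpoint for the current conditions."""
    if outdoor_temp_f <= supply_setpoint_f:
        # Outside air is cold enough on its own; mix with return air as needed.
        return "full economizer (no mechanical cooling)"
    elif outdoor_temp_f < return_temp_f:
        # Outside air is cooler than return air, so it offsets part of the load.
        return "partial economizer + mechanical cooling"
    else:
        # Outside air is warmer than return air: dampers closed, compressors only.
        return "mechanical cooling only"

for oat in (45.0, 72.0, 95.0):
    print(f"{oat:5.1f} F outdoors -> {economizer_mode(oat)}")
```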

4. Water-Side Economizer

Water-side economizers predate air-side economizers. A water-side economizer uses the evaporative cooling capacity of a cooling tower to produce chilled water and can be used instead of the chiller during the winter months. (A chiller is a machine that removes heat from a liquid via a vapor-compression or absorption refrigeration cycle; the liquid can then be circulated through a heat exchanger to cool air or equipment as required.) Water-side economizers can be integrated with the chiller or non-integrated. Integrated water-side economizers are the better option because they can pre-cool water before it reaches the chiller, while non-integrated water-side economizers run in place of the chiller only when conditions allow. Water-side economizers also offer cooling redundancy because they can provide chilled water in the event that a chiller goes offline, which reduces the risk of data center downtime. Water-side economizers are best suited to climates where the wet-bulb temperature is lower than 55°F for 3,000 hours or more per year. During water-side economizer operation, the operating costs of a chilled water plant can be reduced by up to 70%.
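
That climate screening rule (3,000 or more hours per year below a 55°F wet-bulb temperature) is easy to check against hourly weather data. The sketch below uses randomly generated temperatures as a stand-in for a real TMY weather file, so treat it as a template rather than an assessment of any real site.

```python
# Minimal sketch of the climate screening rule above: count the hours per
# year with a wet-bulb temperature below 55 F. The hourly series is randomly
# generated as a stand-in for real weather-file data.

import random

def hours_below(hourly_wet_bulb_f, threshold_f: float = 55.0) -> int:
    """Number of hours in the series with wet-bulb temperature below the threshold."""
    return sum(1 for t in hourly_wet_bulb_f if t < threshold_f)

# Stand-in for 8,760 hours of weather data (replace with a real weather file).
random.seed(0)
fake_year = [random.gauss(58.0, 12.0) for _ in range(8760)]

cool_hours = hours_below(fake_year)
print(f"Hours below 55 F wet-bulb: {cool_hours}")
print("Worth evaluating a water-side economizer:", cool_hours >= 3000)
```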

IT Space

Now let us look at the areas we can optimize to improve cooling efficiency for the components in the white space (the IT equipment area).

5. Eliminate Overcooling of Systems

In the data center, power and cooling go hand in hand. Traditionally, every watt brought into the data center to power equipment requires a second watt of power for cooling. Overcooling equipment is not an efficient use of energy. Most data centers cool equipment based on manufacturers’ power-load recommendations. Because manufacturers typically base their power consumption estimates on running peak loads all the time, a condition that is rarely met, most of the time equipment in data centers is overcooled.

Calculating accurate power loads can be tricky. Generally, buses in the data center are shared by servers and storage, making it hard to separate server and storage power requirements. To arrive at reasonable power-load estimates for specific circumstances, we can test equipment in a lab environment before deployment in our data center. Such testing often shows that reasonable power-load estimates are 30% to 40% lower than manufacturer estimates. Once we deploy systems, we monitor rack-by-rack power usage and balance the phases as needed. By continually tuning cooling systems based on this measured experience, we can cut the amount of energy that would otherwise be wasted on overcooling our systems.
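
As a simple illustration of that derating approach, the sketch below applies a 35% reduction to manufacturer nameplate figures to get a working rack-load estimate. The server count, nameplate wattage, and derate factor are hypothetical; your own lab testing and rack-level metering should supply the real numbers.

```python
# Minimal sketch: derive a working rack power-load estimate by derating
# manufacturer nameplate figures. All inputs are hypothetical examples.

def rack_load_estimate(nameplate_watts_per_server: float, servers: int,
                       derate: float = 0.35) -> float:
    """Apply a 30-40% derate (35% here) to manufacturer peak-power figures."""
    return nameplate_watts_per_server * servers * (1.0 - derate)

nameplate_total = 750.0 * 20              # 20 servers at 750 W nameplate
estimated = rack_load_estimate(750.0, 20) # derated working estimate

print(f"Nameplate total:  {nameplate_total / 1000:.1f} kW")
print(f"Working estimate: {estimated / 1000:.1f} kW (verify with rack-level metering)")
```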

There are clearly some legitimate reasons to keep lower temperatures. The first is a concern about the loss of thermal ride-through time in the event of a brief loss of cooling; this is especially true for higher density cabinets, where an event of only a few minutes could cause an unacceptably high IT intake temperature. This can occur during the loss of utility power and the subsequent transfer to a backup generator, which typically takes 30 seconds or less but will cause most compressors in chillers or CRAC units to recycle and remain off for 5–10 minutes or more. While there are some ways to minimize or mitigate this risk, it is a valid concern. So what do you think is the best option for your data center?
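
To see why ride-through is so short in dense rooms, here is a rough back-of-envelope estimate of how fast the room air heats up with no cooling at all. It deliberately ignores the thermal mass of the equipment, floor, walls, and any stored chilled water, so it is pessimistic; the room volume, loads, and allowable temperature rise are assumed values for illustration.

```python
# Minimal sketch of thermal ride-through: how fast room air heats up when
# cooling stops, ignoring all thermal mass except the air itself.

AIR_DENSITY = 1.2  # kg/m^3 (approximate, near sea level)
AIR_CP = 1005.0    # J/(kg*K), specific heat of air

def ride_through_minutes(it_load_kw: float, room_volume_m3: float,
                         allowed_rise_c: float) -> float:
    """Minutes until room air rises by allowed_rise_c with no cooling at all."""
    heat_rate_w = it_load_kw * 1000.0
    air_thermal_mass_j_per_c = AIR_DENSITY * room_volume_m3 * AIR_CP
    seconds = allowed_rise_c * air_thermal_mass_j_per_c / heat_rate_w
    return seconds / 60.0

# Hypothetical 500 m^3 room with a 10 C allowable intake-temperature rise.
for load_kw in (50.0, 150.0):
    t = ride_through_minutes(load_kw, room_volume_m3=500.0, allowed_rise_c=10.0)
    print(f"{load_kw:5.0f} kW IT load -> ~{t:.1f} minutes of ride-through")
```

Even with the air alone, a dense room blows through its temperature margin in a minute or two, which is why a 5–10 minute compressor restart delay matters so much.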

6. Implement Cable Management

Under-floor and over-head obstructions often interfere with the distribution of cooling air. Such interferences can significantly reduce the air handlers’ airflow as well as negatively affect the air distribution. Cable congestion in raised-floor plenums can sharply reduce the total airflow as well as degrade the airflow distribution through the perforated floor tiles. Both effects promote the development of hot spots. A minimum effective (clear) height of 24 inches should be provided for raised floor installations. Greater underfloor clearance can help achieve a more uniform pressure distribution in some cases.

A data center should have a cable management strategy to minimize airflow obstructions caused by cables and wiring. This strategy should target the entire cooling airflow path, including the rack-level IT equipment air intake and discharge areas as well as under-floor areas. Persistent cable management is a key component of maintaining effective air management. Instituting a cable mining program (i.e. a program to remove abandoned or inoperable cables) as part of an ongoing cable management plan will help optimize the air delivery performance of data center cooling systems.

7. Aisle Separation and Containment

A basic hot aisle/cold aisle configuration is created when the equipment racks and the cooling system’s air supply and return are designed to prevent mixing of the hot rack exhaust air and the cool supply air drawn into the racks. As the name implies, the data center equipment is laid out in rows of racks with alternating cold (rack air intake side) and hot (rack air heat exhaust side) aisles between them. Strict hot aisle/cold aisle configurations can significantly increase the air-side cooling capacity of a data center’s cooling system.

All equipment is installed into the racks to achieve a front-to-back airflow pattern that draws conditioned air in from cold aisles, located in front of the equipment, and rejects heat out through the hot aisles behind the racks. Equipment with non-standard exhaust directions must be addressed in some way (shrouds, ducts, etc.) to achieve a front-to-back airflow. The rows of racks are placed back-to-back, and holes through the rack (vacant equipment slots) are blocked off on the intake side to create barriers that reduce recirculation, as shown in the picture below. Additionally, cable openings in raised floors and ceilings should be sealed as tightly as possible. With proper isolation, the temperature of the hot aisle no longer impacts the temperature of the racks or the reliable operation of the data center; the hot aisle becomes a heat exhaust. The air-side cooling system is configured to supply cold air exclusively to the cold aisles and pull return air only from the hot aisles.


For most data centers, using hot aisle/cold aisle containment to improve air-cooling efficiency is popular and easy to retrofit into existing facilities. It helps isolate airflow and eliminate hot spots. The methods mentioned above are the basic level of hot and cold air separation; there are many more advanced systems that improve on this efficiency. I would strongly suggest you have a look at the data center containment system article to understand more about this.

8. Blanking Panels & Rack Airflow Management (RAM)

To help better segregate the aisles, a common best practice is to block off open holes in rack slots with the components discussed below, which prevents hot-air leakage. While the method above aligns the equipment to create a cold aisle at the front of the racks and a hot aisle at the back, some rack units may be left unoccupied because equipment was decommissioned or upgraded, or because the space is reserved for future use. As you can imagine, these openings let cold air and hot air mix, which works against cooling efficiency. The most effective way to prevent this is to use blanking panels, which cover up the empty spaces between rack units. In other cases there may be an excessive gap between the rails and the sides of the cabinet; this can also cause mixing of hot and cold air and can be prevented by using Rack Airflow Management (RAM) material. There are also many situations where cables have to pass from the front of a device to the back, which again creates extra openings between devices; this challenge can be resolved with Pass Through Blanking Panels. Using these materials not only fills the empty spaces, it also increases the strength of your rails and protects your server equipment by blocking access through open rack space. All of these materials come in various shapes and sizes (rack units).

Don’t forget to check out these videos from Upsite Institute, which describe the live implementation and usage of Blanking Panels, Pass Through Blanking Panels, and Rack Airflow Management (RAM).

9. Direct Liquid Cooling

Direct Liquid Cooling (DLC) uses the exceptional thermal conductivity of liquid to provide dense, concentrated cooling to targeted areas. Direct liquid cooling refers to a number of different cooling approaches that all share the same characteristic: they transfer waste heat to a fluid at or very near the point where it is generated, rather than transferring it to room air and then conditioning the room air. By using DLC and warm water, the dependence on fans and expensive air handling systems is drastically reduced. I would strongly recommend having a look at the two main methodologies from Fujitsu: Cool-Central Liquid Cooling (using water as the heat-removal agent) and Liquid Immersion Cooling (using a dielectric liquid as the heat-removal agent). From this you can see the advantages of liquid cooling: it eliminates many of the components of a conventional cooling infrastructure, which can reduce the electricity used for cooling by 70% – 95%, shrink the data center footprint by up to 10x, cut fan noise, reduce the staffing needed to maintain traditional infrastructure, and dramatically simplify the design. Many manufacturers provide this solution, but it has yet to be widely adopted by industry, largely due to the fear of adopting new technologies too quickly. Hopefully, this technology can drive a better future. Don’t forget to check out the live demo too.
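
A quick way to appreciate why liquid cuts the dependence on fans is to compare how much air versus water you must move to carry away the same heat at the same temperature rise. The sketch below uses standard property values for air and water; the 1 kW load and 10°C rise are arbitrary illustration figures.

```python
# Minimal sketch: volumetric flow of air vs. water needed to remove the same
# heat at the same coolant temperature rise (Q = m_dot * cp * dT).

def volume_flow_m3_per_s(heat_w: float, delta_t_c: float,
                         density_kg_m3: float, cp_j_kg_c: float) -> float:
    """Volumetric flow = Q / (rho * cp * dT)."""
    return heat_w / (density_kg_m3 * cp_j_kg_c * delta_t_c)

HEAT_W = 1000.0  # remove 1 kW of waste heat
DELTA_T = 10.0   # 10 C coolant temperature rise

air_flow = volume_flow_m3_per_s(HEAT_W, DELTA_T, density_kg_m3=1.2, cp_j_kg_c=1005.0)
water_flow = volume_flow_m3_per_s(HEAT_W, DELTA_T, density_kg_m3=998.0, cp_j_kg_c=4186.0)

print(f"Air:   {air_flow * 1000:.1f} L/s")    # ~83 L/s
print(f"Water: {water_flow * 1000:.3f} L/s")  # ~0.024 L/s
print(f"Air needs ~{air_flow / water_flow:.0f}x the volumetric flow of water")
```

In this simplified comparison, water needs only about 1/3,500th of the volumetric flow of air for the same heat load, which is why liquid loops can displace so much fan and air-handler capacity.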

SUMMARY

The redundancy level and environmental control of a data center determine the potential energy efficiency of the data center infrastructure. Carefully consider the acceptable availability and reliability levels for the data center, as higher reliability involves higher built-in redundancies at all system levels. By optimizing certain operations and processes, primarily temperature control and cooling, it is possible to improve the efficiency of a data center and reap cost savings in the process. In some circles, this is almost unheard of, as data centers tend to be powerhouses of consumption and excess. They demand huge loads of energy and must remain online at all times of the day and night, which requires incredible levels of reliability and performance. Still, there is no reason not to at least attempt to mitigate some of that excess. Ultimately, it is a question of whether the energy (and cost) saved is worth the risk (perceived or real) of potential equipment failures due to higher or lower temperatures (and perhaps wider humidity ranges). Which of these approaches do you think you can adopt?

Knowledge Credits: pge.com

Have a comment or points to be reviewed? Let us grow together. Feel free to comment.
