IT Systems Efficiency in a Data Center
In the past, IT equipment manufacturers and data center operators focused more on maximum performance than on energy efficiency. While the newest generations of computing hardware have continued to increase in performance, they have also become far more energy efficient. In fact, it has been shown that in many cases the cost of energy for an older commodity server over a three-year period is higher than the cost of the server itself. This is especially true in older data centers, where the cost of energy for the supporting power and cooling infrastructure is added to the total energy cost (see Understanding PUE). The EPA (United States Environmental Protection Agency) estimates that servers and storage account for 50% of all power usage in a data center. This makes IT equipment one of the major areas you need to work on to achieve overall energy efficiency in the data center. Let us look into some of the areas you can consider.
1. Virtualize more workloads
Server virtualization is a hot trend right now, and for good reason: it can significantly extend the life of a data center by producing notable savings in space, power, and cooling. To get the full benefits of server virtualization, you need a storage infrastructure that provides pooled, networked storage. The same economics apply to storage virtualization: fewer, larger storage systems provide more capacity and better utilization, resulting in less space, power, and cooling.
In the process of implementing storage and server virtualization, NetApp also moved to more energy-efficient storage systems. They replaced 50 older storage systems with 10 of the latest storage systems running Data ONTAP® 7G. Upgrading to the latest technology brought these benefits:
- Storage rack footprint went from 25 to 6 racks.
- Power requirements went from 329 kW to 69 kW.
- Air conditioning capacity requirements went down by 94 tons.
- Electricity costs to power those systems went down by $60,000 per year.
Aren’t these results amazing when the goal is a more energy-efficient data center? This simple case study alone is enough to show how important server and storage virtualization is in a data center, right?
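To get a feel for numbers like these in your own environment, a back-of-the-envelope estimate is often enough to start the conversation. Here is a minimal Python sketch, assuming you know the average power draw of the old and new systems; the $0.10/kWh electricity rate is an illustrative assumption, not a figure from the case study.

```python
# Back-of-the-envelope estimate of annual power-cost savings from consolidation.
# All inputs are illustrative assumptions; substitute your own measurements.

HOURS_PER_YEAR = 24 * 365


def consolidation_savings(old_kw: float, new_kw: float, rate_per_kwh: float) -> dict:
    """Compare old and new power footprints and return the yearly savings."""
    saved_kwh = (old_kw - new_kw) * HOURS_PER_YEAR
    return {
        "power_reduction_kw": old_kw - new_kw,
        "energy_saved_kwh_per_year": saved_kwh,
        "cost_saved_per_year": saved_kwh * rate_per_kwh,
    }


if __name__ == "__main__":
    # Example inspired by the case above: 329 kW of old storage replaced by 69 kW.
    result = consolidation_savings(old_kw=329, new_kw=69, rate_per_kwh=0.10)
    for key, value in result.items():
        print(f"{key}: {value:,.0f}")
```

Your output will not match the case-study savings exactly, because effective electricity rates and how much of the facility overhead you count vary from site to site.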
2. Identify and Kill Zombie Servers
A strange name, at least for some of you, right? Well, these zombies may exist in your data center too. Let us understand what they are and how to get rid of them.
Thanks to fast-moving businesses with constantly shifting operations, staff, and processes, certain assets are overlooked or forgotten now and then. This leads to something called a zombie server: a physical server that is running but has no meaningful communications or visibility and contributes no compute resources; essentially, it consumes electricity but serves no useful purpose. Research shows that 25 percent of physical servers and 30 percent of virtual servers are comatose, or zombies.
Zombie servers are often created because user-requested applications end up getting no use or almost no use (typically defined as under six percent). Other causes include redundant or legacy applications and services that have been replaced. An estimated one in three servers in North America falls into the “undead” category.
Generally, they are not shut down because there’s no paper trail about what they contain or what they’re used for, meaning managers are afraid to hit the kill switch. To deal with this problem properly, everything must be documented appropriately, and monitoring tools must be put in place to offer direct oversight as to what servers or configurations are necessary.
According to a study conducted by the consulting firm Anthesis Group and Jonathan Koomey, a research fellow at Stanford University, there are approximately 3.6 million zombie servers in the United States; worldwide, the total could be as high as 10 million. Theoretically, four gigawatts of power could be saved by killing zombie servers. Based on calculations by TSO Logic, a company with 1000 servers could net savings of $300,000. AOL’s five-year project to purge its sites of zombie servers netted the organization $10 million and in just one year resulted in a 35 percent reduction in its carbon footprint.
IT teams need to take a proactive approach to identifying and stopping the growth of these silent threats; doing so cuts energy expenditure, reduces inefficiencies, and prevents harmful data breaches. In addition to wasting energy and money, unchecked zombie servers are vessels for malicious attack, providing the perfect under-the-radar location for external threats, and unless servers fail, administrators have no reason to check individual servers for abnormal energy use. To reduce the server sprawl associated with zombie servers, multiple low-utilization servers may be combined into single virtual servers. Servers that are entirely unused can sometimes be repurposed. Otherwise, IT should just kill the zombie: pull the plug.
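As a practical starting point, you can mine the utilization data your monitoring system already collects. The sketch below is a hypothetical example in Python: it assumes you can export average CPU utilization and network traffic per server to a CSV file (the column names and thresholds are assumptions, with the six-percent CPU cutoff borrowed from the definition above).

```python
# Flag "zombie candidate" servers from an exported utilization report.
# Assumes a CSV with columns: hostname, avg_cpu_pct, avg_net_kbps
# (the export format and column names are hypothetical -- adapt to your tooling).
import csv

CPU_THRESHOLD_PCT = 6.0     # "almost no use" threshold mentioned above
NET_THRESHOLD_KBPS = 10.0   # assumed cutoff for "no meaningful communications"


def find_zombie_candidates(report_path: str) -> list[dict]:
    candidates = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            cpu = float(row["avg_cpu_pct"])
            net = float(row["avg_net_kbps"])
            if cpu < CPU_THRESHOLD_PCT and net < NET_THRESHOLD_KBPS:
                candidates.append({"hostname": row["hostname"], "cpu": cpu, "net": net})
    return candidates


if __name__ == "__main__":
    for server in find_zombie_candidates("utilization_report.csv"):
        print(f"Investigate {server['hostname']}: "
              f"CPU {server['cpu']:.1f}%, network {server['net']:.1f} kbps")
```

Treat the output as a list of servers to investigate and document, not a kill list; ownership and hidden dependencies should be confirmed before anything is powered off.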
3. Fill Empty Spaces
When servers are removed from a rack due to decommissioning or a functionality upgrade, empty rack units (RU) can be left between devices. Ignoring these empty spaces wastes rack capacity. Most rack servers and networking devices today occupy only 1U or a few units each, so you have to make sure you are using the unused space within existing racks. Think of a situation where you have 10 free rack units in a rack, yet instead of using that space you purchase a new rack along with the additional power, cooling, and floor space it requires.
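A quick inventory pass makes that waste visible before you sign the purchase order. Below is a minimal sketch, assuming you can describe each rack by its height in rack units and the units already occupied; the rack names and numbers are made up for illustration.

```python
# Tally free rack units (RU) across existing racks before buying a new one.
# The rack inventory below is a made-up example; substitute your own data.

RACK_HEIGHT_RU = 42  # a common full-height rack; adjust for your hardware

occupied_ru = {
    "rack-A1": 36,
    "rack-A2": 30,
    "rack-B1": 41,
}

total_free = 0
for name, occupied in occupied_ru.items():
    free = RACK_HEIGHT_RU - occupied
    total_free += free
    print(f"{name}: {free} RU free")

print(f"Total free capacity: {total_free} RU, i.e. room for {total_free} 1U devices")
```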
4. Energy Efficient Hardware
As discussed earlier, it has been shown that in many cases the cost of energy for an older commodity server over a three-year period is higher than the cost of the server itself. This is especially true in older data centers, where the cost of energy for the supporting power and cooling infrastructure is added to the total energy cost. The EPA (United States Environmental Protection Agency) estimates that servers and storage account for 50% of all power usage in a data center.
To help quantify and compare the hardware side of energy usage and efficiency, and to promote the concept, the U.S. Environmental Protection Agency (EPA) instituted an “Energy Star” program for data center IT equipment. The program introduced the first version of its Server specification in 2009 and continues to update and expand the list of covered equipment.
According to the U.S. EPA, “Computer servers that earn the Energy Star will, on average, be 30 percent more energy efficient than standard servers.” However, in some cases the savings for Energy Star rated servers, such as the widely used “1U” volume server (with a single CPU, one hard drive, and one power supply), can be substantially better than that average figure of 30%: such a server can use as much as 80% less energy than a comparably equipped, typical commodity server that is 2–3 years old. As such, an IT hardware refresh can significantly reduce the total energy required in the data center and, in fact, may offer a very short ROI time frame.
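A simple payback calculation makes this argument concrete for the people holding the budget. The sketch below is illustrative only: the power draws, server price, electricity rate, and PUE are all assumptions to be replaced with your own quotes and measurements, and it models a one-for-one swap rather than the consolidation that usually makes a refresh pay back even faster.

```python
# Illustrative comparison of server energy cost versus hardware cost.
# Every figure here is an assumption; substitute your own measurements and quotes.

HOURS_PER_YEAR = 24 * 365


def energy_cost(power_w: float, years: float, rate_per_kwh: float, pue: float) -> float:
    """Electricity cost over `years`, including facility overhead via PUE."""
    return (power_w / 1000.0) * HOURS_PER_YEAR * years * rate_per_kwh * pue


if __name__ == "__main__":
    rate, pue = 0.10, 1.8          # assumed electricity price and facility PUE
    old_w, new_w = 400.0, 150.0    # assumed draw: old commodity vs. efficient 1U server

    three_year_old = energy_cost(old_w, 3, rate, pue)
    annual_saving = energy_cost(old_w, 1, rate, pue) - energy_cost(new_w, 1, rate, pue)

    print(f"3-year energy cost of keeping the old server: ${three_year_old:,.0f}")
    print(f"Annual saving after a one-for-one refresh:    ${annual_saving:,.0f}")
    print(f"Payback on a $3,000 replacement server:       {3000 / annual_saving:.1f} years")
```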
5. Turn on the CPU’s power-management feature
More than 50 percent of the power required to run a server is used by the central processing unit (CPU). Chip manufacturers are developing more energy-efficient chipsets, and dual- and quad-core technologies are processing higher loads for less power. But there are other options for reducing CPU power consumption.
Several CPUs have a power-management feature that optimizes power consumption by dynamically switching among multiple performance states (frequency and voltage combinations) based on CPU utilization — without having to reset the CPU.
When the CPU is operating at low utilization, the power-management feature minimizes wasted energy by dynamically ratcheting down processor power states (lower voltage and frequency) when peak performance isn’t required. Adaptive power management reduces power consumption without compromising processing capability. If the CPU operates near maximum capacity most of the time, this feature offers little advantage, but it can produce significant savings when CPU utilization is variable. If a data center with 1,000 servers reduced CPU energy consumption by 20 percent, that would translate into annual savings of $175,000.
Many users have purchased servers with this CPU capability but haven’t enabled it. If you have the feature, turn it on. If you don’t, consider it when making future server purchases.
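On Linux hosts that expose CPU frequency scaling through the standard cpufreq sysfs interface, you can quickly check whether a power-aware governor is actually in use. The sketch below only reads and reports; it assumes the usual sysfs layout and simply reports nothing useful if the feature is unavailable or disabled in firmware.

```python
# Report the active CPU frequency-scaling governor per core on a Linux host.
# Assumes the standard cpufreq sysfs interface; BIOS/firmware settings may still
# override or hide OS-level power management.
from pathlib import Path


def report_governors() -> None:
    cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq"))
    if not cpus:
        print("No cpufreq interface found; power management may be disabled in firmware.")
        return
    for cpufreq in cpus:
        governor = (cpufreq / "scaling_governor").read_text().strip()
        available = (cpufreq / "scaling_available_governors").read_text().strip()
        print(f"{cpufreq.parent.name}: governor={governor} (available: {available})")


if __name__ == "__main__":
    report_governors()
```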
6. Use IT equipment with high-efficiency power supplies
After the CPU, the second biggest culprit in power consumption is the power supply unit (PSU), which converts incoming alternating current (AC) power to direct current (DC) and consumes about 25 percent of the server’s power budget doing so. Third are the point-of-load (POL) voltage regulators (VRs) that convert the 12V DC into the various DC voltages required by loads such as processors and chipsets (see Figure 1). Overall server efficiency depends on the efficiency of the internal power supply and voltage regulation. A typical PSU operates at around 80-percent efficiency, and can run as low as 60 or 70 percent. In a standard server, with the PSU operating at 80-percent efficiency and voltage regulators operating at 75-percent efficiency, the server’s overall power-conversion efficiency would be around 60 percent.
Several industry initiatives are improving the efficiency of server components. For example, ENERGY STAR® programs related to enterprise servers and data centers, and 80 PLUS® certified power supplies are increasing the efficiency of IT equipment.
The industry really took note when Google presented a white paper at the Intel Developer Forum in September 2006, indicating it had increased the energy efficiency of its typical server power supplies to at least 90 percent, up from 60 to 70 percent.
The so-called “blade server,” a central chassis with redundant shared power supplies (which improves energy efficiency) that can hold many individual server “blades,” allows packing more computing power into a smaller space. The blade server has also helped drive server consolidation and virtualization projects for many organizations. The initial cost of such an efficient power supply unit is higher, but the energy savings quickly repay it. If the power supply unit operates at 90-percent efficiency and the voltage regulators operate at 85-percent efficiency, the overall power-conversion efficiency of the server would be greater than 75 percent. A data center with 1,000 servers could save $130,000 on its annual energy bill by making this change.
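The arithmetic behind these figures is simple: overall conversion efficiency is roughly the product of the PSU and voltage-regulator efficiencies, and the wasted fraction shows up directly on the electricity bill. The sketch below reproduces the 60 percent and above-75-percent figures quoted above; the fleet size, per-server load, and electricity rate in the savings estimate are assumptions, which is why the result differs from the $130,000 example.

```python
# Overall power-conversion efficiency is roughly PSU efficiency x VR efficiency.
# The fleet size, per-server load, and electricity rate below are assumed figures.

HOURS_PER_YEAR = 24 * 365
SERVERS = 1000
LOAD_W_PER_SERVER = 300      # power actually delivered to CPUs, memory, drives, etc.
RATE_PER_KWH = 0.10


def overall_efficiency(psu: float, vr: float) -> float:
    """Fraction of wall power that reaches the server's internal loads."""
    return psu * vr


def annual_cost(efficiency: float) -> float:
    """Yearly electricity cost to deliver the assumed load at a given efficiency."""
    input_kw = SERVERS * (LOAD_W_PER_SERVER / 1000.0) / efficiency
    return input_kw * HOURS_PER_YEAR * RATE_PER_KWH


baseline = overall_efficiency(psu=0.80, vr=0.75)   # the "standard" server above
improved = overall_efficiency(psu=0.90, vr=0.85)   # high-efficiency PSU and VRs
print(f"Baseline conversion efficiency: {baseline:.0%}")   # ~60%
print(f"Improved conversion efficiency: {improved:.0%}")   # above 75%
print(f"Estimated annual saving for the fleet: "
      f"${annual_cost(baseline) - annual_cost(improved):,.0f}")
```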
7. Adoption of Cloud Computing
Yes, you heard it right. How can adopting the cloud be an energy-efficient move?
The cloud supports many products at a time, so it can distribute resources among many users more efficiently. That means we can do more with less energy, and businesses can too. In 2013, Lawrence Berkeley National Laboratory published research indicating that moving all office workers in the United States to the cloud could reduce the energy used by information technology by up to 87%. Take the case of Google Cloud: Google claims that, specifically for its products, companies switching to Google Apps have reduced office computing costs, energy use, and carbon emissions by 65-90%. Additionally, businesses that use Gmail have decreased the environmental impact of their email service by up to 98% compared to those that run email on local servers. Because of these energy-efficiency efforts, Google argues its cloud is better for the environment, which means businesses that use its cloud-based products are greener too.
SUMMARY
The entire IT industry is now driven to improve energy efficiency. IT hardware manufacturers have recognized not only that they need to improve the inherent energy efficiency of their products, but also that the largest share of energy in the data center facility itself is consumed by the cooling systems. They have worked to make their hardware more robust so that it can operate at higher temperatures, which in turn reduces the cooling system’s energy requirements. To encourage the use of more efficient servers, storage devices, network equipment, and power supplies, an organization must illustrate the clear connection between equipment energy usage and operating cost to the people who make the equipment purchasing decisions. With many types of equipment becoming commodity items, and with small differences in price heavily influencing selection, it is essential that the total cost of ownership of low-efficiency IT equipment be recognized in the selection process. Let us work together for a bright, energy-efficient environment.
Have a comment or points to be reviewed? Let us grow together. Feel free to comment.