Eight Extreme Methods for Reducing Data Center Energy Consumption


In the current economic downturn, data center managers must meet demanding business needs on limited budgets, and they have been doing everything possible to cut operating costs. The fastest-growing data center operating cost is energy, and most of that energy is consumed by servers and cooling systems.

Unfortunately, most of today's efficient energy-saving technologies require considerable up-front investment and take years to pay off. Meanwhile, some low-cost techniques have been neglected because they seem unrealistic or too extreme. The eight energy-saving methods listed below have been tested in real data center environments and proven very effective. Some require almost no investment and can be applied immediately; others require some funding, but their payback period is shorter than that of a typical IT capital expenditure.

Data center energy efficiency is commonly measured by power usage effectiveness (PUE): the lower the value, the better, with 1.0 being the ideal. PUE is the ratio of the total power consumed by the data center to the power delivered to the IT equipment doing useful computing work. A PUE of 2.0 means that for every 2 watts fed into the data center, only 1 watt reaches the servers; the rest is lost, largely as heat, which a traditional data center cooling system must then burn still more power to remove.
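As a quick illustration of the arithmetic, here is a minimal sketch in Python of how PUE is computed from facility and IT power readings. The wattage figures are made up for illustration only.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: a facility drawing 1,200 kW at the meter
# while its servers, storage, and network gear draw 600 kW has a PUE of 2.0,
# i.e. only half of every watt purchased reaches the IT equipment.
print(pue(1200.0, 600.0))  # -> 2.0
```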

The following techniques may not lower your PUE, but you can gauge their effectiveness by watching your monthly utility bill. Saving money is the real goal.

You will not find solar, wind, or hydrogen power among the methods listed here, because those alternative energy sources require large investments in advanced technology and cannot deliver immediate savings in the current economic climate. In contrast, the following eight methods require no technology more complicated than fans, ducts, and piping.

These 8 methods are:

Extreme Energy Saving Method 1: Raise the temperature setting. You can put this simplest of energy-saving methods to work this very afternoon: raise the setting on the data center thermostat. Conventional wisdom says the data center should be kept below 68 degrees Fahrenheit, on the theory that lower temperatures extend equipment life and give administrators more reaction time if the cooling system fails.

Experience does suggest that higher operating temperatures increase component failures, especially hard disk failures. But in recent years the economics of IT have crossed an important threshold: a server's operating costs often exceed its acquisition cost. That makes cutting operating costs a higher priority than pampering the hardware.
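A back-of-the-envelope sketch of that threshold, using purely illustrative figures (a hypothetical 500 W server, $0.10/kWh electricity, a PUE of 2.0, and a four-year service life), shows how easily lifetime energy cost can overtake the purchase price:

```python
# All inputs are illustrative assumptions, not figures from the article.
server_draw_kw = 0.5            # average power draw of one server
electricity_usd_per_kwh = 0.10
pue = 2.0                       # each IT watt costs roughly two watts at the meter
years = 4
hours = years * 365 * 24

lifetime_energy_cost = server_draw_kw * pue * hours * electricity_usd_per_kwh
print(f"Energy over {years} years: ${lifetime_energy_cost:,.0f}")  # ~ $3,504

purchase_price = 3000           # hypothetical acquisition cost
print("Operating energy exceeds purchase price:", lifetime_energy_cost > purchase_price)
```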

At last year's GreenNet conference, Google's "green energy czar" Bill Weihl described Google's experience with raising data center temperature settings. He said 80 degrees Fahrenheit is the new safe setpoint. Your data center must first meet a simple prerequisite, however: separate the cold supply air from the hot exhaust air as thoroughly as possible, using thick plastic curtains or insulation panels if necessary.

Although Google calls 80 degrees Fahrenheit safe, Microsoft's experience suggests the setting can go even higher. Microsoft's data center in Dublin, Ireland runs in a "chiller-less" mode, cooling with free outside air at server inlet temperatures of 95 degrees Fahrenheit. Note that there is a point of diminishing returns: as the setpoint rises, server fans spin faster and their energy consumption climbs.
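The trade-off behind that point of diminishing returns can be sketched with the fan affinity law, under which fan power rises roughly with the cube of fan speed. The coefficients below are assumptions chosen for illustration, not measurements from Google or Microsoft:

```python
# Minimal sketch of the setpoint trade-off. All coefficients are assumed.
BASE_F = 68.0                  # baseline setpoint, deg F
CHILLER_KW_SAVED_PER_F = 6.0   # assumed chiller savings per degree of setpoint increase
BASE_FAN_KW = 30.0             # assumed total server fan power at the baseline setpoint
FAN_SPEED_GAIN_PER_F = 0.03    # assumed 3% fan speed increase per degree

def net_savings_kw(setpoint_f: float) -> float:
    delta = setpoint_f - BASE_F
    chiller_savings = CHILLER_KW_SAVED_PER_F * delta
    # Fan affinity law: power scales roughly with the cube of fan speed.
    fan_penalty = BASE_FAN_KW * ((1 + FAN_SPEED_GAIN_PER_F * delta) ** 3 - 1)
    return chiller_savings - fan_penalty

for sp in (72, 76, 80, 84, 88, 92, 96, 100):
    print(f"{sp} F: net savings {net_savings_kw(sp):6.1f} kW")
```

With these assumed numbers the net savings peak in the mid-80s Fahrenheit and turn negative near 100 degrees, which is the "diminishing returns" effect the paragraph above describes.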

Extreme Energy Saving Method 2: Turn off servers that are not in use. Virtualization has shown us the savings that come from idling unused processors, disks, and memory, so why not shut down entire servers? Is the "business agility" of keeping servers on standby really worth the energy they consume? If you can identify servers that can be shut down, powering them off gets you the lowest possible energy consumption: zero, at least for those servers. First, however, you will have to face the objections of the skeptics.

They often argue that power cycling shortens average server life because of the voltage stress placed on non-hot-swappable components such as motherboard capacitors. This idea has proven wrong: in reality, servers use the same classes of components found in devices that are started frequently, such as cars and medical equipment. There is no evidence that frequent power cycling reduces a server's MTBF (mean time between failures).

The second objection is that servers take too long to boot. Boot time can be cut by disabling diagnostic checks at startup, booting from a disk image, and using the hardware's warm-boot features.

The third objection is that if a server has to be started to absorb increased load, users will not want to wait, no matter how fast the boot is. In practice, however, most application architectures slow down rather than reject new users, so users do not perceive that they are waiting for a server to start. It also turns out that when an application slows under load, users are willing to wait as long as it tells them something like "We are starting more servers to speed up your application."
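Here is a minimal sketch of what powering servers off and on with demand could look like, assuming servers that support Wake-on-LAN and key-based SSH access for a clean shutdown. The hostnames, MAC addresses, and load thresholds are hypothetical:

```python
import socket
import subprocess

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255") -> None:
    """Send a standard Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, 9))

def shutdown_over_ssh(host: str) -> None:
    """Ask the server to shut itself down cleanly (requires key-based SSH and sudo rights)."""
    subprocess.run(["ssh", host, "sudo", "shutdown", "-h", "now"], check=True)

# Hypothetical standby pool and load thresholds.
STANDBY = [("standby-01.example.com", "00:11:22:33:44:55")]

def scale(current_load_pct: float) -> None:
    if current_load_pct > 80:       # bring a standby server online
        wake_on_lan(STANDBY[0][1])
    elif current_load_pct < 30:     # drop that server back to zero watts
        shutdown_over_ssh(STANDBY[0][0])
```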

Extreme Energy Saving Method 3: Cool with free outside air. A higher data center temperature setting prepares you for another energy-saving method, so-called free air cooling. This approach uses cooler outside air as the cooling supply, eliminating the need for expensive chillers; it is what Microsoft's Dublin data center relies on. If you are trying to hold the data center at 80 degrees Fahrenheit and the outside air is only 70 degrees, you can simply blow outside air into the data center to cool it.

Compared with Method 1, this one takes some work. You must rearrange the ventilation ducts so outside air can be drawn into the data center, and you need basic protective equipment such as air filters, dehumidifiers, fire dampers, and temperature sensors to ensure outside air does not damage delicate electronics.
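A minimal sketch of the control decision an air-side economizer makes, assuming outdoor temperature and humidity sensors are available; the thresholds are illustrative, not figures from Intel or Microsoft:

```python
from dataclasses import dataclass

@dataclass
class OutdoorConditions:
    temp_f: float        # dry-bulb temperature, deg F
    humidity_pct: float  # relative humidity

# Illustrative thresholds: use outside air only when it is cooler than the
# supply-air setpoint minus a safety margin and humidity is within a safe band.
SUPPLY_SETPOINT_F = 80.0
MARGIN_F = 5.0
HUMIDITY_RANGE = (20.0, 80.0)

def use_free_cooling(outdoor: OutdoorConditions) -> bool:
    cool_enough = outdoor.temp_f <= SUPPLY_SETPOINT_F - MARGIN_F
    humidity_ok = HUMIDITY_RANGE[0] <= outdoor.humidity_pct <= HUMIDITY_RANGE[1]
    return cool_enough and humidity_ok

# Example: a 70 F, 55% RH day qualifies, so the chiller can stay off.
print(use_free_cooling(OutdoorConditions(temp_f=70.0, humidity_pct=55.0)))  # True
```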

In a trial, Intel reduced energy consumption by 74% using outside-air cooling. Two groups of servers ran for ten months: the first was cooled by a conventional chiller, the second by an outside-air system supplemented by a chiller, with outside air handling the cooling 91% of the time. Intel found heavy dust accumulation on the outside-air-cooled servers, which suggests the system needs fine-particle filters in addition to coarse ones. And because the filters must be replaced frequently, in practice it pays to use filters that are easy to clean and reusable.

Despite the heavy dust and the wide temperature swings, Intel found no increase in the failure rate of the servers cooled with outside air. Scaled up to a 10-megawatt data center, this approach could save roughly $3 million a year in cooling costs, plus 76 million gallons of water; in some regions, water is very expensive.

Extreme Energy Saving Method 4: Heat the office with the data center's exhaust heat. You can double your savings by heating office space with the hot exhaust air from data center cooling; conversely, the relatively cool office air can help cool the data center. In cold weather the exhaust can provide all the heating you need, while any additional cooling air the data center requires can be drawn entirely from outside.

Unlike outside-air cooling, this approach may let you retire your existing heating system; in other words, no more person-high furnace. Nor do you need to worry about harmful emissions from data center electronics as they give off heat: today's servers comply with the Restriction of Hazardous Substances Directive (RoHS) and no longer contain polluting materials such as cadmium, lead, mercury, and polybrominated compounds.

As with outside-air cooling, the only technology required is ordinary heating, ventilation, and air conditioning (HVAC) equipment: fans, ducts, and thermostats. You will likely find that your data center produces enough heat to replace a traditional heating system. IBM's data center in Uitikon, Switzerland provides free heating for local residents, offsetting energy costs equivalent to heating 80 households. TelecityGroup Paris even routes the warm exhaust air from its data center to greenhouses that support climate change research.
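To get a feel for how much usable heat a data center throws off, here is a back-of-the-envelope sketch; essentially all of the electrical power the IT equipment consumes ends up as heat. The IT load, recovery efficiency, heating-season length, and per-household demand below are illustrative assumptions, not IBM's or TelecityGroup's figures:

```python
# Illustrative assumptions only.
it_load_kw = 200.0                 # hypothetical average IT load; nearly all of it becomes heat
recovery_efficiency = 0.6          # assumed fraction of exhaust heat actually captured
heating_season_hours = 180 * 24    # assumed 180-day heating season
household_demand_kwh = 15000       # assumed heating demand of one household per season

recoverable_kwh = it_load_kw * recovery_efficiency * heating_season_hours
households = recoverable_kwh / household_demand_kwh
print(f"Recoverable heat: {recoverable_kwh:,.0f} kWh -> roughly {households:.0f} households")
```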
