Eight energy-saving methods for the data center: greener and more economical


Today's data center managers are struggling to meet the needs of their companies in increasingly competitive markets on the limited budgets of a weak economy. They are looking for ways to reduce the cost of running their operations, and the fastest-growing, largest share of data center operating costs today is the energy consumed by servers and cooling systems.

Some of the most effective energy-saving technologies require a large up-front investment and take years to pay for themselves. But some frequently overlooked techniques cost very little. The reason these techniques are overlooked is that they seem impractical or too radical. The eight energy-saving methods listed below have been tried in real-world data center environments and have proven effective. Some can be applied to production environments and pay back immediately; others require capital investment, but return that investment faster than traditional IT spending.

The standard measure of data center energy efficiency is Power Usage Effectiveness (PUE): the lower the ratio, the better, and 1.0 is the ideal target. PUE compares the total power drawn by the data center with the portion actually delivered to the computing equipment. A common value of 2.0 means that for every 2 watts entering the data center, only 1 watt reaches the servers; the rest is overhead, much of it ultimately turning into heat that the traditional data center cooling system must spend still more energy to remove.
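To make the ratio concrete, here is a minimal Python sketch of the PUE calculation described above; the wattage figures are hypothetical examples, not measurements from any particular facility.

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power Usage Effectiveness: total power drawn by the facility
    divided by the power actually delivered to IT equipment."""
    return total_facility_watts / it_equipment_watts

# Hypothetical readings: 2,000 kW enters the data center but only
# 1,000 kW reaches the servers -> PUE of 2.0, i.e. one watt of
# overhead (mostly cooling) for every watt of useful IT load.
print(pue(2_000_000, 1_000_000))  # 2.0
```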

Viewed as a simple metric, PUE measures only electrical efficiency. It does not account for other resources, such as outside air or hydrogen fuel cells, many of which can be used to reduce overall energy consumption. The techniques described below may or may not improve your measured PUE, but you can judge their effectiveness more easily by watching the monthly electricity bill, and that, in any case, is what business users really care about.

You won't find solar, wind, or hydrogen power in this article. Those alternative energy sources require substantial investment before they become practical, which greatly weakens their cost-saving appeal in the current economic downturn. In contrast, none of the following eight techniques requires anything more complicated than fans and pipes.

These eight technologies are as follows:

1. Increase the temperature of the data center

2. Turn off unused servers

3. Use free cooling with outside air

4. Use data center heat to warm office space

5. Use solid-state drives for highly active, read-only data sets

6. Use DC power in the data center

7. Sink heat into the ground

8. Discharge heat into the sea through pipes

Basic Energy-Saving Method 1: Increase the Temperature of the Data Center

The easiest way to save energy is one you could act on this very afternoon: turn up the data center thermostat. Data center temperatures are typically set to 68 degrees Fahrenheit or lower, the logic being that lower temperatures extend equipment life and give staff more time to respond when the cooling equipment fails.

Experience does show that raising the operating temperature increases hardware failures, particularly hard drive failures. But in recent years the economics of IT have crossed an important tipping point: a server's operating costs now generally exceed its purchase price. That can make it more practical to sacrifice some hardware than to keep paying higher operating costs.

At last year's GreenNet conference, Google's energy director Bill Weihl described Google's experience with raising data center temperatures. Weihl said that 80 degrees Fahrenheit is a safe new setting, provided the data center meets one simple prerequisite: separate the cold airflow from the hot airflow as thoroughly as possible, using curtains or solid barriers where needed.

Although 80 degrees Fahrenheit is a safe increase, Microsoft's experience shows that you can push the temperature even higher. Microsoft's data center in Dublin, Ireland, operates in a "chiller-less" mode, using free outside-air cooling, with server inlet temperatures of around 95 degrees Fahrenheit. Note, however, that as the temperature rises, server fans spin faster and consume more energy, so there are diminishing returns.
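The diminishing returns mentioned above can be sketched with the fan affinity laws, under which fan power grows roughly with the cube of fan speed. The baseline wattage and speed ratios below are purely illustrative assumptions, not figures from Google or Microsoft.

```python
# Illustrative sketch of why raising inlet temperature has diminishing
# returns: chiller load falls, but server fans spin faster, and fan
# power rises roughly with the cube of fan speed (fan affinity laws).

def fan_power(base_watts: float, speed_ratio: float) -> float:
    """Approximate fan power at a given multiple of baseline speed."""
    return base_watts * speed_ratio ** 3

# Hypothetical server with 20 W of fan power at baseline speed.
for speed in (1.0, 1.2, 1.5):
    print(f"fan speed x{speed}: {fan_power(20, speed):.1f} W")
# x1.0: 20.0 W, x1.2: 34.6 W, x1.5: 67.5 W -- the savings from reduced
# cooling can be eaten up by the fans if the temperature is pushed too far.
```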

Basic Energy-Saving Method 2: Turn Off Unused Servers

Virtualization has demonstrated the energy savings of consolidating underused processors, disks, and memory. So why not shut down entire servers? Do idle servers provide "business agility" worth the energy they consume? If you can identify servers that can be powered down, you can cut their power consumption to zero. But first you must address the objections.

The first objection is the widely held belief that power cycling shortens a server's life by stressing components that cannot be swapped in the field, such as those on the motherboard. The counterargument is that real-world servers are built from the same kinds of components used in devices that are power cycled every day, such as cars and medical equipment, and there is no evidence that power cycling servers reduces their MTBF (mean time between failures).

The second objection is that servers take too long to start. However, you can usually speed up boot time by disabling unnecessary boot-time diagnostics, booting directly from a snapshot image of an already running system, or using the warm-start features offered by some hardware.

The third objection is: if we have to start a server to handle a heavy load, users cannot wait, no matter how fast it boots. In practice, however, most application architectures bring new users on gradually enough that users do not realize they are waiting for a server to start. And when an application really does hit its user limit, users who are told "we are starting more servers to respond to the demand" may well be willing to wait.
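As a rough sketch of the idea behind this method, the snippet below powers spare servers on and off through their baseboard management controllers with ipmitool, driven by a cluster-wide utilization threshold. The node names, BMC addresses, credentials, and thresholds are hypothetical placeholders, and any real policy would also need to drain workloads off a node before shutting it down.

```python
import subprocess

# Hypothetical inventory of spare nodes and their BMC addresses.
SPARE_NODES = {"node07": "10.0.0.107", "node08": "10.0.0.108"}
POWER_ON_ABOVE = 0.80   # start a spare node when the cluster is this busy
POWER_OFF_BELOW = 0.40  # stop a spare node when the cluster is this idle

def set_power(bmc_ip: str, state: str) -> None:
    """Use ipmitool to set a node's power state: 'on' or 'soft' (graceful off)."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_ip,
         "-U", "admin", "-P", "secret", "power", state],
        check=True,
    )

def rebalance(utilization: float, powered_on: set) -> None:
    """Toggle at most one spare node per pass, based on overall utilization."""
    if utilization > POWER_ON_ABOVE:
        for node, ip in SPARE_NODES.items():
            if node not in powered_on:
                set_power(ip, "on")
                powered_on.add(node)
                break
    elif utilization < POWER_OFF_BELOW and powered_on:
        node = powered_on.pop()
        # In practice, migrate or drain workloads off the node first.
        set_power(SPARE_NODES[node], "soft")
```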
