Eliminating the drawbacks of virtualization in the data center


Not long ago, software engineers realized that dedicating a server to a single application was wasteful: the server consumed substantial power while using only a small fraction of its processing capacity. Reducing power consumption in the data center is one of the reasons IT departments have embraced server virtualization. Unfortunately, many do not realize that the same consolidation that makes servers more efficient can reduce the efficiency of the data center itself. Why would something as beneficial as server virtualization hurt data center efficiency? This article explains this Pandora's box and offers some suggestions for avoiding the problems.

A server contains power supplies, memory, hard drives, fans, boards, processors, and other components that consume power in order to produce useful computing work. All of these components begin drawing power the moment the server is switched on, even when no computing tasks are running. This is known as fixed loss. A server's fixed loss generally accounts for 33% to 50% of its maximum power consumption. As processor utilization increases, total power consumption rises, but the fixed loss does not change. Unfortunately, a server without virtualization typically runs a single application at an average processor utilization of only about 5% to 10%. This means its energy cost per unit of useful work is far higher than that of a host server running ten applications/operating systems (that is, virtual machines).
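The fixed-loss argument can be sketched with a simple model: power draw is a fixed loss plus a component proportional to utilization, and useful work scales with utilization. The numbers below are illustrative assumptions, not measurements from any particular server.

```python
def server_power_kw(max_kw, fixed_fraction, utilization):
    """Power draw: fixed loss plus a utilization-proportional component."""
    fixed = max_kw * fixed_fraction
    return fixed + (max_kw - fixed) * utilization

def kwh_per_unit_work(max_kw, fixed_fraction, utilization):
    """Energy consumed per unit of useful computing (lower is better)."""
    return server_power_kw(max_kw, fixed_fraction, utilization) / utilization

# One dedicated server at 8% average utilization...
dedicated = kwh_per_unit_work(max_kw=0.5, fixed_fraction=0.4, utilization=0.08)
# ...versus a virtualization host running ~10 VMs at 80% utilization.
host = kwh_per_unit_work(max_kw=0.5, fixed_fraction=0.4, utilization=0.80)

print(f"dedicated: {dedicated:.2f} kWh per unit of work")
print(f"host:      {host:.2f} kWh per unit of work")
print(f"ratio:     {dedicated / host:.1f}x")
```

With these assumed figures, the lightly loaded dedicated server burns roughly five times more energy per unit of work than the consolidated host, which is the whole case for virtualization at the server level.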

The same thing happens at the data center level. On its first day of operation, a data center is built larger than current demand in order to handle the maximum future IT load. Until that maximum load arrives, the data center runs a lighter IT load while its uninterruptible power supplies, transformers, chillers, fans, and air conditioners consume a fixed amount of power. The important point is that these fixed data center losses account for a large portion of the total electricity bill. IT administrators recognize that servers run more efficiently when more virtual machines are placed on a single host: by driving processor utilization higher, they raise the efficiency of each server. This is an excellent way to make the most of IT resources. What administrators often do not realize is that it also reduces the energy efficiency of the data center. In other words, after virtualization, most of the electricity bill is tied to the physical infrastructure. Data centers typically run at low capacity (about 50% to 60% of design load) even before virtualization is implemented; after consolidation allows some physical servers to be unplugged, the balance between fixed infrastructure losses and IT load becomes even worse.
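The effect on infrastructure efficiency can be illustrated with PUE (Power Usage Effectiveness, total facility power divided by IT power): because much of the UPS and cooling overhead is fixed, shrinking the IT load spreads that overhead over fewer IT kilowatts. All figures below are illustrative assumptions.

```python
def pue(it_load_kw, infra_fixed_kw, infra_variable_fraction):
    """PUE = (IT load + infrastructure overhead) / IT load."""
    infra_kw = infra_fixed_kw + it_load_kw * infra_variable_fraction
    return (it_load_kw + infra_kw) / it_load_kw

# Before consolidation: 100 kW of IT load.
before = pue(it_load_kw=100.0, infra_fixed_kw=60.0, infra_variable_fraction=0.3)
# After virtualization unplugs most physical servers: 40 kW of IT load,
# but the fixed infrastructure overhead stays the same.
after = pue(it_load_kw=40.0, infra_fixed_kw=60.0, infra_variable_fraction=0.3)

print(f"PUE before consolidation: {before:.2f}")
print(f"PUE after consolidation:  {after:.2f}")
```

The servers got more efficient, yet the facility's PUE worsened, which is exactly the paradox the article describes.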

How can IT administrators improve the efficiency of their data center infrastructure in a virtualized environment? The two most effective improvements are right-sizing the power system and moving cooling close to the heat source.

Right-sizing involves replacing an old monolithic uninterruptible power supply with a scalable, high-efficiency one. A scalable UPS always runs efficiently because only enough power modules are installed to support the current IT load. For example, a data center with a 61 kW IT load would be supported by seven 10 kW power modules housed in a modular UPS frame. If the IT load grows to 75 kW, an additional 10 kW module can be added without shutting the UPS down. Switching to a scalable, high-efficiency UPS can reduce power-related operating costs by 50% to 70%.
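The module-count arithmetic from the example above can be written down directly; the optional N+1 spare anticipates the redundancy discussion later in the article. This is a sizing sketch, not any vendor's configuration tool.

```python
import math

def modules_needed(load_kw, module_kw=10.0, n_plus_1=False):
    """Number of UPS power modules required for a given IT load.

    Installs just enough modules to cover the load; n_plus_1 adds one
    spare module for redundancy.
    """
    n = math.ceil(load_kw / module_kw)
    return n + 1 if n_plus_1 else n

print(modules_needed(61))                  # 7 modules for the 61 kW example
print(modules_needed(75))                  # growth to 75 kW adds an 8th module
print(modules_needed(61, n_plus_1=True))   # 8 modules with an N+1 spare
```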
Moving cooling close to the heat source involves using modular in-row air conditioners to shut down or replace some or all of the traditional room-based air conditioners. Conventional cooling equipment requires oversized fan motors to overcome the long distance cold air must travel before reaching the IT equipment. Obstructions under the raised floor are common, and properly cooling the servers in a high-density rack is very difficult. In-row cooling units, by contrast, are installed in the same row as the server racks. Their fan motors can be much smaller because they only need to move air to IT equipment a few feet away. Fans in conventional cooling equipment typically run at a fixed speed regardless of the IT load; that is like driving with the engine pinned at 6,000 rpm no matter how fast you are actually going, never mind the fuel bill. Modular in-row air conditioners are equipped with variable-speed fans that match the IT load, providing another way to right-size the data center and reduce the electricity bill. By sitting close to the heat source and using variable-speed fan motors, an in-row cooling system can reduce fan motor power consumption by approximately 65%.
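The scale of the variable-speed saving follows from the fan affinity laws: airflow scales linearly with fan speed, but fan power scales with the cube of speed, so a modest speed reduction yields a large power reduction. The 70% speed figure below is an illustrative assumption.

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity law: fan power scales with the cube of fan speed."""
    return speed_fraction ** 3

full = fan_power_fraction(1.0)     # fixed-speed fan: always at 100% power
reduced = fan_power_fraction(0.7)  # variable-speed fan at 70% airflow

print(f"power at 70% speed: {reduced:.0%} of full power")
print(f"saving:             {1 - reduced:.0%}")
```

Running at 70% speed draws only about a third of full fan power, a saving in the same neighborhood as the approximately 65% figure quoted above.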

Modular, scalable infrastructure grows along with the IT load. A data center that uses this infrastructure can save more than 60% of its total electricity bill (beyond the savings from virtualization itself). In addition to saving energy, modular uninterruptible power systems and in-row cooling equipment offer two further benefits: targeted availability and improved capacity and change management.

Running multiple virtual machines on a single host makes that host far more important, because a single power or thermal event can take down an entire virtualized rack (if no backup strategy is in place). With modular power and cooling, an individual rack can be targeted with N+1 uninterruptible power modules or N+1 in-row cooling units to achieve a specific level of availability. This targeted availability avoids paying for unnecessarily high power and cooling redundancy on racks and equipment that do not need it.
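Why one spare module matters can be quantified: with N modules all required, a single failure drops the load, while N+1 requires two simultaneous failures. The per-module availability below is a made-up assumption for demonstration.

```python
from math import comb

def availability(n_required, n_installed, p_module_up):
    """Probability that at least n_required of n_installed modules are up
    (modules assumed to fail independently)."""
    p_down = 1 - p_module_up
    return sum(
        comb(n_installed, k) * p_module_up**k * p_down**(n_installed - k)
        for k in range(n_required, n_installed + 1)
    )

p = 0.999  # assumed availability of a single power module
n_only = availability(7, 7, p)    # N configuration: no spare
n_plus_1 = availability(7, 8, p)  # N+1 configuration: one spare module

print(f"N:   {n_only:.6f}")
print(f"N+1: {n_plus_1:.6f}")
```

Under these assumptions the single spare module cuts the probability of losing the rack's power by more than two orders of magnitude.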

The last benefit of modular uninterruptible power systems and in-row cooling equipment is a significant improvement in capacity and change management. Think about the last time you had to install a new server and figure out which rack it should go in. The questions running through your mind were: "Which rack has enough power and cooling capacity for a new server?", "Will adding another server to this rack break its cooling redundancy?", and "Will an extra server overheat this rack?" In an old-style data center these questions were unanswerable, because underfloor cooling is unpredictable and there was no equipment in place to measure power.

A modular power system dedicated to a specific group of racks can report its power capacity status to a centralized management system in real time. Likewise, in-row cooling equipment is predictable by design, thanks to its short air paths and built-in temperature monitoring. When real-time information from these systems is fed into a centralized capacity and change management application, the guesswork and wasted time of installing a new server are eliminated. Predictable power and cooling infrastructure also allows a capacity management system to perform "what-if" analysis, so the impact of a change is understood before anything is done.
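A minimal sketch of the "what-if" check such a tool performs before a server install: does any rack have headroom in both power and cooling? The rack names, capacities, and per-server draw below are hypothetical examples, not any product's API or data.

```python
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    power_capacity_kw: float
    power_used_kw: float
    cooling_capacity_kw: float
    cooling_used_kw: float

def can_host(rack, server_kw):
    """True only if the rack has headroom in both power and cooling."""
    return (rack.power_used_kw + server_kw <= rack.power_capacity_kw
            and rack.cooling_used_kw + server_kw <= rack.cooling_capacity_kw)

racks = [
    Rack("A1", 12.0, 11.5, 12.0, 10.0),  # power-constrained
    Rack("A2", 12.0, 8.0, 12.0, 11.8),   # cooling-constrained
    Rack("B1", 12.0, 7.0, 12.0, 7.5),    # headroom in both
]

new_server_kw = 0.8
candidates = [r.name for r in racks if can_host(r, new_server_kw)]
print(candidates)  # only racks that pass both checks
```

With live measurements standing in for the hard-coded numbers, this is the check that replaces the guesswork described above.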

For all the benefits virtualization brings (including a lower electricity bill), it also presents challenges that force IT staff to understand their power and cooling systems well, because host servers demand higher availability. That work is greatly simplified by scalable, modular power and cooling systems, which at the same time help drive down the data center's overall electricity bill.

Copyright © Windows knowledge All Rights Reserved