Too much of a good thing: how to avoid excessive server consolidation

If you have implemented server virtualization, you already know the importance of consolidation; it is the central consideration in a virtual data center. Simply put, consolidation makes better use of the available computing resources by running more virtual machines on the same physical server. But even the most powerful, most virtualization-ready server has its limits, and over-consolidation is not a good thing. Administrators of virtual environments must consider the impact of over-consolidating the data center.

Consolidation has become so ubiquitous that we almost forget its essential purpose: consolidation generates economic benefit, that is, it saves money. In a traditional non-virtualized environment, a server usually runs only one application and rarely uses more than 10% of its computing resources, yet each new service or application brings its own server hardware, network, power, cooling, and maintenance costs.

Virtualization packs multiple workloads onto a single physical server, allowing administrators to use more of the available CPU, memory, and I/O resources with fewer physical servers, so power and cooling costs also decline.

In addition, workloads can be moved between physical servers using live migration, which enables real-time workload balancing and minimizes downtime for hardware maintenance or repairs. The Windows Server 2008/R2 Datacenter Edition licensing program also makes hosting many virtual machines on the same server more cost effective. Consolidation can greatly increase computing efficiency and save companies money.

Server over-consolidation

There is no question that server consolidation is a good thing, but over-consolidation is not only unhelpful but harmful, bad for both the data center and its users. Yet in reality many organizations over-consolidate without realizing what they are doing. The problem is that virtualization is too easy.

In the past, running a new application meant expensive server and labor costs, and the purchase had to be reviewed and approved by the finance department, a process that could last weeks or even months. Virtualization has changed all that. Now it takes only a few minutes to create a virtual machine on a physical server; no new hardware has to be bought or installed, and the only costs are the operating system and application license fees. IT decisions and responses are faster, and allocating computing resources on demand has become routine.

Some organizations routinely overload servers, aiming to squeeze out 100% of a server's computing resources. Todd Erickson, president of Technology Navigator, put the common question this way: "If I purchase a four-socket server and a Windows Server Datacenter Edition license, how many virtual servers can I create on it?" Obviously, there is no single answer to that question.

Over-consolidation first affects performance and stability. Virtual machines fight over limited computing resources and applications bog down. Backup, disaster recovery, and other data protection tools are resource-intensive heavyweights and major participants in that competition. A few overstretched applications do little damage, but with too many of them, virtual machines can crash and even the entire server can go down. Most administrators understand this, because it is closely tied to business revenue, customer satisfaction, and data security.

A failure on a host that carries a large number of virtual machines affects all of them. Those virtual machines must be restarted, either on the original server or on other servers in the data center, and the recovery process puts tremendous pressure on the entire virtual environment.

Over-consolidation also undermines live migration. While most administrators do not allow fully automatic migration, the benefit of moving virtualized workloads on demand is unquestionable. However, it is almost impossible to transfer workloads when server load is already at its limit. Imagine what happens when a server fails: you cannot start the affected virtual machines on other servers because no computing resources are available there, so you can only restart them after the failed server is repaired.

Many experts therefore recommend moderation in server consolidation. In general, server resource utilization can reach 60-70%; the right percentage depends on your business, but the goal is to keep a certain share of computing resources idle, because restarting virtual machines is resource-intensive. The reserved resources can also be used to migrate virtual machines between servers, balance workloads, or support maintenance.
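As a rough illustration of that headroom rule, here is a minimal sketch in Python. The 65% target, the host name, and the capacity figures are illustrative assumptions, not measurements or vendor guidance.

```python
# Minimal sketch: check whether a host stays under a consolidation target
# so that headroom remains for restarts and live migration.
TARGET_UTILIZATION = 0.65  # assumed midpoint of the 60-70% rule of thumb

def headroom_report(host, total_ghz, used_ghz, total_gb, used_gb):
    """Print CPU and memory utilization against the consolidation target."""
    for label, used, total in (("CPU", used_ghz, total_ghz),
                               ("memory", used_gb, total_gb)):
        util = used / total
        status = "OK" if util <= TARGET_UTILIZATION else "over target"
        print(f"{host} {label}: {util:.0%} used, "
              f"{total - used:.1f} units free ({status})")

# Hypothetical four-socket host with 48GB of RAM
headroom_report("esx-host-01", total_ghz=64.0, used_ghz=47.5,
                total_gb=48.0, used_gb=30.0)
```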

Preventing Server Over-Consolidation

The easiest way to prevent server over-consolidation is to implement the right IT best practices from the start. Erickson points to the danger of over-committing server computing resources: thin-provisioning features in platforms such as vSphere and XenServer support memory overcommitment, allowing administrators to allocate more memory to virtual machines than the physical server actually has.

Erickson said: "No one has adopted thin provisioning as a best practice. If you are relying on thin provisioning, you have probably reached the limit of consolidation. The biggest problem is that it will affect the performance or stability of the virtual machines."

Many companies like to pack physical servers full of virtual machines to maximize resource use. For example, a server with 48GB of physical memory might allocate 52GB of memory to its virtual machines, overcommitting by 4GB. Although that range may be acceptable, the server is already over-consolidated, and the risk grows over time.
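The arithmetic behind that example is simple enough to sketch. This Python fragment uses the article's 48GB/52GB figures; the individual VM sizes and the 10% warning tolerance are hypothetical.

```python
# Minimal sketch of the memory-overcommitment arithmetic described above.
physical_gb = 48.0
vm_allocations_gb = [8.0, 8.0, 12.0, 12.0, 12.0]  # hypothetical VM sizes, 52GB total

allocated_gb = sum(vm_allocations_gb)
ratio = allocated_gb / physical_gb
print(f"Allocated {allocated_gb:.0f}GB against {physical_gb:.0f}GB physical "
      f"memory: overcommitted by {allocated_gb - physical_gb:.0f}GB "
      f"(ratio {ratio:.2f}x)")

if ratio > 1.10:  # assumed tolerance; tune to your own environment
    print("Warning: overcommitment beyond 10% -- performance risk grows over time")
```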

Appropriate management tools can help identify over-consolidated servers, allowing administrators to resolve problems before they get worse. IT departments should not wait until resources are exhausted.

Scott Roberts, director of information technology for South Windsor, Connecticut, said: "You should open the management console every day to see how resources are being used. Don't wait until user complaints land in front of you."

The information provided by the management console also supports other important tasks such as workload balancing and capacity planning. Workload balancing analyzes the distribution of virtual machines and the resources they need, then generates recommendations, and it can sometimes uncover sloppy or inefficient workload placements. Capacity planning assesses changes in resource usage over time to ensure resources will be available for future needs.
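As a sketch of the kind of analysis such a console performs, the following Python fragment compares per-host load to the cluster average and flags rebalancing candidates. The host names, readings, and 15-point tolerance are hypothetical sample data.

```python
# Minimal sketch: flag hosts whose CPU load deviates from the cluster average.
from statistics import mean

host_cpu_util = {        # sampled fraction of CPU in use per host (hypothetical)
    "esx-host-01": 0.82,
    "esx-host-02": 0.41,
    "esx-host-03": 0.66,
}

avg = mean(host_cpu_util.values())
print(f"Cluster average CPU utilization: {avg:.0%}")
for host, util in sorted(host_cpu_util.items(), key=lambda kv: -kv[1]):
    if util > avg + 0.15:    # assumed imbalance tolerance
        print(f"{host}: {util:.0%} -- consider migrating VMs off this host")
    elif util < avg - 0.15:
        print(f"{host}: {util:.0%} -- candidate target for migrations")
    else:
        print(f"{host}: {util:.0%} -- balanced")
```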

Assessing expenses, preventing abuse

Another way to limit virtual machine sprawl is to adopt or re-evaluate the organization's chargeback strategy. Chargeback is a thorny problem in many organizations, and consolidating many virtual machines onto a small number of servers only complicates it. Organizations can distribute costs to the departments that use the computing resources; only when departments pay for what they use do they appreciate its value and stop abusing computing resources.
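A minimal sketch of such a chargeback split, assuming cost is apportioned by reserved vCPUs; the monthly cost figure and the departmental reservations are hypothetical.

```python
# Minimal sketch: split one host's monthly cost across departments in
# proportion to the vCPUs their virtual machines reserve.
monthly_host_cost = 2400.00  # assumed: amortized hardware + power + cooling

dept_vcpus = {"finance": 8, "engineering": 24, "marketing": 4}
total_vcpus = sum(dept_vcpus.values())

for dept, vcpus in dept_vcpus.items():
    share = vcpus / total_vcpus
    print(f"{dept}: {vcpus} vCPUs reserved -> ${monthly_host_cost * share:,.2f}/month")
```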

Implementing virtual machine lifecycle management also helps prevent the resource waste caused by virtual machine sprawl. VMware Lifecycle Manager, for example, can identify virtual machines that may no longer be needed; removing them frees resources for other virtual machines. Beyond saving memory and CPU, removing unnecessary virtual machines saves storage space, eliminates redundant backup requirements, and may delay the purchase of new servers, cutting the organization's IT expenses.
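Products like VMware Lifecycle Manager do this against real inventory data; the Python fragment below only sketches the underlying idea of flagging long-idle virtual machines, with hypothetical thresholds and sample statistics.

```python
# Minimal sketch: flag VMs whose average CPU use has stayed negligible
# long enough to make them candidates for review and removal.
IDLE_CPU_THRESHOLD = 0.02   # assumed: under 2% average CPU counts as idle
IDLE_DAYS_REQUIRED = 90     # assumed observation window

vm_stats = [                # (name, average CPU fraction, days observed) -- hypothetical
    ("test-web-old", 0.01, 120),
    ("erp-prod",     0.55, 120),
    ("demo-vm",      0.00, 200),
]

for name, avg_cpu, days in vm_stats:
    if avg_cpu < IDLE_CPU_THRESHOLD and days >= IDLE_DAYS_REQUIRED:
        print(f"{name}: ~{avg_cpu:.0%} CPU for {days} days -- review for removal")
```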

The last thing to consider is the role of new hardware in virtual server consolidation. A regular hardware replacement cycle is healthy, but it also makes it easy to fall into the over-consolidation trap. When planning hardware updates, also consider the best time to upgrade the network, for example to 10GbE or FCoE, because highly consolidated virtual servers need more network bandwidth behind them.

Extended Reading 1: When should we avoid consolidation?

Virtualization platforms have made great progress in the past few years, and virtualization products from VMware, Microsoft, and Citrix can support almost any type of workload. In general, all modern applications can run in virtual machines, but IT administrators should still plan carefully before moving workloads to a virtual platform.

Old applications may have problems moving to virtual platforms, especially those that are custom-built or depend on special hardware, because virtualization inserts an abstraction layer between the application and the underlying hardware. Applications that require access to special hardware can fail or suffer unacceptable performance problems.

One solution is to rewrite the application as a hardware-independent version in a newer programming language, but this is costly and time consuming. A second option is to replace the custom application with a commercial product that can be customized, but the time and cost required are also high, and may be similar to those of modifying the existing custom application.

In practice, the easiest approach is to keep these applications running on non-virtualized physical servers. Resource-hungry applications such as SQL Server or Exchange Server can be placed in a virtual machine, but performance problems arise if other virtual machines are deployed on the same physical server. Resource-intensive applications should therefore be consolidated only minimally.

Testing is an important part of the consolidation process. A dedicated lab environment, separate from production, should be set up to verify that an application is suitable for deployment into the virtual environment and to determine its resource needs, performance, and interoperability in a virtual environment.

Extended Reading 2: Tracking Consolidation and Performance

Regardless of how you consolidate your servers, you should use benchmarking or other tools to produce baseline reports of performance and computing resource levels, captured while application performance and the user experience are known to be good.

Review the baseline reports whenever an alert or user complaint comes in; the root cause of a problem can often be pinpointed quickly from differences in the benchmark counters. By watching changes in resource usage over time, administrators can also decide whether to upgrade or buy new servers, rebalance workloads, or make other capacity planning decisions.
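A minimal sketch of that baseline comparison in Python; the counter names, values, and 50% deviation threshold are hypothetical sample data, not output from any particular tool.

```python
# Minimal sketch: diff current counters against a saved baseline and flag
# the ones that moved the most -- the usual starting point for root cause.
baseline = {"cpu_ready_ms": 40, "mem_ballooned_mb": 0, "disk_latency_ms": 6}
current  = {"cpu_ready_ms": 310, "mem_ballooned_mb": 512, "disk_latency_ms": 7}

DEVIATION_THRESHOLD = 0.5   # assumed: flag counters that grew more than 50%

for counter, base in baseline.items():
    now = current[counter]
    if base == 0:
        flagged = now > 0
        change = "new activity" if flagged else "unchanged"
    else:
        growth = (now - base) / base
        flagged = growth > DEVIATION_THRESHOLD
        change = f"{growth:+.0%}"
    marker = " <-- investigate" if flagged else ""
    print(f"{counter}: baseline {base}, now {now} ({change}){marker}")
```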

In short, the more virtual machines hosted on a server, the more applications are affected by any adjustment to that server. Fortunately, the three major virtualization platforms all ship with benchmarking and reporting tools, and many third-party tools are available as well, such as Novell's PlateSpin Recon and VKernel Capacity Analyzer.
