Avoid Running Out of Servers: A Look at Virtualized Capacity Planning

One of the key benefits of virtualization is that it can greatly increase the utilization of server resources. But this high utilization also has a downside: unmonitored workload migration, uncontrolled virtual machine sprawl, and unexpected spikes in resource requirements can drain even the most powerful server and leave workloads starved for resources.

IT administrators in virtual data centers need to implement and follow a disciplined capacity planning process to ensure that critical resources are available to the workloads that need them.

Capacity Planning and Server Performance

First, understand the peak requirements of each application; these can be determined by monitoring resource usage over time. Second, keep close track of the virtual workloads running on each server so that their total resource requirements (such as CPU cycles or memory) do not exceed the available resources. Third, continue to monitor resource utilization, watching for changes that may require allocating more resources or rebalancing workloads across servers.
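As a minimal sketch of the second step, the check below sums hypothetical per-VM peak demands against a host's capacity. All names and numbers here are illustrative assumptions, not output from any real monitoring tool:

```python
# Hypothetical per-VM peak demands gathered from monitoring (illustrative values).
vm_peaks = {
    "web-01":  {"cpu_ghz": 2.0, "ram_gb": 8},
    "db-01":   {"cpu_ghz": 3.5, "ram_gb": 16},
    "mail-01": {"cpu_ghz": 1.0, "ram_gb": 4},
}

# Assumed capacity of the host these VMs share.
host = {"cpu_ghz": 8.0, "ram_gb": 32}

def host_overcommitted(vms, host):
    """Return the resources whose summed peak demand exceeds host capacity."""
    totals = {}
    for demand in vms.values():
        for resource, amount in demand.items():
            totals[resource] = totals.get(resource, 0) + amount
    return [r for r, total in totals.items() if total > host[r]]

print(host_overcommitted(vm_peaks, host))  # prints [] while total demand fits
```

Running such a check whenever a workload is added or migrated catches overcommitment before users feel it.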

Failing to monitor server resources and workload allocation can result in resource shortages. In most cases, a shortage starves one or more virtual machines and causes performance issues such as sluggish processing, long login times, and slow storage access. Unless administrators diligently monitor resource utilization, users are generally the first to notice and complain about application performance problems. When large numbers of users complain, the server is likely overloaded by a specific application, especially when multiple applications share the same server.

Strictly speaking, capacity management is not simply a matter of ensuring that there is enough capacity for the business. That part is easy: anyone can guarantee enough capacity by over-purchasing or over-provisioning. The real goals are efficiency and predictability.

The aim is to find an optimized balance of IT supply that ensures business needs are met at all times while minimizing waste and risk, thereby saving costs. Effective capacity management must therefore guarantee two things:

1. Efficiency (optimized capacity): use the available capacity at each point in time without affecting the business.

2. Predictability (available capacity): whenever the business needs capacity, it is guaranteed to be available and always online.
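One way to make the two goals concrete is a simple utilization check. The 70% utilization target and 15% reserve below are illustrative assumptions, not vendor guidance:

```python
def capacity_status(used, total, target_util=0.70, reserve=0.15):
    """Classify a resource pool against the two capacity-management goals.

    target_util and reserve are illustrative thresholds: below target_util
    the pool is under-used (efficiency suffers); once free capacity falls
    under the reserve, predictability is at risk."""
    utilization = used / total
    if utilization < target_util:
        return "inefficient: capacity sitting idle"
    if (total - used) / total < reserve:
        return "at risk: too little reserve for demand spikes"
    return "balanced"

print(capacity_status(used=50, total=100))  # inefficient: capacity sitting idle
print(capacity_status(used=95, total=100))  # at risk: too little reserve for demand spikes
print(capacity_status(used=80, total=100))  # balanced
```

In practice the thresholds would be tuned per resource and per workload criticality.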

Why is capacity management important?

Whether your data center environment is physical, virtual, or hybrid, capacity management is an increasingly important function in any IT organization. Many companies are looking to implement a formalized capacity management model for three main reasons:

1. Cost savings

Getting budget approval is difficult, and the procurement cycle for new hardware or infrastructure is long and tedious. In the past, once the IT department got the budget, it over-purchased or pre-purchased hardware to avoid these administrative headaches.

That hardware often went unused for a month, a year, or longer after purchase, sitting idle at a cost. With the right capacity management tools and processes, however, you can make the right purchases and ensure that any new hardware is deployed and used immediately.

2. Service Availability

IT departments need to provide consistent, high-quality services to their business users. This is difficult when capacity requirements are constantly changing and fluctuating. Without proper capacity management, IT risks reduced service availability and customer satisfaction. That is very expensive and may even threaten the survival of the business, especially for mission-critical, external-facing applications.

3. Business Planning

Like business units, IT departments need both short-term and long-term plans. Creating such plans requires understanding historical capacity usage and predicting future capacity needs. Unless this is done systematically, you will lack both historical perspective and accurate predictions of future needs, especially in a dynamic virtual environment.

If capacity management is done poorly or not at all, supply and demand fall out of balance, resulting in wasted or insufficient resources. Wasted resources, whether purchased too early or sized too large, are expensive. Insufficient resources are worse, because the shortfall affects business operations in ways users can feel.

Handling Capacity Planning on a Physical Environment

In a physical environment, capacity management is straightforward. In the past, capacity plans were driven by the needs of a single business unit. In the one-application-per-server model, the owner of a service knew exactly what capacity it had. This was clear, easy to trace, and siloed: the server and its full capacity were owned by a single user or application.

Unfortunately, these resource silos led to a basic dilemma: efficiency comes at the expense of predictability, and vice versa. In a physical environment, efficiency is usually achieved with short-term planning. If you want to be very efficient, you provision IT capacity to match the expected peak. But when demand unexpectedly exceeds that peak, you are at risk.

Predictability, by contrast, comes with long-term planning. If you want to mitigate every risk by over-provisioning, you end up with unnecessary waste; that "excess" capacity is the headroom you can grow into. Unfortunately, a physical environment usually has to be optimized for one goal or the other: predictability or efficiency. If an environment is very efficient, it lacks the extra capacity needed for full predictability. Adding extra capacity (the common reaction) preserves predictability, but leads to inefficiency and waste.
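The peak-versus-waste tension is easy to quantify with a short sketch; the hourly demand figures here are hypothetical:

```python
# Hypothetical hourly CPU demand (cores) across a business day.
demand = [10, 12, 11, 15, 30, 42, 38, 25, 18, 14, 12, 10]

peak = max(demand)                   # capacity needed for full predictability
average = sum(demand) / len(demand)  # what the workload actually uses on average

# Sizing to the peak guarantees predictability but idles capacity most of the day.
idle_fraction = 1 - average / peak
print(f"peak={peak} cores, average={average:.2f} cores, "
      f"idle share when sized to peak: {idle_fraction:.0%}")
```

Even this toy series shows the dilemma: provisioning for the peak leaves roughly half the capacity idle on average, while provisioning for the average fails every busy hour.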

Handling Capacity Planning on Virtual Machines

The trick to capacity planning is to understand the resources you have, where those resources are, what each workload requires, and how those requirements change. Tools like Iometer can be used to check I/O performance and network behavior, but one of the most common tools for Windows capacity planning is Microsoft's Assessment and Planning Toolkit, which supports both physical and virtual workloads. An administrator can run this tool to inventory server resources and track how each workload's resource demands change over time.

"Look at what this tool will suggest for virtualization hosts and what kinds of resources are available to them; it really allows you to run multiple scenarios," said Scott Gorcester, president of solution provider MooseLogic. Gorcester added that the Microsoft tool's results are quite accurate even on VMware and Cisco virtualization platforms.

Although capacity planning often relies on short-term data, the real benefit of planning is ensuring that servers can meet workload demand growth over time. There is no simple formula to tell an administrator how far ahead to plan, but common sense should guide the decision. How far into the future you plan should be determined first by the amount and nature of the workloads you run. For example, an environment with mostly static workloads that are not expected to change may need very little planning. Conversely, a company that is rapidly adding new workloads or users should probably limit its planning horizon to a few months, since projections beyond that lose their usefulness as they become inaccurate.
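One common-sense aid for choosing a horizon is to extrapolate historical growth and see how soon current capacity runs out. The figures and the linear model below are illustrative assumptions:

```python
# Hypothetical monthly peak memory usage (GB) from six months of monitoring.
history = [210, 225, 238, 255, 270, 284]

def months_until_exhausted(history, capacity_gb):
    """Linearly extrapolate historical growth to estimate remaining runway.

    Assumes roughly steady month-over-month growth; in fast-changing
    environments this estimate loses accuracy quickly."""
    growth = (history[-1] - history[0]) / (len(history) - 1)  # GB per month
    if growth <= 0:
        return None  # flat or shrinking usage: capacity is not the constraint
    return (capacity_gb - history[-1]) / growth

print(months_until_exhausted(history, capacity_gb=400))
```

If the runway estimate is shorter than your procurement cycle, the purchase needs to start now; if it is years out, a shorter planning horizon suffices.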

Gorcester suggested building in additional capacity from the start, since this is often more economical than upgrading servers later. The idea is that virtualization eases IT tasks such as maintenance and machine provisioning, and those savings nearly offset the extra cost of an "oversized" server. "If you build a little more, it provides more stability, more availability, and some reserve capacity that sits there waiting for busy periods," he said. "You don't need to add much cost to get the best performance and the ability to easily add workloads or servers."

Don't forget to include business plans and consider the impact of the technology refresh cycle on capacity planning. For example, migrating to virtualization can reduce the number of servers, but supporting additional virtual machines means buying more powerful, more expensive servers. The ability to repurpose old servers has also allowed many companies to keep servers in service longer, slowing the pace of technology refreshes and making it cheaper to buy fewer, more powerful servers.

Capacity Management Methods

There are many ways to manage capacity, but in general they fall into three categories: rules of thumb, internally developed solutions, and specialized tools.

Method 1: Rules of Thumb

A rule of thumb is an estimate based on past experience. For example, if four virtual machines could run on a single core in the past, the same assumption is applied going forward. The drawbacks of this approach in a dynamic environment are obvious: it is inaccurate, and no systematic process can be built around it.
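The appeal and the limits of such a rule are easy to see in miniature (all figures are hypothetical):

```python
# Applying the "four VMs per core" rule of thumb from past experience.
CORES_PER_HOST = 16
VMS_PER_CORE = 4  # historical observation only; not a guarantee

max_vms = CORES_PER_HOST * VMS_PER_CORE
print(max_vms)  # 64 -- but this ignores memory, storage I/O, and per-VM variation
```

The arithmetic takes seconds, which is exactly why rules of thumb persist; the result is a single number that says nothing about memory pressure, storage contention, or how individual workloads differ.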

Method 2: In-house Developed Solutions

Internally developed solutions include scripts and spreadsheets. This is more systematic than a rule of thumb. Scripting can work for large companies with advanced IT skills, but it quickly becomes costly and time-consuming to maintain and may still be inaccurate, especially on rapidly changing infrastructure. In a virtualized environment, the way virtual machines interact with the infrastructure layer is complex, so even considerable expertise makes this hard to get right.

Method 3: Specialized Tools

Specialized tools are the preferred method for virtual environments because they base their projections on capacity information collected and maintained continuously in a constantly changing environment. Perhaps most importantly, tools that integrate tightly with the virtualization layer provide reliable, real-time intelligence.

With the right tools and processes, IT administrators will have automated, real-time capacity intelligence to make day-to-day and strategic capacity management decisions in a virtual environment.

Copyright © Windows knowledge All Rights Reserved