How to build a solid server virtualization architecture?

Few technologies have become a fundamental part of the data center as quickly as server virtualization. That's because its basic selling point is easy to understand: run multiple logical servers on a single physical server and hardware utilization improves dramatically, so you can buy fewer physical servers while handling the same workloads. It sounds like free money.

Of course, the details are more complicated. A hypervisor is a thin software layer on which you deploy virtual servers, and it is usually packaged into a complete software solution that carries some combination of licensing, support, and maintenance costs (exactly which depends on the virtualization software you choose). You will also most likely need to upgrade to server processors that support virtualization.

On the other hand, reducing the number of servers brings indirect savings: less rack and floor space to rent, lower cooling costs, and of course lower power consumption.

Even more tempting is the inherent flexibility of virtualization. As workloads change, you can easily spin virtual servers up or down, scaling elastically and meeting new application demands dynamically.

The road to a virtualized infrastructure has hidden difficulties, however. You need to overcome the initial costs and the disruption to the business, and you cannot afford unrealistic expectations. You also need to know how to proceed with the deployment, minimize risk, and keep performance at an acceptable level.

Reasons for server virtualization

It's easy to convince people to accept server virtualization. After all, who doesn't want to get the most out of server hardware? In fact, the basic idea is so tempting that you should be careful not to oversell it. Make sure you account for the capital equipment, deployment, training, and maintenance costs you may face. As with many other new technologies, the cost savings of virtualization are often overstated.

Most virtualization deployments require new hardware, primarily because the hypervisor needs a newer processor with virtualization support. So the best time to deploy virtualization is when you need to add servers to your existing infrastructure or replace aging hardware.

The efficiency of newer servers helps make the case for virtualization. First, calculate the power consumption and cooling load required by your current infrastructure. (Ideally this should be done on a per-server basis, which is time consuming but yields very accurate numbers.) Then check the same specifications for the hardware you plan to purchase to see how much you stand to save in power and cooling costs.
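To make that comparison concrete, a rough sketch of the per-server arithmetic might look like the following; the wattage, electricity rate, and cooling-overhead figures are illustrative assumptions, not measured values:

```python
# Rough estimate of annual power + cooling cost for a server fleet.
# All figures below are illustrative assumptions; substitute measured values.

HOURS_PER_YEAR = 24 * 365
ELECTRICITY_RATE = 0.12   # assumed $ per kWh
COOLING_OVERHEAD = 0.5    # assumed: cooling adds ~50% on top of the IT load

def annual_cost(server_count: int, avg_watts: float) -> float:
    """Annual power and cooling cost in dollars for a group of servers."""
    it_kwh = server_count * avg_watts * HOURS_PER_YEAR / 1000
    total_kwh = it_kwh * (1 + COOLING_OVERHEAD)
    return total_kwh * ELECTRICITY_RATE

current = annual_cost(server_count=40, avg_watts=350)   # existing fleet
proposed = annual_cost(server_count=8, avg_watts=500)   # fewer, larger hosts
print(f"current:  ${current:,.0f}/yr")
print(f"proposed: ${proposed:,.0f}/yr")
print(f"savings:  ${current - proposed:,.0f}/yr")
```

Even with hosts that draw more power individually, the smaller fleet usually comes out well ahead once cooling overhead is included.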

Add to that the fact that fewer physical servers can handle the same workloads, and the proposed virtualization infrastructure can compare very favorably with your existing one. If the new hardware is powerful enough, you may be able to run many logical servers on each physical server.

Unfortunately, determining how many virtual servers can fit on a physical host has never been a precise science. Some tools can help, however: server consolidation planning tools let you specify the make and model of the hardware you currently run and the hardware you plan to buy, and they also monitor your existing infrastructure for a period of time.

With all this data in hand, you can produce a report that shows how many virtualization hosts you need, of what type, and the expected ratio of virtual servers to physical hosts. Some reports even estimate the power consumption and cooling capacity of the new infrastructure. Compare the options offered by EMC VMware, Microsoft, and other vendors to get the most accurate data before launching a virtualization project.
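The sizing arithmetic behind such a report can be sketched roughly as follows; the host specification, per-VM demands, overcommit ratio, and headroom figure are illustrative assumptions rather than vendor recommendations:

```python
import math

# Illustrative assumptions: a candidate host spec and an average per-VM demand.
HOST_CORES, HOST_RAM_GB = 16, 128
VM_VCPUS, VM_RAM_GB = 2, 8
VCPU_PER_CORE = 4     # assumed acceptable vCPU:core overcommit ratio
HEADROOM = 0.8        # keep ~20% spare capacity on each host

def vms_per_host() -> int:
    """How many average VMs one host can carry, by CPU and by memory."""
    by_cpu = HOST_CORES * VCPU_PER_CORE * HEADROOM / VM_VCPUS
    by_ram = HOST_RAM_GB * HEADROOM / VM_RAM_GB
    return int(min(by_cpu, by_ram))   # the tighter resource wins

def hosts_needed(total_vms: int) -> int:
    return math.ceil(total_vms / vms_per_host())

print(f"VMs per host:      {vms_per_host()}")
print(f"Hosts for 100 VMs: {hosts_needed(100)}")
```

In this sketch memory is the binding constraint, which is common in practice; real planning tools refine these ratios with measured utilization data.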

But to reiterate: don't oversell it. Everyone needs to realize that reducing the number of physical servers does not mean reducing the number of logical servers, nor does it mean reducing IT staff. In fact, it is often worth hiring a qualified consultant to help plan a virtualization project. Although the basic concept of virtualization is simple, the planning, design, and implementation phases can be tricky without the right knowledge and experience.

Train before getting started

It is also important to plan for training existing staff. Virtualizing an existing IT infrastructure changes the structural foundation of the entire computing platform; in a sense, you are putting a lot of eggs in a few baskets. Once that infrastructure goes into production, it is critical that IT administrators are well versed in managing it, because virtualization introduces risks that must be avoided.

Whenever possible, train your staff before you fully implement virtualization. The vendor you choose should offer a range of training options, or at the very least online training courses.

In addition, take advantage of the evaluation periods offered by many virtualization platforms. VMware's enterprise suite, for example, can be downloaded, installed, and run free for 60 days, which gives administrators time to become familiar with the tools and features of the proposed environment. There is no substitute for this kind of hands-on experience.

However, don't make the rookie mistake of turning the test platform used for training into the actual production platform. When you stand up the production virtualization environment, do a clean installation of all components rather than migrating from the training setup.

Another important point: make sure training is not limited to software. Hardware considerations also matter for a virtualization rollout, including the number of Ethernet interfaces, processor choices, amount of memory, and local and shared storage. Your administrators should also be familiar with the day-to-day operation of supporting tools such as storage area network (SAN) array management interfaces and Ethernet or Fibre Channel switches. Be aware that in a virtualized environment, a fault affecting a single port on a server affects every virtual server running on that host.
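One way to keep those hardware considerations visible during planning is a simple checklist; the minimums below are assumptions chosen for the example, not required values:

```python
# Illustrative minimum requirements for a virtualization host; the numbers
# are assumptions for this example, not vendor recommendations.
MIN_NICS, MIN_CORES, MIN_RAM_GB = 4, 8, 64

def check_host(name, nics, cores, ram_gb, has_shared_storage):
    """Report which minimums a proposed host specification misses."""
    problems = []
    if nics < MIN_NICS:
        problems.append("too few Ethernet interfaces")
    if cores < MIN_CORES:
        problems.append("too few CPU cores")
    if ram_gb < MIN_RAM_GB:
        problems.append("not enough memory")
    if not has_shared_storage:
        problems.append("no shared storage")
    print(f"{name}: {'OK' if not problems else '; '.join(problems)}")

check_host("host-a", nics=6, cores=16, ram_gb=128, has_shared_storage=True)
check_host("host-b", nics=2, cores=8, ram_gb=32, has_shared_storage=False)
```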

Retiring old equipment

A major benefit of a virtualization project is that it gives the IT department an opportunity to phase out old hardware and legacy systems. It is a good time to review the entire infrastructure and identify components that are no longer needed or that can be folded into other tools or projects.

As you complete the planning phase, look closely for any equipment that can be removed from the server room without much effort. This simplifies the migration and reduces the number of servers that must be migrated or rebuilt on the virtualized platform.

This is also a great opportunity to review the network requirements of the proposed solution. In any reasonably sized infrastructure, Ethernet trunking on the physical links is usually essential. With trunking, virtual machines can connect to any network carried on the trunk, not just the Layer 2 network physically attached to the host, and virtual servers can be moved between networks on the fly. This is a very easy way to greatly increase flexibility.

Are you planning to run any virtual servers that need to connect to a demilitarized zone (DMZ) network? If so, it is best to give each host interfaces dedicated to that traffic, although those connections can also be carried on a trunk. In general, trusted and untrusted networks should be physically isolated, and adding another network interface to a host is the least expensive way to do it.
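As a sanity check on such a design, a small sketch like the following can flag any physical uplink that carries both DMZ and trusted networks; the NIC and VLAN names are invented for the example:

```python
# Made-up example layout: which VLANs ride on which physical uplinks of a host.
# The goal is to flag DMZ traffic sharing an uplink with trusted networks.
UPLINKS = {
    "nic0/nic1": {"vlan10-servers", "vlan20-backup"},   # trusted trunk
    "nic2":      {"vlan99-dmz"},                        # dedicated DMZ uplink
}
DMZ_NETWORKS = {"vlan99-dmz"}

def audit(uplinks: dict) -> None:
    """Warn when a single uplink carries both DMZ and trusted VLANs."""
    for uplink, vlans in uplinks.items():
        mixed = (vlans & DMZ_NETWORKS) and (vlans - DMZ_NETWORKS)
        if mixed:
            print(f"{uplink}: WARNING - DMZ and trusted VLANs share this uplink")
        else:
            print(f"{uplink}: OK")

audit(UPLINKS)
```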
