Technical Analysis: Seven Barriers to Higher Server Consolidation Ratios


Most enterprises deploy virtualization first and foremost as a hardware consolidation tool. Cloud computing, converged networks, and flexible data environments are all worthwhile goals, but the top priority for enterprise users is to improve system performance and capacity without increasing the budget.

That is why, although virtual environments can in principle reach consolidation ratios of 30:1 or even 50:1, many companies plateau at around 15:1. There is clearly a gap. Understanding the main obstacles to higher consolidation ratios lets us identify the crux of the problem, take stronger measures to overcome those obstacles, and ultimately build a more professional, more intelligent data center.

Input/Output Limitations - The hypervisor itself can easily accommodate higher consolidation ratios, but the rest of the data center may not. Whether you run 10, 20, or 100 virtual machines on a physical server, all of their traffic must still pass through the same input/output channels. Technologies such as virtualized I/O are designed to address this, but they can only be deployed in the context of network consolidation and convergence. Until those architectural problems are solved, most enterprises will cap their consolidation ratios.
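The shared-channel problem above is simple arithmetic, sketched below. The 10 Gb/s link and the VM counts are illustrative assumptions, not figures from any particular deployment; real sharing is bursty rather than an even split.

```python
# Sketch: how the best-case per-VM share of one physical link shrinks
# as the consolidation ratio rises. Link speed and VM counts are
# illustrative assumptions.

def per_vm_bandwidth_gbps(link_gbps: float, vm_count: int) -> float:
    """Best-case even share of one physical link across all VMs."""
    if vm_count <= 0:
        raise ValueError("vm_count must be positive")
    return link_gbps / vm_count

for vms in (10, 20, 100):
    share = per_vm_bandwidth_gbps(10.0, vms)
    print(f"{vms:>3} VMs on one 10 Gb/s link -> {share:.2f} Gb/s each")
```

The same division applies to storage I/O paths, which is why the article treats I/O as an architectural limit rather than a hypervisor limit.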

Storage Capacity - The same holds for storage. You may have heard of "storage virtualization," but it does not conjure up new capacity for data that already exists. Best practice is to reduce the number of file copies scattered across the data center and to improve storage performance by overhauling backup and archiving processes. Still, the more virtual machines you run, the more data they generate, which demands more storage space and better systems management.
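One concrete form of "reducing scattered file copies" is deduplicating the OS images that every VM carries. The sketch below uses made-up sizes (a 20 GB image, 40 GB of unique data per VM) purely to show the shape of the saving.

```python
# Sketch: why VM count drives storage demand, and what deduplicating a
# common OS image can recover. All sizes (GB) are illustrative assumptions.

def storage_needed_gb(vm_count: int, os_image_gb: float, data_gb: float,
                      dedup_os_images: bool = False) -> float:
    """Total capacity with or without deduplicating the shared OS image."""
    os_total = os_image_gb if dedup_os_images else os_image_gb * vm_count
    return os_total + data_gb * vm_count

naive = storage_needed_gb(50, os_image_gb=20, data_gb=40)
deduped = storage_needed_gb(50, os_image_gb=20, data_gb=40,
                            dedup_os_images=True)
print(f"naive: {naive:.0f} GB, with OS-image dedup: {deduped:.0f} GB")
```

Note that deduplication only trims the redundant copies; the per-VM data term still grows linearly with the consolidation ratio, which is the article's point.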

Management/Sprawl - This sounds contradictory: if the goal is to increase the number of virtual machines while reducing the number of physical servers, why worry about virtual machine sprawl? But sprawl is a management problem, or rather the absence of management over how virtual machines are provisioned, used, and retired. Each of those stages can create trouble, and VMs that quietly consume significant resources benefit no one unless a capable management system can identify and retire the redundant ones. The earlier you deploy management tooling in your virtual environment, the more effective your consolidation effort will be.
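A management system's "identify redundant VMs" step usually reduces to a policy over inventory data. The sketch below is a hypothetical policy, not any vendor's API: the record fields, idle threshold, and CPU floor are all assumptions for illustration.

```python
# Sketch of the kind of policy a VM-management system might apply to
# flag sprawl: VMs unused for too long, or with negligible CPU use.
# Inventory records and thresholds are hypothetical.

from datetime import date

def redundant_vms(inventory, today, max_idle_days=90, cpu_floor=0.05):
    """Return names of VMs idle too long or with negligible CPU use."""
    flagged = []
    for vm in inventory:
        idle_days = (today - vm["last_used"]).days
        if idle_days > max_idle_days or vm["avg_cpu"] < cpu_floor:
            flagged.append(vm["name"])
    return flagged

inventory = [
    {"name": "web-01",  "last_used": date(2010, 6, 1), "avg_cpu": 0.40},
    {"name": "test-17", "last_used": date(2010, 1, 5), "avg_cpu": 0.01},
]
print(redundant_vms(inventory, today=date(2010, 6, 10)))  # → ['test-17']
```

In practice such a report would feed an approval workflow rather than automatic deletion, but even this simple filter shows why instrumenting the environment early pays off.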

Economics - These issues are as much about money as about technology. Saving money by consolidating hardware is a good thing, but you must weigh it against increased spending on network infrastructure, storage capacity, and management software. If the measures are right, the investment ultimately yields a better, more economical, easier-to-run IT infrastructure with better performance. But it requires up-front capital, which has not been an easy case to make in the business climate of the past two years.
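The trade-off described above can be framed as simple break-even arithmetic. All dollar figures below are made-up assumptions for illustration; real cases would amortize capital costs over several years.

```python
# Sketch: consolidation savings vs. the new spending it triggers on
# network, storage, and management. Dollar figures are illustrative
# assumptions, not data from the article.

def annual_net_savings(servers_retired: int, cost_per_server: float,
                       infra_spend: float) -> float:
    """Yearly savings from retired hardware minus new infrastructure spend."""
    return servers_retired * cost_per_server - infra_spend

# Retiring 40 servers at $3,000/yr each, against $90,000 of new
# network/storage/management investment charged to the same year:
net = annual_net_savings(40, 3_000.0, 90_000.0)
print(f"net first-year savings: ${net:,.0f}")  # → $30,000
```

A negative result in year one with positive results thereafter is common, which is exactly why the business case is hard to sell in a tight budget climate.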

Technology Churn/Cloud - I admit this is a problem I had not considered before, but it is real and unavoidable. From VMware's first virtualization products through Xen, Hyper-V, vSphere, desktop virtualization, and today's clouds (public, private, and hybrid), virtualization has evolved so quickly that it has created a great deal of confusion; few people can keep all the threads straight, even though each development builds on the last. It is understandable that some executives hold off on larger-scale hardware consolidation, reasoning that it is only a matter of time before lower-cost cloud-based architectures reach the market.

Availability Worries - A very real concern is that increasing virtual machine density on existing servers can hurt system availability and service-level agreements. It is easy for a vendor to claim that 100 virtual machines can run on its new system, but acting on that claim would be irresponsible with your data. The latest server designs are built around low power consumption and high virtual machine utilization, and they can safely support higher densities, but adopting them should follow the pace of your normal refresh cycle.
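The availability concern is ultimately about blast radius: one host failure takes out every VM on that host, so the damage per incident grows with the consolidation ratio. The 300-VM fleet below is an illustrative assumption.

```python
# Sketch: why higher VM density raises availability risk. A single host
# failure takes down every VM on that host, so the "blast radius" grows
# with the consolidation ratio. Fleet size is an illustrative assumption.

def fleet_fraction_down(vms_per_host: int, total_vms: int) -> float:
    """Fraction of all VMs lost when one fully loaded host fails."""
    if not 0 < vms_per_host <= total_vms:
        raise ValueError("need 0 < vms_per_host <= total_vms")
    return vms_per_host / total_vms

for ratio in (15, 50, 100):
    pct = fleet_fraction_down(ratio, total_vms=300) * 100
    print(f"{ratio:>3}:1 density -> one host failure hits {pct:.0f}% of 300 VMs")
```

This is why vendor claims of "100 VMs per box" need to be weighed against what a single outage would do to service-level agreements, not just against raw capacity.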

Institutional Resistance - Finally, organizational inertia means business users need extra energy to adapt and adjust, and the same is true for technology. Concerns about security, reliability, and overall performance have largely confined virtualization to non-critical systems. Further consolidation is feasible in both front-office and user-facing systems, but it still takes effort to overcome resistance to change.

Of course, one factor that cuts across all of these obstacles is cost. Virtualized environments are cheaper to build and maintain than conventional hardware/software infrastructure, and the pursuit of efficiency and low cost will ultimately drive virtualization's spread. Vendors are already turning this into new sources of profit while delivering it to users at lower cost, a trend that should win users over quickly.
