How to solve the storage problem in server virtualization?

Server virtualization can reduce IT expenses and increase server utilization. However, as the number of virtual machines in an environment grows, storage must be expanded to meet their performance and capacity requirements. Many IT managers have found that the money saved through server virtualization gradually ends up being spent on storage purchases.

Virtual machine sprawl, along with VM backup and disaster recovery configuration issues, has forced many organizations to completely rethink their original data backup and disaster recovery strategies. Vendors such as EMC, Hitachi Data Systems, IBM, NetApp, and Dell are all addressing server virtualization storage issues, offering solutions that include storage virtualization, deduplication, and automated thin provisioning.

Server virtualization storage issues arise when traditional physical storage technologies are used in virtualized data centers. One consequence of virtual server proliferation is that virtual servers can consume roughly 30% more disk space than physical servers. There is also the "I/O blender" problem: traditional storage architectures cannot effectively handle the highly random, mixed I/O generated by many virtual machines sharing the same storage. Storage management in a virtualized environment is also far more complex than in traditional environments, because managing virtual machines means managing their storage.

Solving Server Virtualization Storage Issues

As an IT manager, you have several options for solving these server virtualization storage problems. Let's start with the less practical ones. One is to deploy virtual machines more slowly and run fewer of them on each host, reducing the likelihood of the "I/O blender" problem. Another is to simply buy extra storage, but that is expensive.

A better option is to buy smarter storage, introducing technologies such as storage virtualization, deduplication, and automated thin provisioning when purchasing storage devices. Adopting this strategy means applying new technologies and building partnerships with newer vendors such as Vistar, DataCore, and FalconStor.

Using Storage Virtualization as a Solution

Many analysts and storage vendors recommend storage virtualization as a solution to server virtualization storage issues. Even where those problems have not appeared, storage virtualization can reduce data center spending, increase business agility, and serve as a key component of any private cloud.

Conceptually, storage virtualization is similar to server virtualization: physical storage systems are abstracted so that the complexity of the underlying devices is hidden. Storage virtualization consolidates resources from multiple networked storage devices into pools that appear externally as a single storage device, and it can virtualize disks, blocks, tape systems, and file systems. One advantage of storage virtualization is that it helps storage administrators manage their devices and perform tasks such as backup/restore and archiving more efficiently.

The storage virtualization architecture maintains a mapping table between virtual disks and the underlying physical storage. The virtual storage software layer (the logical abstraction layer) sits between the physical storage systems and the running virtual servers. When a virtual server needs to access data, the abstraction layer resolves the mapping between the virtual disk and the physical storage device and transfers the data between the host and physical storage.
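To make the mapping idea concrete, here is a minimal Python sketch of a table that translates virtual disk blocks into physical locations. The names (VirtualDiskMap, PhysicalExtent, the device identifiers) are invented for illustration and do not correspond to any vendor's actual implementation.

```python
# Minimal sketch of the mapping table behind a storage virtualization layer.
# All names are illustrative, not a real product API.

from dataclasses import dataclass


@dataclass
class PhysicalExtent:
    device: str      # e.g. an array LUN or disk identifier (hypothetical)
    offset: int      # starting block on that device


class VirtualDiskMap:
    """Maps blocks of a virtual disk onto extents of physical storage."""

    def __init__(self):
        # (virtual_disk, virtual_block) -> PhysicalExtent
        self._table: dict[tuple[str, int], PhysicalExtent] = {}

    def map_block(self, vdisk: str, vblock: int, extent: PhysicalExtent) -> None:
        self._table[(vdisk, vblock)] = extent

    def resolve(self, vdisk: str, vblock: int) -> PhysicalExtent:
        """Translate a virtual block address into its physical location."""
        return self._table[(vdisk, vblock)]


# A virtual server asks for block 7 of "vm01-disk0"; the abstraction layer
# looks up where that block really lives before moving any data.
vmap = VirtualDiskMap()
vmap.map_block("vm01-disk0", 7, PhysicalExtent(device="array-A:lun3", offset=1024))
print(vmap.resolve("vm01-disk0", 7))
```

In a real product this table is kept by the controller or appliance and updated as data is migrated, but the lookup-then-transfer flow is the same.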

If you already understand server virtualization, storage virtualization differs mainly in the technology used to implement it. The main source of confusion is the different ways storage providers implement it, either directly in the storage controllers or through SAN appliances. Similarly, some storage virtualization deployments place commands and data on the same path (in-band), while others separate commands from the data path (out-of-band).

Storage virtualization can be implemented in several ways: host-based, software-based, or network-based. Host-based technology provides a virtualization layer on the server and presents storage to applications as ordinary drives or partitions. Software-based technologies manage the hardware infrastructure from an appliance sitting on the storage network. Network-based technologies are similar to software-based ones, but operate at the network switching layer.

Storage virtualization also has drawbacks. Host-based storage virtualization is essentially a volume manager, a tool that has been around for many years. A volume manager on the server combines multiple disks and manages them as a single resource that can be carved up as needed, but it must be configured separately on each server. This approach is best suited to small systems.

With software-based technology, each host simply asks the virtualization software for an available storage unit, and the software redirects the host's requests to it. Because software-based appliances send block data and control information over the same link, potential bottlenecks can slow host data transfers. To reduce latency, these appliances often need to maintain caches for read and write operations, which also increases their price.
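As an illustration of that in-band trade-off, here is a hedged Python sketch of a layer that passes reads and writes through a single path and keeps a small read cache to soften the bottleneck. The DictBackend class stands in for real physical storage and, like the other names here, is an assumption made only for the example.

```python
# Hedged sketch of an in-band virtualization layer: commands and block data
# travel over the same path, so a small LRU read cache is used to soften the
# resulting bottleneck. Purely illustrative.

from collections import OrderedDict


class DictBackend:
    """Stand-in for a physical storage device (illustrative only)."""

    def __init__(self):
        self.blocks = {}

    def read(self, addr):
        return self.blocks.get(addr, b"\x00" * 4096)

    def write(self, addr, data):
        self.blocks[addr] = data


class InBandVirtualizer:
    def __init__(self, backend, cache_size=256):
        self.backend = backend
        self.cache = OrderedDict()      # LRU cache of recently read blocks
        self.cache_size = cache_size

    def read(self, addr):
        if addr in self.cache:          # cache hit: skip a trip over the shared path
            self.cache.move_to_end(addr)
            return self.cache[addr]
        data = self.backend.read(addr)  # cache miss: go through to physical storage
        self.cache[addr] = data
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # evict the least recently used block
        return data

    def write(self, addr, data):
        self.backend.write(addr, data)  # writes share the same command/data link
        self.cache[addr] = data         # keep the cache consistent with the write
        self.cache.move_to_end(addr)


layer = InBandVirtualizer(DictBackend())
layer.write(42, b"hello")
print(layer.read(42))                   # repeat reads are served from the cache
```

The cache is exactly the kind of component that drives up the price of in-band products, as noted above.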

Server Virtualization Storage Innovations: Automated Thin Provisioning and Deduplication

Two storage innovations, automated thin provisioning and deduplication, are also reducing storage capacity demand in server virtualization environments. Both can be combined with storage virtualization to provide robust and reliable control over storage capacity.

Thin provisioning lets storage "go further" by reducing capacity that is allocated but never used. It allocates data blocks on demand rather than pre-allocating everything an application requests. This approach eliminates most of the white space and helps avoid underutilization. It can typically cut disk overhead by around 10% and avoids the common situation where large amounts of storage are assigned to individual servers and then sit unused.

In many server deployments, thin provisioning supplies the storage that applications require from a common storage resource pool. Under those conditions, thin provisioning integrates naturally with storage virtualization.
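The following is a minimal Python sketch of the allocate-on-demand idea behind thin provisioning, assuming a shared block pool. The SharedPool and ThinVolume names are invented for illustration and do not reflect any particular array's implementation.

```python
# Illustrative sketch of thin provisioning: a volume promises capacity up front
# but takes physical blocks from a shared pool only on first write.


class SharedPool:
    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.next_free = 0
        self.blocks = {}

    def take_block(self):
        if self.next_free >= self.total_blocks:
            # the real-world risk of over-provisioning: the pool can run dry
            raise RuntimeError("shared pool exhausted")
        self.next_free += 1
        return self.next_free - 1

    def store(self, pblock, data):
        self.blocks[pblock] = data


class ThinVolume:
    def __init__(self, pool, provisioned_blocks):
        self.pool = pool
        self.provisioned_blocks = provisioned_blocks  # capacity promised to the server
        self.allocated = {}                           # virtual block -> pool block

    def write(self, vblock, data):
        if vblock not in self.allocated:
            # allocate on demand: the block consumes real capacity only now
            self.allocated[vblock] = self.pool.take_block()
        self.pool.store(self.allocated[vblock], data)

    def used_blocks(self):
        return len(self.allocated)


# Two volumes are each "given" 1,000 blocks, but the pool only pays for actual writes.
pool = SharedPool(total_blocks=500)
vol_a = ThinVolume(pool, provisioned_blocks=1000)
vol_b = ThinVolume(pool, provisioned_blocks=1000)
vol_a.write(0, b"app data")
vol_b.write(7, b"db page")
print(vol_a.used_blocks() + vol_b.used_blocks(), "of", pool.total_blocks, "pool blocks in use")
```

The pool-exhaustion check is also a reminder that thin provisioning shifts the risk from wasted capacity to over-commitment, which is why monitoring matters.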

Deduplication detects and removes duplicate data across storage media or file systems. Duplicates can be detected at the file, byte, or block level. By identifying identical data segments, deduplication replaces the copies with a single instance. For example, if the same document appears in 50 folders in a file system, it can be replaced by one copy and 49 links to it.

Deduplication can be applied in a server virtualization environment to reduce storage requirements. Each virtual server is contained in a file, and those files can become very large. One feature of virtual servers is that administrators can stop a virtual machine, copy and back it up at a given point in time, and later restart it and bring it back online. These backup files are stored somewhere on a file server and usually contain large amounts of duplicate data. Without deduplication, the storage space required for backups can easily grow out of control.
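As a rough illustration, the Python sketch below applies fixed-size, block-level deduplication to two hypothetical VM backup files using SHA-256 fingerprints. Real products use more sophisticated chunking and metadata, but the principle of storing each unique block once and replacing duplicates with references is the same.

```python
# Minimal sketch of block-level deduplication: hash fixed-size blocks, keep one
# copy of each unique block, and record a recipe of references per file.
# File names and sizes below are made up for the example.

import hashlib

BLOCK_SIZE = 4096          # assumed fixed-size chunking for the example


def deduplicate(files):
    store = {}             # fingerprint -> unique block data
    recipes = {}           # file name -> list of fingerprints (enough to rebuild it)
    for name, data in files.items():
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in store:            # first time we see this block: keep it
                store[fp] = block
            refs.append(fp)                # duplicates just add another reference
        recipes[name] = refs
    return store, recipes


# Two backups of the same VM differ only slightly, so most blocks are shared.
backup_a = b"A" * 8192 + b"log-monday"
backup_b = b"A" * 8192 + b"log-tuesday"
store, recipes = deduplicate({"vm01-mon.bak": backup_a, "vm01-tue.bak": backup_b})
raw = len(backup_a) + len(backup_b)
stored = sum(len(b) for b in store.values())
print(f"raw {raw} bytes -> stored {stored} bytes")
```

Run against repeated VM backups, this is why deduplication keeps backup storage from growing in step with the number of copies.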

Changing How You Purchase Storage

Even though storage virtualization, deduplication, and thin provisioning can curb the growth in storage capacity, organizations may still need to change the criteria they use when buying storage. For example, if the storage you purchase supports deduplication, you may no longer need as much capacity as originally planned. With automated thin provisioning, capacity utilization can be driven automatically toward 100% without ongoing administrator effort.

A traditional storage purchase starts by evaluating the baseline capacity the workload requires, the expected growth over roughly three years, and the system's expansion capabilities and storage configuration, and then drawing up the related procurement contracts. As storage virtualization and cloud computing mature, simply buying ever-larger traditional storage will become increasingly impractical, especially while budget remains the biggest constraint on storage purchases.

Here are some simple storage buying guidelines:

Unless the design explicitly calls for it, don't buy a storage solution that solves only a single problem; you will end up with a storage architecture that cannot be shared with other systems.

Focus on storage solutions that support multiple protocols and provide greater flexibility.

Consider the application/load range that the storage solution can support.

Learn about technologies and solutions that address storage issues, such as deduplication and automated thin provisioning.

Understand storage management software and automation tools that can reduce system management costs.

Many organizations have already implemented server virtualization internally and are considering how to build private clouds on their existing storage hardware and servers. It is important that the storage budget goes toward the right hardware and software. Don't focus only on the lowest price; instead, start from the business problem and choose the storage solution that delivers the most value in solving it.
