Resolving data problems: server and storage management


When the management of IT systems comes up, I suspect the first words that spring to most people's minds are "network management" and "network administrator." But managing IT systems is by no means limited to the network; the management of servers and storage is an integral part of it.

The familiarity of the "network administrator" role owes something to the promotion of Cisco's networking certifications. On the storage side, the research firm Gartner proposed a corresponding "storage administrator" role six or seven years ago, though the idea never quite took hold. Now some go a step further and argue that companies need "storage managers." Do they really? And if so, what are the responsibilities of this position?

Differences in server, network and storage management

In an enterprise, the CIO is usually the person ultimately responsible for the company's IT hardware and software assets and their operation. Depending on the size of the enterprise and of the data center business, the CIO may have under him or her: system administrators/engineers, mainly responsible for server hardware and operating systems; network administrators, responsible for the topology of the company's or data center's internal LAN and external WAN connections, including the deployment planning of switches, directors, routers, and cabling; and software engineers, who may in turn be divided into application software (development and maintenance) engineers and database administrators (DBAs).

From a hardware perspective, apart from the prerequisite power and cooling infrastructure, the equipment in a data center falls into three major categories: servers, networking, and storage. In networking (here mainly Ethernet, not including the storage network), Cisco dominates the market, followed by Juniper, BLADE (acquired by IBM), Brocade (formerly Foundry), and Huawei. Network administrators therefore need to be familiar with whichever of these companies' products are in use.

As for servers, apart from some key applications running on RISC-architecture minicomputers and mainframes under IBM AIX, HP-UX, or Oracle/Sun Solaris (SPARC), the more mainstream applications run on x86-based Windows/Linux platforms built around Intel Xeon or AMD Opteron CPUs.

Companies such as IBM, Hewlett-Packard, and Dell have all invested heavily in server management (software and hardware), but the focus of this article is storage. Since the arrival of the virtualization era, many smaller vendors have introduced management software for virtualized data center environments. Virtual machines have indeed added management complexity on top of the original physical servers, and they also bring data (capacity) sprawl and higher performance demands on storage devices.

The differences in hardware and software among enterprise storage vendors are not as simple as those among standard x86 servers. First, in addition to Intel/AMD x86 processors, quite a few products use RISC architectures such as PowerPC and even Power (for example, IBM's high-end DS8000 series disk arrays). Some enclosures look much like storage servers, and some use chassis designs based on the SBB specification, but controller board designs follow no fixed rule.

The software side is similar. The simplest case may be NAS (gateway) products based on Windows Storage Server, but these are by no means the mainstream, and some manufacturers heavily streamline and optimize the Windows (or even XP) kernel; many more products are developed on a variety of Linux distributions. As a result, the management software interfaces of different brands, and even of different series of storage systems, all differ, even though the storage protocols used to connect to servers are generally standard. From this come the large differences, and the distinct characteristics, among vendors' storage products. On the hardware side, these products should yield higher gross margins than the PC and server businesses, but R&D costs rise accordingly. This has created opportunities for some relatively small vendors, such as 3PAR, acquired by HP last year; Compellent, acquired by Dell; and Isilon, folded into EMC. Of course, a few storage-only companies have reached a fairly large scale, such as EMC and NetApp.

This article will also touch on some industry-recognized, standardized techniques. Most vendors support them, or will sooner or later, but the way different vendors implement them underneath can vary considerably.

So, is a dedicated person needed to take charge of storage? I think this depends on the size of the enterprise's IT operation, the number of storage devices (disk arrays, tape systems, and so on), and the amount of data to be managed. In some cases, system administrators and network administrators can share the storage management work between them. But if a company runs a large number of storage systems of different brands and grades, tape backup devices, Fibre Channel switches, and data protection software, a dedicated storage administrator (or storage manager) still has real benefits. After all, you do not want every small storage task to depend on the vendor's or service provider's engineers. A storage administrator can interface with them and raise the efficiency of the enterprise's storage hardware and software, thereby saving the company IT costs.

The work of a storage manager or storage administrator may involve the following areas.

First, management related to the storage systems themselves: the various DAS, SAN, and NAS storage systems (disk arrays) and the drives inside them, as well as storage virtualization devices.

A few years ago it was thought that, for RAID arrays, it was enough for them to work properly and to have their status monitored through basic management functions. Today's storage system software, however, is increasingly feature-rich, and before using these features to improve storage efficiency we need to go through a process of understanding, analysis/evaluation, and pre-deployment testing.

Thin Provisioning

Thin provisioning works by assigning a virtual logical volume size when a "thin" volume is created, but only allocating physical space to the volume when data is actually written. In this way we can easily create multiple thin volumes whose total logical capacity exceeds the physical disk space, without having to "pay up front" for the amount of data that may eventually accumulate. When the data generated by an application really does grow, we can flexibly adjust the size of the volume online. While enjoying the benefits of thin provisioning, we also need to put some management effort into it and be familiar with how it behaves.
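
To make the mechanism concrete, here is a minimal Python sketch of thin provisioning semantics. The 1 MiB extent size and the class names are assumptions for illustration, not any vendor's implementation.

```python
# Minimal sketch of thin provisioning semantics (illustrative only,
# not any vendor's implementation). Extent size and names are assumptions.

EXTENT = 1024 * 1024  # allocate physical space in 1 MiB extents

class ThinPool:
    def __init__(self, physical_bytes):
        self.free_extents = physical_bytes // EXTENT

    def take_extent(self):
        if self.free_extents == 0:
            raise RuntimeError("physical pool exhausted: add disks or reclaim space")
        self.free_extents -= 1

class ThinVolume:
    def __init__(self, pool, logical_bytes):
        self.pool = pool
        self.logical_bytes = logical_bytes   # the size the host sees
        self.mapped = set()                  # extents actually backed by disk

    def write(self, offset, length):
        first, last = offset // EXTENT, (offset + length - 1) // EXTENT
        for ext in range(first, last + 1):
            if ext not in self.mapped:       # allocate on first write only
                self.pool.take_extent()
                self.mapped.add(ext)

# Two 10 GiB "thin" volumes over a 12 GiB pool: oversubscription is allowed,
# which is exactly why pool utilization must be monitored.
pool = ThinPool(12 * 1024**3)
vol_a = ThinVolume(pool, 10 * 1024**3)
vol_b = ThinVolume(pool, 10 * 1024**3)
vol_a.write(0, 256 * EXTENT)                 # consumes only 256 MiB of real space
print(pool.free_extents)
```

The point of the sketch is the failure mode: writes succeed until the shared pool runs dry, so the administrator's job shifts from sizing volumes to watching the pool.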

Automated tiered storage

Automated tiered storage improves storage efficiency, raising performance while lowering unit cost, by moving frequently accessed "hot" data onto fast, expensive SSDs or 15K RPM mechanical drives, and parking infrequently accessed "cold" data on large, cheap 7,200 RPM nearline drives. That automated tiering is a great help is not in doubt, but we still have to invest some energy in its planning, implementation, and post-deployment management and monitoring.
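
As a rough illustration of the kind of policy involved, the following Python sketch promotes or demotes extents between assumed tiers based on a per-window I/O count. Real arrays tier at sub-LUN granularity with far more sophisticated heuristics; the thresholds and tier names here are invented for the example.

```python
# Toy sketch of an automated tiering pass (assumptions for illustration only).

TIERS = ["ssd", "sas_15k", "nearline_7200"]  # fast/expensive -> slow/cheap

def retier(extents, hot_threshold=100, cold_threshold=5):
    """extents: list of dicts with 'tier' and 'io_count' for the last window."""
    for ext in extents:
        idx = TIERS.index(ext["tier"])
        if ext["io_count"] >= hot_threshold and idx > 0:
            ext["tier"] = TIERS[idx - 1]     # promote hot data one tier up
        elif ext["io_count"] <= cold_threshold and idx < len(TIERS) - 1:
            ext["tier"] = TIERS[idx + 1]     # demote cold data one tier down
        ext["io_count"] = 0                  # reset counter for the next window
    return extents

extents = [{"tier": "sas_15k", "io_count": 500},   # hot: moves toward SSD
           {"tier": "sas_15k", "io_count": 1}]     # cold: moves to nearline
print(retier(extents))
```

Even in this toy form, the knobs that need planning are visible: the monitoring window, the thresholds, and how aggressively data is moved, since migrations themselves consume back-end bandwidth.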

Storage Virtualization

We discussed the value of storage virtualization for users, and why Shanda Online chose IBM SVC, in the interview "SVC in the eyes of users: Shanda Online storage CTO Zhu Jing talks about storage virtualization." In general, a storage virtualization device is a layer in the middle of the storage network: it virtualizes and centrally manages the back-end storage systems connected to it, and thin provisioning, snapshot/replication/mirroring protection, and even tiered storage can all be performed uniformly at that layer. Deploying storage virtualization involves changes to the existing storage network, such as port assignments and connections on Fibre Channel switches; if the plan is unsound, new bottlenecks may form at the ISLs (inter-switch links). Chris Saul, IBM's SVC marketing manager, advises users to carry out system planning exercises before deploying SVC. The workload and staff time that storage virtualization demands are also considerable. Storage virtualization devices often provide disaster recovery capabilities as well, which is what we will talk about next.
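
The core of such a device is a mapping layer. The Python sketch below is a loose illustration of that idea; the vdisk/mdisk naming roughly echoes SVC terminology, but everything here is an assumption for the example, not SVC's actual data structures.

```python
# Rough sketch of the mapping a virtualization layer maintains (illustrative).

# Back-end arrays are carved into managed-disk extents...
mdisks = {
    "array1_lun0": ["a1e0", "a1e1", "a1e2"],
    "array2_lun0": ["a2e0", "a2e1"],
}

# ...and a virtual disk presented to hosts is just an ordered list of extents,
# possibly spanning arrays from different vendors.
vdisk = ["a1e0", "a2e0", "a1e1"]

def resolve(virtual_extent):
    """Translate a virtual extent to (back-end LUN, extent index) for I/O routing."""
    for lun, extents in mdisks.items():
        if virtual_extent in extents:
            return lun, extents.index(virtual_extent)
    raise KeyError(virtual_extent)

print(resolve(vdisk[1]))  # ('array2_lun0', 0): this host I/O lands on array 2
```

Because every host I/O crosses this layer, the fabric paths to and from the virtualization device carry concentrated traffic, which is why switch port and ISL planning matters so much.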

Snapshots, Replication, and Mirroring

Today, the basic data protection features of snapshots, replication, and mirroring are almost standard on high-, mid-, and low-end enterprise storage systems alike, for example on IBM's entry-level SAN array, the System Storage DS3500, released last year.

The concept of a snapshot should be familiar. It is a simple local data protection method, mainly used to cope with logical errors within some window of time (say, a few minutes to a month). It is not equivalent to a backup, because the protected data still lives on the same storage device; when needed, we can roll back to the state captured at a certain point in time, or delete snapshots that are no longer needed to free up disk space.
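
A minimal copy-on-write sketch in Python may help show why a snapshot is cheap and why it is not a backup: only overwritten blocks are preserved, and everything still lives on the same device. The block-as-dict model is purely illustrative.

```python
# Minimal copy-on-write snapshot sketch (illustrative only; real arrays work
# at the block level; here a volume is modeled as a dict of block -> data).

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.snapshots = []          # each snapshot holds only overwritten blocks

    def snapshot(self):
        self.snapshots.append({})    # costs nothing until something changes
        return len(self.snapshots) - 1

    def write(self, block, data):
        for snap in self.snapshots:
            if block not in snap:    # preserve the old data on first overwrite
                snap[block] = self.blocks.get(block)
        self.blocks[block] = data

    def rollback(self, snap_id):
        # restore preserved blocks; later snapshots are discarded
        for block, data in self.snapshots[snap_id].items():
            self.blocks[block] = data
        del self.snapshots[snap_id:]

vol = Volume({0: "ledger-v1"})
sid = vol.snapshot()
vol.write(0, "corrupted")            # a logical error after the snapshot
vol.rollback(sid)
print(vol.blocks[0])                 # back to "ledger-v1"
```

Note that if the device itself fails, the snapshots fail with it, which is exactly why a snapshot does not replace a backup.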

Snapshots can also serve as the basis for replication between disk arrays; IBM storage systems, for example, offer the optional FlashCopy feature, whose snapshots can be combined with the capabilities of Tivoli Storage Manager backup software. Synchronous mirroring means the contents held on two storage systems are kept exactly identical. If the purpose of local mirroring is to further eliminate single points of failure (even though a storage system's controllers, drives, and power supplies are already redundant), then remote mirroring serves disaster tolerance, or, as with IBM SVC and EMC VPLEX, supports live migration of VMware server virtual machines across sites.

Mirroring places high demands on the bandwidth and latency of the storage network; if these are not met, the read and write performance of the protected system will be severely degraded. Where network bandwidth is limited, we can instead protect data with asynchronous replication.
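
The trade-off can be shown in a few lines of Python: a toy model, with link_delay standing in for WAN round-trip time (an assumption for illustration), contrasting the write paths of synchronous mirroring and asynchronous replication.

```python
# Toy contrast of synchronous mirroring vs asynchronous replication semantics.
import queue
import time

class SyncMirror:
    """A write completes only after both copies are updated: zero data loss,
    but every write pays the full link latency."""
    def __init__(self, link_delay):
        self.link_delay = link_delay
        self.local, self.remote = {}, {}

    def write(self, key, value):
        self.local[key] = value
        time.sleep(self.link_delay)      # wait for the remote acknowledgement
        self.remote[key] = value

class AsyncReplica:
    """A write completes locally at once; changes drain to the remote side
    later, so a disaster can lose whatever is still in flight."""
    def __init__(self):
        self.local, self.remote = {}, {}
        self.pending = queue.Queue()

    def write(self, key, value):
        self.local[key] = value
        self.pending.put((key, value))   # acknowledged before remote is updated

    def drain(self):
        while not self.pending.empty():
            key, value = self.pending.get()
            self.remote[key] = value

m = SyncMirror(link_delay=0.05)          # ~50 ms round trip: every write pays it
m.write("k", "v")                        # returns only after both sites hold "v"
```

In short, synchronous mirroring trades write latency for a zero recovery point, while asynchronous replication trades a small window of potential data loss for local-speed writes.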
