Windows Server 2012 Virtualization in Practice: Storage (1)


One idea that appears everywhere in computing is abstraction. In 1946, von Neumann proposed the basic structure of a computer: an arithmetic unit, memory, and I/O devices. This may be the most important abstraction of the then-new computer, and it directly shaped the development of computer hardware and software over the following decades. Each of us understands the arithmetic unit, memory, and I/O devices differently; you may not know every computing chip or every kind of storage and input/output device, but that does not stop you from telling them apart. Each part of a computer system may be indispensable, but it must also be replaceable. That is the power of abstraction. Without it, we might have to develop a different operating system for every set of hardware, and developing different applications for every operating system would be unthinkable.

Even today, of course, we still find that there are too many kinds of hardware and software systems, and we need a new level of abstraction. Virtualization is exactly such a process: an abstraction of hardware and software resources. One purpose of virtualization is to keep the upper layers from having to perceive differences in the lower layers, and to provide a unified interface through which the upper layers consume the underlying resources.

Ideally, hardware virtualization would be implemented without any restrictions imposed by the hardware: applications could take full advantage of hardware resources without perceiving differences in compute, storage, or network devices. In reality, we still have to account for hardware limitations to some degree. Next, let's look at the most critical piece of hardware virtualization: storage.

First, the current state of storage development

Everyone has an intuitive understanding of storage, from tape, floppy disks, and CDs to hard disks. In the von Neumann view of computer architecture, the concept of storage is as simple as it sounds: any device that can be used to store data. Because the concept is so abstract and self-contained, storage devices can remain relatively independent within the computer architecture. With the development of networks, storage is no longer confined to the inside of the computer case, and network storage is becoming mainstream in the server field.

The most important storage device is the hard disk, and the pursuit of read/write speed, capacity, and reliability has driven hard disk technology forward. Judged by the interface standard used to connect to the motherboard, there are ATA (IDE), SATA, SCSI, SAS, FC, and InfiniBand, which differ in connectors, transmission media, and protocols. The ATA interface connects the familiar IDE devices; SATA (Serial ATA) is serial ATA; SAS (Serial Attached SCSI) is serial SCSI. SATA and SAS are twin brothers: a SATA drive can be attached to a SAS interface, but not the other way around. FC (Fibre Channel) does not necessarily use optical fiber; copper cabling can also be used. If you need more technical detail on any of these interfaces, please search online.

With the development of network storage, storage modes such as DAS (Direct-Attached Storage), NAS (Network Attached Storage), and SAN (Storage Area Network) emerged. The figure below compares them: each dashed box represents a relatively independent whole, the left side of the arrow describes an interface or connection, and the right side describes the technical solution that implements the storage. The figure shows only the simplest organization or connection in each mode. In DAS, the way the file system connects to the storage is not limited to the interfaces and cables inside the machine (such as a computer connected to a built-in SAS disk); it can also be an external interface and cable (such as a connection through an external SAS cable). Things can get more complicated: a NAS or SAN, taken as a whole, can easily become part of a DAS. The biggest difference between DAS and the others is that DAS does not require network support. The biggest difference between NAS and SAN, at least initially, is that NAS is file-based storage while SAN is block-based storage. NAS behaves more like a stand-alone file server, whereas a SAN looks more like a disk, so a SAN can serve as a lower layer of a NAS network.


Finally, there is distributed storage. In simple terms, distributed storage leverages inexpensive hardware and implements the storage abstraction on top of the operating system. Distributed storage is not relevant to this article and will be discussed in the future.

Second, the storage features of the Windows Server 2012 operating system

Next, let's look at several important improvements in Windows Server 2012 that support storage virtualization. New and improved storage features have been added, the most notable being the iSCSI Target Server, SMB 3.0, and Storage Spaces. Windows Server 2012 and its failover clusters can easily implement today's mainstream storage solutions for test or production environments, and Microsoft offers a distinctive storage solution built on file servers and file server clusters (SMB 3.0 plus Storage Spaces).
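
As a quick illustration of the Storage Spaces technology mentioned above, here is a minimal PowerShell sketch that pools spare physical disks and carves a mirrored virtual disk out of the pool. It assumes a server with unused disks attached; the pool and disk names are placeholders chosen for this example.

    # Sketch: build a simple Storage Spaces pool (names are illustrative).
    # Collect the physical disks that are not yet part of any pool.
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a storage pool on the local Storage Spaces subsystem.
    New-StoragePool -FriendlyName "Pool1" `
        -StorageSubSystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks $disks

    # Carve a mirrored virtual disk out of the pool; it then shows up as an
    # ordinary disk that can be initialized, partitioned, and formatted.
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
        -ResiliencySettingName Mirror -Size 100GB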

1, iSCSI target server

iSCSI stands for Internet Small Computer System Interface. iSCSI uses an Ethernet connection between the server and the storage system and encapsulates SCSI commands and data over TCP/IP to build an IP SAN. iSCSI and the IP SAN are probably the most cost-effective storage solution available today.

In Windows Server 2012, the iSCSI Target Server becomes a role service built into File and Storage Services and integrated into Server Manager, eliminating the need for extra downloads and installations (previous Windows Server versions required downloading a standalone installation package), so deployment and updates are easier. According to the documentation, the iSCSI Target Server supports the following scenarios:

  • Network and diskless boot: Hundreds of diskless servers can be deployed quickly by using boot-capable network adapters or a software boot loader. With differencing virtual disks, you can save up to 90% of the storage used for operating system images. This is useful for large deployments of identical operating system images, such as provisioning a large server room or the servers of a large cluster.
  • Server application storage: Some applications, such as Hyper-V and Exchange Server, require block storage. The iSCSI Target Server provides continuously available block storage for these applications. Because the storage is accessed remotely, block storage at central or branch offices can also be consolidated. This is the most important use of iSCSI.
  • Heterogeneous storage: The iSCSI Target Server supports non-Windows iSCSI initiators, so storage on a Windows Server can be shared in a mixed software environment.
  • Development, test, demonstration, and lab environments: Enabling the iSCSI Target Server role service turns any Windows Server into a network-accessible block storage device. Storage arrays are generally very expensive; in a test environment, a Windows Server computer running the iSCSI Target Server role can stand in for one. This is extremely useful: if you want to test virtualization but have no dedicated storage array, the iSCSI Target Server service can turn any Windows Server into one (see the sketch after this list).
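
The following PowerShell sketch shows roughly what publishing a lab LUN over iSCSI looks like, as referenced in the last bullet above. It is a minimal example under assumptions: the disk path, target name, and initiator IQN are placeholders, and some parameter names (for example the size parameter of New-IscsiVirtualDisk) differ slightly between Windows Server 2012 and later versions.

    # Sketch: publish a disk over iSCSI from a Windows Server 2012 lab machine.
    # Paths, names, and IQNs below are illustrative placeholders.

    # Install the iSCSI Target Server role service (part of File and Storage Services).
    Install-WindowsFeature FS-iSCSITarget-Server

    # Create the virtual disk file that will back the LUN.
    # (Newer versions use -SizeBytes instead of -Size; a differencing disk for
    #  diskless-boot scenarios would reference a parent image via -ParentPath.)
    New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\LabLUN1.vhd" -Size 20GB

    # Create a target and restrict which initiators may connect to it.
    New-IscsiServerTarget -TargetName "LabTarget" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv01.contoso.com"

    # Assign the virtual disk to the target so initiators see it as a LUN.
    Add-IscsiVirtualDiskTargetMapping -TargetName "LabTarget" `
        -Path "C:\iSCSIVirtualDisks\LabLUN1.vhd"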

As you can see from the above, the iSCSI Target Server is still useful in production environments where top performance is not the main concern, and it is an indispensable helper in test environments. In addition, the iSCSI Target Server can be configured as a clustered role in a Windows Server 2012 failover cluster, and MPIO can be configured for higher availability and throughput. Initiating a connection to an iSCSI Target Server is no different from initiating a connection to any other iSCSI device.
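
From the initiator side, the connection indeed looks the same as it would for any other iSCSI device. A minimal PowerShell sketch, assuming the portal address below is a placeholder for your target server and that the Multipath-IO feature is installed if multipath is wanted:

    # Sketch: connect a Windows Server 2012 initiator to an iSCSI target.
    # Make sure the Microsoft iSCSI initiator service runs and starts automatically.
    Set-Service -Name msiscsi -StartupType Automatic
    Start-Service -Name msiscsi

    # Register the target portal and discover the targets it exposes.
    New-IscsiTargetPortal -TargetPortalAddress "192.168.1.10"
    $target = Get-IscsiTarget    # assumes a single target is returned

    # Connect persistently; enable multipath only if MPIO is installed and configured.
    Connect-IscsiTarget -NodeAddress $target.NodeAddress `
        -IsPersistent $true -IsMultipathEnabled $true

    # Optional: let MPIO claim iSCSI devices automatically (requires the Multipath-IO feature).
    # Enable-MSDSMAutomaticClaim -BusType iSCSI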

2, File Server / SMB 3.0

In Windows Server 2012, the file server is another important server role. There are two protocols available for accessing a file server: NFS (Network File System) and SMB (Server Message Block) / CIFS (Common Internet File System). When you configure the file server you will be asked to choose between them. Let's look at their differences:

  • NFS was originally developed by Sun and is one of the most common network file sharing protocols. NFS lets a system share its directories and files with other systems on the network, so users and applications can access files on remote systems as if they were local. NFS is used mostly on Unix and Unix-like systems, but Windows supports it as well.
  • SMB originated at IBM, but Microsoft subsequently took over its support and improvement. SMB was initially a file sharing protocol built on top of NetBIOS; to extend SMB to the Internet and shed the dependence on NetBIOS, Microsoft reorganized the protocol and renamed it CIFS. SMB/CIFS is at the core of the Windows network environment, and many Microsoft network applications build on it, including file sharing between Windows systems. To provide file sharing services to Windows clients from a Unix-like system, you need an implementation of SMB/CIFS such as Samba. (A short example of creating each kind of share follows this list.)
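
The sketch below shows roughly how each kind of share is created with PowerShell on Windows Server 2012. The share names, paths, and account are placeholders, and Server for NFS must be installed before NFS shares can be created.

    # Sketch: create an SMB share and an NFS share (names and paths are illustrative).

    # SMB: the File Server role service ships with File and Storage Services.
    New-Item -ItemType Directory -Path "C:\Shares\VMStore" -Force
    New-SmbShare -Name "VMStore" -Path "C:\Shares\VMStore" -FullAccess "CONTOSO\HyperVAdmins"

    # NFS: install Server for NFS first, then export a folder.
    Install-WindowsFeature FS-NFS-Service
    New-Item -ItemType Directory -Path "C:\Shares\UnixData" -Force
    New-NfsShare -Name "UnixData" -Path "C:\Shares\UnixData"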