Zero storage: the case for small diskless servers

A while back, I wrote about designing a server from scratch with no disk subsystem at all. Some readers didn't fully grasp the key points, so I'll lay them out more clearly here.

When I imagine diskless servers, I'm not thinking of small businesses, but of medium and large enterprises rolling virtualization out across their infrastructure. I'm thinking of data centers on the verge of exceeding their power and cooling limits. I'm thinking of local disks that serve no real purpose. Getting there, though, will take some time.

Virtualization Blades

Two vendors, HP and IBM, have floated the concept of the "virtualization blade." Neither blade has a traditional disk subsystem; instead, they use flash- or SSD-based boot devices for local storage. These offerings look a lot like what I'm exploring here, but they aren't quite it: the chassis they require were designed to handle a full load of fully disk-equipped blades, not these low-power diskless ones.

Now imagine a blade chassis designed from scratch to hold nothing but diskless blades. Beyond that, these blades would have no frame buffer either: no disk, no video, just 10 Gigabit Ethernet and Fibre Channel ports, two low-voltage multi-core CPUs, and as many DIMM slots as will fit. The power and cooling requirements of such a chassis would differ sharply from a traditional one, the blades could be packed as densely as possible, and the power and cooling savings would be substantial.

As for standalone servers, I'd bet you could fit at least three such independent systems into a 1U chassis once the disk subsystem, frame buffer, and their associated ports and accessories are gone. The back of each server would need only three to five ports (two 10 Gigabit Ethernet ports and a serial port; or two 10GbE ports, two Fibre Channel ports, and a serial port), leaving plenty of room for RAM.
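To put that density claim in perspective, here's a quick back-of-the-envelope calculation; the three-per-1U figure comes from the estimate above, and the 42U rack height is my own assumption of a standard full-height rack:

```shell
# Rack density sketch: three diskless systems per 1U of rack space,
# in an assumed standard 42U rack.
servers_per_u=3
rack_units=42
echo $((servers_per_u * rack_units))   # prints 126 single-image hosts per rack
```

Compare that with a conventional 1U-per-server build-out, which tops out at 42 hosts in the same rack, before you even count the power and cooling saved per host.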

Each server would have at most an optional 1.8-inch SSD slot or just an SD card slot, and nothing else, plus a shared redundant power supply feeding three servers (possibly more). A system like this might raise some thermal concerns, but only in very dense configurations; I think it's entirely workable for remote-site servers or servers in a mid-sized data center.

SMB Server Infrastructure

Put one or two of these servers in a 1U chassis, add VMware ESX or ESXi and an iSCSI storage array, and you have a complete SMB server infrastructure in four to five rack units, with no KVM required. How attractive is that?
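As a sketch of how the storage side of such a setup gets wired together, the software iSCSI initiator on an ESXi host can be enabled and pointed at the array from the command line. The syntax below follows the ESXi 5.x `esxcli` namespace, and the adapter name (`vmhba33`) and array address are placeholders for whatever your host and array actually use:

```shell
# Enable the software iSCSI initiator on the ESXi host
esxcli iscsi software set --enabled=true

# Point dynamic (SendTargets) discovery at the iSCSI array
# (vmhba33 and 192.168.1.50 are placeholder values)
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan the adapter so LUNs presented by the array show up as datastores
esxcli storage core adapter rescan --adapter=vmhba33
```

With the LUNs visible, VMFS datastores can be created on them and the diskless hosts never need local spindles at all.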

Now consider today's legacy servers. Most current virtualization infrastructure runs on 1U, 2U, or blade systems configured with local disks, frame buffers, and all the other components normally associated with single-instance servers, even though none of them is actually needed. The VMware ESX or ESXi server console is rarely used, and when it is, it's entirely text-based, so no frame buffer is required. A serial console can meet all of these needs perfectly well, and it's easier to access.

When it comes to fully virtualized infrastructure, we're currently trying to accelerate in Humvees: heavy, overbuilt hardware for the job at hand. The next step is to make the real modifications and go fully virtual.
