Load balancing cluster technology

  
Introduction

At present, in enterprise networks, campus networks, and wide-area networks such as the Internet alike, traffic growth has exceeded even the most optimistic past estimates. With the Internet booming and new applications emerging constantly, even a network built to the optimal configuration of its day soon feels overwhelmed. This is especially true at the core of a network, where data traffic and computational intensity are so high that no single device can cope. The problem becomes how to distribute traffic reasonably among multiple devices that perform the same function, so that no device is overloaded while others sit idle. Load balancing mechanisms emerged to solve this problem.

Load balancing is built on top of the existing network infrastructure. It provides a cheap and effective way to extend server bandwidth, increase throughput, strengthen network data-processing capacity, and improve network flexibility and availability. Its main tasks are: relieving network congestion, serving users from the nearest node to achieve location independence, giving users better access quality, improving server response speed, raising the utilization of servers and other resources, and avoiding single points of failure in critical parts of the network.

Definition

Strictly speaking, load balancing is not "balanced" in the traditional sense; in general it simply spreads load that would otherwise congest in one place across several places. "Load sharing" might be the more intuitive term. Put plainly, load balancing plays the same role in a network as a duty rotation: tasks are assigned to everyone in turn so that no one person is exhausted. Balancing in this sense, however, is usually static, that is, it follows a predetermined "rotation" strategy.
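The static "rotation" strategy can be sketched in a few lines of Python. This is a minimal illustration, not a real balancer; the server names are hypothetical:

```python
import itertools

# Hypothetical backend pool; names are illustrative only.
SERVERS = ["web1", "web2", "web3"]

# A static "duty rotation": a fixed, predetermined schedule that
# ignores each server's actual load.
_rotation = itertools.cycle(SERVERS)

def next_server():
    """Return the server whose 'turn' it is, in fixed order."""
    return next(_rotation)
```

Successive calls simply walk the list and wrap around; no call ever consults a server's real load, which is exactly the limitation the next paragraph addresses.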

Unlike a duty rotation, dynamic load balancing analyzes data packets in real time with various tools, tracks traffic conditions in the network, and allocates tasks accordingly. Structurally, it divides into local load balancing and geographic load balancing (global load balancing): the former balances load across a local server cluster, while the latter balances load across server clusters in different geographic locations and on different networks.

Each node in a service cluster runs a separate copy of the required server program, such as a Web, FTP, Telnet, or e-mail server. For some services, such as those running on a Web server, one copy of the program runs on every host in the cluster, and network load balancing distributes the workload among those hosts. For other services (such as e-mail), only one host handles the workload; network load balancing directs traffic to that host and moves the traffic to another host if the first one fails.

Load balancing technology

Built on the existing network structure, load balancing provides a cheap and effective way to extend server bandwidth, increase throughput, strengthen network data-processing capability, and improve network flexibility and availability. It mainly accomplishes the following tasks:

◆ relieving network congestion and serving users from the nearest node, achieving location independence
◆ providing users with better access quality
◆ improving server response speed
◆ raising the utilization of servers and other resources
◆ avoiding single points of failure in critical parts of the network

Broadly speaking, load balancing can be implemented with a dedicated gateway or load balancer, or through special software and protocols. To apply load balancing to a network, start from the different levels of the network and analyze the specific bottleneck. Working downward from the client application, and referring to the OSI layered model, load-balancing technology can be divided into client-based load balancing, application-server techniques, higher-layer protocol switching, and network-access protocol switching.


Levels of load balancing



◆ Client-based load balancing

In this mode, a special program running on the client collects the server group's operating parameters periodically or on demand: CPU usage, disk I/O, memory, and other dynamic information. According to a selection strategy, it finds the best server that can provide the service and sends the local application's request to it. If the load-information collector finds that a server has failed, it selects another server as an alternative. The whole process is completely transparent to the application, and all the work is handled at run time, so this is a dynamic load-balancing technique.

This technique has a generality problem, however. Every client must install the special collection program, and, to keep operation transparent to the application layer, every application must be modified, through a dynamic link library or an embedded method, so that the client's access requests pass through the collection program and are redirected to a server. For each application, the code is almost redeveloped, and the workload is considerable.

Therefore, this technique applies only to special situations, such as proprietary tasks that need distributed computing power but place few demands on application development. In addition, in Java architectures this mode is often used to implement distributed load balancing: because Java applications run on a virtual machine, an intermediate layer can be designed between the application layer and the virtual machine to handle load balancing.
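The client-side selection step described above can be sketched as follows. This assumes the collector has already gathered a load score per server; the server names, the scoring scale, and the failover behavior are all hypothetical:

```python
# Sketch of the client-side selector: pick the reachable server with
# the lowest load score, falling back to another server on failure.

def best_server(metrics, failed=()):
    """Choose a server for the next request.

    metrics -- dict mapping server name to a load score (lower = less busy),
               as gathered by the hypothetical collection program
    failed  -- servers the collector has marked as down
    """
    candidates = {name: load for name, load in metrics.items()
                  if name not in failed}
    if not candidates:
        raise RuntimeError("no server available")
    # Selection strategy: least-loaded server wins.
    return min(candidates, key=candidates.get)
```

If the chosen server later fails, the collector adds it to `failed` and the next call transparently picks an alternative, which is the failover behavior the paragraph above describes.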



◆ Application-server-based load balancing

If the intermediate layer of client-based load balancing is moved onto a platform of its own, forming a three-tier structure, then the client application needs no special modification: the middle-tier application server balances requests evenly across the corresponding service nodes. A common implementation is reverse proxying. With a reverse proxy server, requests can be forwarded evenly to multiple servers, or cached data can be returned to the client directly. This acceleration mode can improve the access speed of static web pages to a certain extent and thereby achieve load balancing.

The benefit of a reverse proxy is that it combines load balancing with the proxy server's caching, providing a useful performance gain. However, it also has problems. First, a reverse proxy server must be developed for each service, which is not an easy task.

Although the reverse proxy server itself can be highly efficient, it must maintain two connections for every proxied request, one to the outside and one to the inside, so under a particularly high connection load the burden on the proxy server is very large. A reverse proxy can execute a load-balancing policy optimized for the application protocol, directing each request to the most idle internal server. But as the number of concurrent connections grows, the load on the proxy server itself grows with it, and eventually the reverse proxy server becomes the bottleneck of the service.
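A minimal reverse-proxy sketch using only the Python standard library shows both points above: the round-robin forwarding, and the two connections (client-side and backend-side) the proxy must hold per request. The backend addresses and port are placeholders:

```python
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical internal pool; a real deployment would list its own nodes.
BACKENDS = ["http://127.0.0.1:8001", "http://127.0.0.1:8002"]
_pool = itertools.cycle(BACKENDS)

def pick_backend():
    """Round-robin choice of the next internal server."""
    return next(_pool)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Two connections per request: one to the client (this handler)
        # and one to the chosen backend (the urlopen call).
        with urlopen(pick_backend() + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ProxyHandler).serve_forever()
```

Because every request ties up the proxy for the full duration of the backend round trip, this single process is exactly the bottleneck the paragraph above warns about once concurrency grows.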

◆ DNS-based load balancing

NCSA's scalable Web system was the first to use dynamic DNS polling. Multiple addresses are configured for the same name in DNS, so a client querying that name receives one of the addresses; different clients thus reach different servers, achieving load balancing. This technique has been used on many well-known websites, including early Yahoo and 163. Dynamic DNS polling is simple to implement and needs no complicated configuration or management; Unix-like systems running BIND 8.2 or later generally support it, so it is widely used.

DNS load balancing is a simple and effective method, but it has many problems.

First, the domain name server cannot know whether a service node is up. If a node fails, the DNS system will still resolve the name to that node, causing user access to fail.

Second, DNS data refresh is governed by the TTL (Time To Live) field: only after the TTL expires must other DNS servers query the authoritative server again to refresh the address data, at which point they may obtain a different IP address. Therefore, for addresses to be allocated roughly at random, the TTL should be as short as possible, so that DNS servers everywhere refresh their records frequently. But setting the TTL too short increases DNS traffic and causes additional network problems.

Finally, DNS cannot distinguish between servers or reflect a server's current operating state. With DNS load balancing, one can only try to ensure that different client computers get different addresses evenly. For example, user A may browse only a few web pages while user B performs heavy downloads. Since the domain name system has no load policy beyond simple round-robin, it can easily send user A's request to a lightly loaded site and user B's request to one that is already heavily loaded. Dynamic DNS polling is therefore not ideal for dynamic balancing.

◆ Higher-layer protocol content switching

In addition to the methods above, some protocols carry built-in support for load balancing: URL switching, or layer-7 switching, offers a high-level way to control access traffic. Web content switching examines all the HTTP headers and makes load-balancing decisions based on the information in them. For example, that information can determine how to serve a personal home page versus image data, and the HTTP protocol's redirection capability can be used as well.

HTTP runs on top of a TCP connection. The client connects to the server's service on the fixed, well-known TCP port 80 and then sends HTTP requests over that connection. Layer-7 protocol switching controls load according to a content policy, not the TCP port number, so access traffic is not tied to a single server by the port alone.

To distribute incoming requests among multiple servers, the load-balancing device must first terminate the TCP connection itself and then examine the HTTP request to decide how to balance it. When a website's hit rate reaches hundreds or even thousands of requests per second, the delays of TCP connection setup, HTTP header analysis, and subsequent processing become very significant, and every effort must be made to improve the performance of these parts.

The HTTP request header carries a great deal of information useful for load balancing. From it we can obtain the URL and the web page the client requests. With this information, the load-balancing device can direct all image requests to an image server, or, based on a URL's database-query content, route requests that call CGI programs to a dedicated high-performance database server.

A network administrator familiar with content-switching technology can use the cookie field of the HTTP header to switch Web content and improve service for particular customers; if patterns can be found in the HTTP requests, they too can drive various decisions. Beyond the problem of TCP connection tables, finding the relevant HTTP header information quickly and making load-balancing decisions on it is the key issue affecting the performance of Web content switching. If the Web servers have been optimized for special functions such as image serving, SSL sessions, or database transactions, traffic control at this level can improve network performance considerably.
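A layer-7 switching decision of the kind described above can be sketched as a pure routing function over the request line and cookies. The pool names, file extensions, and the cookie rule are hypothetical policy choices, not part of any standard:

```python
# Sketch of a layer-7 (URL/content) switching decision: the balancer
# inspects the HTTP request before picking a backend pool.

IMAGE_POOL = "image-servers"
DB_POOL = "db-servers"
WEB_POOL = "web-servers"

def choose_pool(path, cookies=None):
    """Route by content: images to the image pool, CGI/database
    queries to the database pool, everything else to the web pool."""
    cookies = cookies or {}
    if path.endswith((".gif", ".jpg", ".png")):
        return IMAGE_POOL
    if path.startswith("/cgi-bin/") or "?" in path:
        return DB_POOL
    if cookies.get("tier") == "premium":   # cookie-based switching
        return WEB_POOL + "-premium"
    return WEB_POOL
```

Note that none of these decisions could be made at layer 4: the TCP port is 80 in every case, and only the parsed HTTP content distinguishes the requests.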

◆ Network-access protocol switching

Large networks are generally built from many specialized devices, including firewalls, routers, layer-3/4 switches, load-balancing devices, cache servers, Web servers, and so on. How to combine these devices organically is a key issue that directly affects network performance. Many switches now provide layer-4 switching: they present a single consistent IP address to the outside and map it to multiple internal IP addresses. For each TCP or UDP connection request, the switch dynamically selects an internal address according to the port number and a configured policy, and forwards the packets to that address.
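The layer-4 mapping just described can be modeled as a flow table: each connection, identified by its address/port tuple, is pinned to the backend chosen when its first packet arrives. All addresses here are illustrative, and the selection policy is plain round-robin for simplicity:

```python
# Toy layer-4 switch: one virtual IP is mapped to several internal
# addresses; each flow sticks to the backend picked for its first packet.

class L4Switch:
    def __init__(self, backends):
        self.backends = backends
        self.flows = {}      # (src_ip, src_port, proto, dst_port) -> backend
        self._next = 0

    def forward(self, src_ip, src_port, proto, dst_port):
        """Return the internal address this flow's packets are sent to."""
        key = (src_ip, src_port, proto, dst_port)
        if key not in self.flows:            # new connection: apply policy
            self.flows[key] = self.backends[self._next % len(self.backends)]
            self._next += 1
        return self.flows[key]
```

Keeping the flow table is what makes the switch transparent to TCP: every packet of an established connection must keep reaching the same internal server, while new connections are spread across the pool.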