Using Beowulf to Transform Common PCs into Clusters

Today, Linux has a major influence in the IT industry. Besides being free, efficient, and reliable, it is a very powerful tool for computer scientists and for researchers who need to do large amounts of computing. Since Thomas Sterling and Donald Becker launched Beowulf cluster computing at NASA's Goddard Space Flight Center, the use of Linux for high-performance parallel computing has grown steadily. Today, large numbers of clusters built from ordinary PCs appear in laboratories, industrial technology centers, universities, and even small colleges.

If someone asks whether a scientific computing problem can be solved with some spare computing resources, the answer is: of course. With a Beowulf cluster, we can combine many ordinary PCs into a single system to solve the problems we face, and the price advantage of such a cluster is unmatched by traditional parallel computers.

How to create a Beowulf cluster

In fact, anyone with existing or unused PCs can build a parallel system of their own on which to practice parallel programming or parallel computing. In a computer lab, the PCs can be set up to dual-boot (into Windows or Linux as needed) so that the machines serve two purposes. Machines that are no longer used for anything else can be turned into a dedicated parallel computing system, as was done with the Stone SouperComputer.

No two Beowulf clusters are identical. In fact, the hardware and software configuration of such a system is so flexible that it can easily be customized into many different combinations. Although every Beowulf cluster is different, with a configuration driven by the needs of its applications, some basic requirements are the same. Let's look at the basic issues to consider when creating a cluster.

Minimum Requirements for Creating a Cluster

To create a cluster, each node should contain at least an Intel 486 CPU and motherboard. Intel 386 processors will work, but their performance will not be worth the effort. Memory requirements depend on the target application, but each node needs at least 16MB of memory, and most applications require 32MB or more per node. By using centralized disk space, nodes can boot from a floppy disk, a small hard disk, or a network file system; after booting, a node can access its own root partition over the network, usually via NFS (Network File System). This configuration works well in an environment with high bandwidth and a high-performance server. For better performance, install the operating system and a swap partition on a local disk at each node, fetching data over the network. Each node should have at least 200MB of disk space for operating system components and swap space, and 400MB or more reserved for program execution. Each node must contain at least one network card (preferably a high-speed one). Finally, each node requires a graphics card, a hard drive, and a power supply; a keyboard and monitor are needed only for system installation and configuration.


Note that all hardware selected must have Linux drivers or corresponding kernel modules. Unless the hardware is very old, this is generally not a problem. For the primary node that manages the entire cluster, it is best to install an X server for convenience. If a particular component causes trouble during installation, or has no driver, you can go to the forums for help.

Network Connections

If possible, the nodes should be on their own LAN with their own hub; this keeps network communication within the cluster unimpeded. The first, or primary, node in the cluster should have two network cards, one connected to the internal network and the other to the public network; this is especially useful for user logins and file transfers. On the internal network, use IP addresses that are not used on the Internet. In general, the easiest choice is the class A 10.0.0.0 network, since those addresses are reserved for private networks that are not routed on the Internet. In this example, the /etc/hosts file for each node looks like this:

10.0.0.1 node1
10.0.0.2 node2
10.0.0.3 node3
10.0.0.4 node4

The /etc/hosts.equiv file for each node should look like this:

node1
node2
node3
node4

For node number 2, the ifcfg-eth0 network configuration file under Red Hat Linux looks like this:

DEVICE=eth0
IPADDR=10.0.0.2
NETMASK=255.0.0.0
NETWORK=10.0.0.0
BROADCAST=10.255.255.255
ONBOOT=yes

In addition, the nodes often need DNS, especially on an internal network where names and addresses change frequently. DNS can run on the first node to provide name/address resolution for the nodes on the internal network.
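As a minimal sketch, assuming the name server runs on node1 (10.0.0.1) and the internal domain is called beowulf (a hypothetical name, not part of the configuration above), each node's /etc/resolv.conf would simply point at the first node:

search beowulf
nameserver 10.0.0.1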

Local Storage

Loading an operating system onto the nodes requires some storage configuration decisions up front when creating a Beowulf cluster. Because changing these decisions later can mean reinstalling every node, they deserve careful thought. While most Linux-based Beowulf clusters run the Red Hat Linux distribution, virtually every Linux distribution supports basic clustering. Red Hat is easy to install from CD, or over the network from the first node of the cluster (provided there is already a copy of the distribution on that node). In practice, many people find it better to load the operating system onto each node via FTP from the master node than to mount the root partition via NFS. This approach avoids unnecessary network traffic and reserves bandwidth for the messages applications exchange while running.
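As an illustrative sketch, assuming the copy of the distribution lives in /mnt/redhat on the first node (a hypothetical path), one way to make it available to the installer over the internal network is an NFS export restricted to the cluster's addresses, added to /etc/exports:

/mnt/redhat 10.0.0.0/255.0.0.0(ro)

On systems with nfs-utils, running exportfs -a then activates the export; the Red Hat installer can equally well pull the same tree over FTP, as described above.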

The Red Hat Linux runtime environment requires only about 100MB of disk space per node. In practice, however, each node also needs a compiler and some other tools, so in this configuration the operating system takes approximately 175MB of disk space per node. Although some clusters put their swap partitions on a common file system, a dedicated swap partition on a local disk is the more efficient option. As a rule of thumb, a node's swap space should be twice the size of its memory; when memory is larger than 64MB, the swap space should equal the memory size. In practice, for nodes with 64MB to 128MB of memory, we usually set the swap partition to 128MB. So if a node has 32MB of memory and two hard disks, we should load the Linux system onto the main drive and use the second disk for swap space (64MB) and local run space (138MB).
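As a minimal sketch, assuming the second disk is /dev/hdb and a partition /dev/hdb1 has been set aside for swap (the device names are assumptions for illustration), the swap space is initialized and enabled like this:

# Write a swap signature to the partition, then enable it
mkswap /dev/hdb1
swapon /dev/hdb1

Adding a line such as '/dev/hdb1 swap swap defaults 0 0' to /etc/fstab makes the swap space come up at every boot.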

Cluster Management

System administration and maintenance can be very tedious, especially for large clusters. Fortunately, tools and scripts are available online to simplify these tasks. For example, each node's clock and system files (/etc/passwd, /etc/group, /etc/hosts, /etc/hosts.equiv, and so on) must be kept synchronized with the rest of the cluster, and a simple script executed periodically by cron can handle this synchronization.
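As a minimal sketch of such a script, assuming it runs on node1 and that the /etc/hosts.equiv file shown earlier already allows passwordless rcp between the nodes:

#!/bin/sh
# Push the master node's copies of common system files to every slave node.
for node in node2 node3 node4
do
    for file in /etc/passwd /etc/group /etc/hosts /etc/hosts.equiv
    do
        rcp $file ${node}:$file
    done
done

A crontab entry such as '0 * * * * /usr/local/sbin/syncfiles' (the script path is hypothetical) would run it hourly; the clocks can be kept in step the same way with a tool such as rdate.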

Once all the nodes have been loaded and configured, we can develop and design parallel applications to take full advantage of the computing power of the new system.

Developing Parallel Applications for Cluster Computing

Under Linux, we can use commercial or free compilers. The GCC C, g++, and FORTRAN (g77) compilers are included with most Linux distributions. The C and C++ compilers are already very good, and the FORTRAN compiler is constantly evolving. Commercial compilers are available from companies such as Absoft, the Portland Group, and The Numerical Algorithms Group. Properly configured, some commercial FORTRAN-90 compilers can parallelize computations automatically. In general, developing parallel code requires PVM (Parallel Virtual Machine), MPI (Message Passing Interface), or another communication library for explicit message passing between processors. Both PVM and MPI are free, and they let a computation exchange messages between nodes through simple library calls.
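As a minimal sketch of what such library calls look like, here is a tiny MPI program in C in which every process reports its rank; it is a generic illustration, not code from any particular cluster:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in all?   */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut the MPI runtime down    */
    return 0;
}

With the MPICH implementation, for example, such a program is compiled with mpicc and launched across the nodes with mpirun -np 4.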

Of course, not all computing tasks are suited to parallel computing. In general, taking full advantage of parallelism requires some development work on the task itself. Many scientific problems can be subdivided, that is, broken down into relatively independent modules that can be processed on separate nodes. For example, image processing tasks can usually be subdivided so that each node processes one part of the image. The approach works best when each part can be processed independently, that is, when processing one part of the image requires no information from the other parts.

For parallel computing, the most dangerous pitfall is turning a computation problem into a communication problem, whether in existing parallel code or in newly developed code. This typically happens when a task is subdivided too finely, so that the time the nodes spend transferring data for synchronization exceeds the time the CPUs spend computing. In that case, using fewer nodes may give shorter run times and more efficient use of resources. In other words, each parallel application must be tuned and optimized by balancing the computation done on each node against the communication between nodes.

Finally, when developing parallel algorithms, if the nodes of the cluster differ from one another, this must be taken fully into account. The CPU speeds of the nodes are critical to a running parallel application: in a cluster of mixed configurations, distributing tasks evenly forces the faster CPUs to sit idle waiting for the slower CPUs to finish their share, which is clearly unreasonable. A well-designed algorithm can handle this situation, for instance by handing out work in proportion to each node's speed. And whatever algorithm is used, the communication overhead must always be fully considered.

Parallel processing can be organized in many ways, but the master/slave arrangement is the easiest to understand and to program. In this model, one node acts as the master and the others act as slaves. The master node usually decides how to split the task and sends out the command messages, while the slave nodes only process the tasks assigned to them and report back to the master when each task is complete.
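As a sketch of that structure, here is a minimal MPI master/slave skeleton in C. It is an illustration under assumed details, not code from the original article: the "work unit" is just an integer, and the "computation" is a stand-in for real application work.

#include <stdio.h>
#include <mpi.h>

#define NTASKS 16   /* number of work units; an arbitrary illustrative value */

int main(int argc, char *argv[])
{
    int rank, size, task, result;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: needs at least one slave, and NTASKS >= size - 1. */
        int done = 0;
        for (task = 0; task < NTASKS; task++) {
            if (task < size - 1) {
                /* Seed every slave with an initial task. */
                MPI_Send(&task, 1, MPI_INT, task + 1, 0, MPI_COMM_WORLD);
            } else {
                /* Collect a result, then give that slave the next task. */
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &status);
                done++;
                MPI_Send(&task, 1, MPI_INT, status.MPI_SOURCE, 0,
                         MPI_COMM_WORLD);
            }
        }
        /* Collect the remaining results and send each slave a stop signal. */
        while (done < NTASKS) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            done++;
            task = -1;
            MPI_Send(&task, 1, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD);
        }
    } else {
        /* Slave: process tasks until the stop signal (-1) arrives. */
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            if (task < 0)
                break;
            result = task * task;   /* stand-in for the real computation */
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}

The dynamic hand-out of tasks is what lets this pattern tolerate nodes of different speeds: a fast slave simply comes back for new work more often than a slow one.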

Summary

In fact, there are no strict rules for developing parallel code; it should be guided by the actual situation. The prerequisite for optimizing the hardware configuration and the algorithm is knowing the details of the application to be run. In a cluster whose nodes differ in configuration, the balance between per-node load and inter-node communication depends on the specifics of the hardware: in an environment with fast communication, tasks can be divided more finely, and vice versa.

It is fair to say that the "Beowulf movement" is popularizing parallel computing. Parallel code developed on a Beowulf system with the standard message-passing libraries can run on commercial-grade supercomputers without any changes, so a Beowulf cluster can serve as an entry point, with a transition to the big machines when needed. Moreover, a cheap, versatile cluster means a parallel computing environment can be dedicated to specific tasks, whereas large commercial supercomputers are far too expensive to devote to a single application. Clearly, as more and more parallel environments are applied to real-world work, their use will spread further into every field.
