
OpenMosix Cluster

No longer available, please refer to the Batch queuing System.

What is a cluster

To run their number-crunching simulations, users of I.P.G. can use the OpenMosix Cluster, a group of high-performance computers that are managed as one big computer.
From the point of view of a user, the OpenMosix cluster works only a little differently from his or her workstation. The important thing to remember is that the cluster is not a computer with the biggest and most powerful CPU you can imagine; it is simply a computer with a lot of standard CPUs (26 in our current configuration).

What does this mean? It means you can't launch just one program and hope that its execution will be faster than on your workstation (well, in some cases it is). In order to take full advantage of the power of all the CPUs of the cluster, you need to parallelize your programs/simulations so that you can launch many instances of the same program at the same time, each processing a different range of numbers. This way your simulation is solved in less time.

Example?

As an example, think of a program that has to process the numbers from 0 to 1000 (for whatever reason). You can write a program that does exactly this, but in that case you can use just 1 CPU (1 CPU = 1 program running at a time); such a program is fine for your single-processor workstation. If instead you subdivide the range of numbers into smaller parts (0-99, 100-199, 200-299, …), you can use the same program, but launch it 10 times, use 10 CPUs and solve the problem in roughly 1/10th of the time needed for the big loop (OK, the time needed is slightly more, but you get the big idea, I hope).
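
A minimal sketch of this idea follows. The script name (worker.py), the process() function and the exact ranges are only illustrative, not something provided by the cluster; each instance receives its own sub-range on the command line.

<code python>
#!/usr/bin/env python
# worker.py -- hypothetical example: process one sub-range of numbers.
# Launch one instance per sub-range, e.g.
#   python worker.py 0 99
#   python worker.py 100 199
#   ... and so on up to 900 999
import sys

def process(n):
    # Placeholder for the real per-number computation.
    return n * n

def main():
    start, end = int(sys.argv[1]), int(sys.argv[2])
    total = 0
    for n in range(start, end + 1):
        total += process(n)
    # Partial result of this instance; redirect it to a file when launching.
    print("range %d-%d: %d" % (start, end, total))

if __name__ == "__main__":
    main()
</code>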

HowTo Use OpenMosix

If you look at this OpenMosix schema you can see that the cluster “lives” in a different world with respect to your network. You can't connect directly to all the nodes of the cluster; you have access to just 2 machines, thor.epfl.ch and sif.epfl.ch. This is not a problem: you don't need to connect to all nodes of the cluster in order to use it. It's sufficient to launch your program on one node, and the system automatically moves your process to the fastest or least loaded node of the cluster.
Differently from the other nodes of the cluster, thor and sif can access the I.P.G. network, and thus the file server where your home directory is stored.
You must connect to these nodes using an ssh session in order to use the cluster, and it's better if you redirect the output of your program (if any) to a file, so you don't lose the results if the terminal window is closed.
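
As a hedged illustration (the launcher script and the part files are hypothetical, and reuse the worker.py sketch from above), you could start all the instances from a single ssh session on thor or sif, with each instance's output redirected to its own file; OpenMosix then spreads the processes over the cluster by itself:

<code python>
#!/usr/bin/env python
# launch.py -- hypothetical launcher, run once on thor or sif.
# Starts one worker per sub-range and redirects its output to a file,
# so no result is lost if the terminal window is closed.
import subprocess

CHUNK = 100              # size of each sub-range
jobs = []
for i in range(10):      # 10 instances -> numbers 0..999
    start = i * CHUNK
    end = start + CHUNK - 1
    out = open("part%02d.txt" % i, "w")   # output file in your homedir
    p = subprocess.Popen(["python", "worker.py", str(start), str(end)],
                         stdout=out)
    jobs.append((p, out))

for p, out in jobs:      # wait for every instance to finish
    p.wait()
    out.close()
print("all instances finished")
</code>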

Input/Output

OK, the output of the programs you launch is printed on the screen, or in a file if you redirect the output stream. But what happens if the multiple instances of the program you launched need to read from or write to a file in your home directory? Does this mean that only thor and sif can execute the program, because only they can “see” your home directory?

Absolutely not. Your programs will be executed on all available nodes; only the read/write operations are performed by thor or sif. When you launch the programs from one or the other, you automatically tell the system which node has access to your home directory. In our case you can launch the programs only from thor or sif, since direct access to the other nodes is precluded.
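
As a minimal sketch of what this means in practice (the file name pattern is hypothetical), a worker can simply open a result file in your home directory as if it were local; if the process has been migrated to an internal node, the actual read/write operations still go through the node you launched it from:

<code python>
#!/usr/bin/env python
# Hypothetical variant of worker.py that writes its results straight
# to a file in your homedir.  Even if this process migrates to an
# internal node, the writes are carried out by thor or sif for it.
import os, sys

start, end = int(sys.argv[1]), int(sys.argv[2])
result_path = os.path.expanduser("~/results_%d_%d.txt" % (start, end))
f = open(result_path, "w")
for n in range(start, end + 1):
    f.write("%d %d\n" % (n, n * n))   # placeholder computation
f.close()
</code>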

Control of the Cluster

In order to monitor the status of the cluster's nodes, you can use different programs that show the current load of the nodes or the operation of the individual nodes.

mosmon

If you launch mosmon from a terminal window, you see a graphical representation of the current load of the nodes, and also whether some nodes are out of order (there are some). This program has different views you can use in order to better monitor the situation; use the online help to find the most important options.

mtop

Like the standard top program, mtop displays the load of the servers and the CPU power assigned to every process. Differently from the standard top program, mtop displays the load and the processes running on all nodes of the cluster.

mosctl

mosctl is used mainly for administration purposes. Administrators can use it to change the configuration of the cluster without stopping (hopefully) the work of the nodes. The system automatically adapts to the new configuration.

Some Numbers

The I.P.G. cluster uses 12 SMP computers for a total of 28 CPUs. The speed of the nodes varies from 1 GHz for the older models to 3.06 GHz for the newest. In all cases the CPUs are 32-bit. As with the CPUs, the amount of memory installed differs among the nodes. The two frontend nodes thor and sif have 6 Gbyte of RAM each, while the internal nodes have less RAM. The table below gives the details and shows the “openmosix speed index”. This index is just a number used to compare the relative speed of the nodes with respect to an Intel Pentium III CPU @ 1 GHz.


Architecture  Data bits  CPU          # CPU  Freq. GHz  Gbyte RAM  Mosix Index  Name
x86           32         Xeon         2      3.06       6          30000        thor/mosix01
x86           32         Xeon         2      3.06       6          30000        sif/mosix02
x86           32         Xeon HT      4      3.06       4          36118        mosix03
                                                                                 mosix04
x86           32         Xeon         2      3.06       2          45986        mosix05
x86           32         Pentium III  2      1          1          15049        mosix06
x86           32         Xeon HT      4      3.06       2          45986        mosix07
x86           32         Pentium III  2      1          1          15049        mosix08
                                                                                 mosix09
x86           32         Xeon         2      1.8        1          26743        mosix10
x86           32         Xeon         2      2.8        1          42039        mosix11
x86           32         Xeon         2      1.8        1          26743        mosix12
x86           32         Xeon HT      4      3.06       2          46085        mosix13
x86           32         Xeon HT      4      3.06       2          46085        mosix14




<note> Better to know
In the Intel x86 architecture with 32 data bits, the CPU has access to a maximum of 4 Gbyte of RAM (the CPU can address only 2^32 bytes). In the implementation of the architecture, these 4 Gbyte are subdivided into two parts: 2 Gbyte are available for the program and 2 Gbyte are reserved for the system. This means that even on thor and sif, where 6 Gbyte of RAM are installed, every program launched can use at most 2 Gbyte of RAM for its work. If you need to address more RAM, you have to use lthcserv6.epfl.ch or lthcserv7.epfl.ch. These 2 servers have a 64-bit x86 architecture, both in hardware and in software: 64 bits means that the CPUs of these servers can address 2^64 bytes of memory. Currently these servers have 16 Gbyte of RAM installed each. </note>
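
As a quick check of the addressing arithmetic in the note above:

<code python>
# Bytes addressable with 32-bit and 64-bit pointers.
print(2 ** 32)                   # 4294967296 bytes
print(2 ** 32 // (1024 ** 3))    # 4 Gbyte  -> the 32-bit limit
print(2 ** 64 // (1024 ** 3))    # 17179869184 Gbyte reachable with 64 bits
</code>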
