S.L.U.R.M. Workload Manager for Dummies

First things you need to know:
A SLURM script works as if you were typing the commands into a shell. Therefore you must include in the script every command you would normally run in the shell before/after launching your program (for example, if after logging in you need to change directory before launching the job, you must do the same in the batch script).

Every instruction line for the queue manager starts with #SBATCH, so:
#SBATCH …… : this is a directive for the cluster
##SBATCH ….. : this is a comment
# SBATCH …… : this is a comment
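To make the distinction concrete, here is a minimal sketch (the --time value is only an example): bash itself treats all three variants as plain comments, while sbatch acts only on the first one.

```shell
#!/bin/bash
# directive: sbatch reads it, the shell sees only a comment
#SBATCH --time=00:10:00
# disabled directive: the extra '#' makes sbatch skip it too
##SBATCH --time=02:00:00
# not a directive: the space after '#' makes it a plain comment
# SBATCH --time=02:00:00

msg="only the first SBATCH line above is an active directive"
echo "$msg"
```

Doubling the `#` is a handy way to switch a directive off temporarily without deleting it.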

The mandatory lines

directives that you must always include in the scripts are:

  1. Your email address: the official EPFL address or any other valid (worldwide-reachable) email address. This address must always be present, whether or not you instruct the system to send email messages.
  2. How long your job is allowed to run (if the job exceeds this limit, the cluster manager will kill it). The minimum is 1 minute and there is no maximum limit.
  3. How much memory (RAM) your job will use. Remember that if your job uses more memory than the limit you set here, the cluster manager will kill the job. The minimum is 512 MB; currently (as of Feb 2020) the maximum is 250 GB.
  4. How many nodes (computers) you're going to use with your script.
  5. How many cores/CPUs must be reserved for your job. If you don't include this parameter, only one core/CPU will be assigned to your job and you cannot run anything more than a single-threaded job.
  6. The name of the queue/partition you want to use: currently only slurm-cluster, slurm-gpu and slurm-ws are available.
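Putting the six mandatory items together, a minimal script header could look like this (the email address, limits and job layout are placeholder examples, not values to copy blindly):

```shell
#!/bin/bash
#SBATCH --mail-user=xxx.yyy@epfl.ch   # 1. a valid email address
#SBATCH --time=01:00:00               # 2. maximum run time (1 hour here)
#SBATCH --mem=512M                    # 3. maximum memory (the 512 MB minimum)
#SBATCH --nodes=1                     # 4. number of nodes (computers)
#SBATCH --cpus-per-task=1             # 5. number of cores/CPUs
#SBATCH --partition=slurm-cluster     # 6. queue/partition to use
```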

partitions (a.k.a. queues)

If you have used other types of cluster managers, you will already know the term queue to identify the type of nodes/jobs you want to use inside the cluster. In S.L.U.R.M. notation, queues are called partitions. The two terms indicate the same entity and usage. The 'partitions' slurm-cluster, slurm-gpu and slurm-ws thus refer to the kind of computer you want to use:

  1. slurm-cluster: this includes all the real nodes dedicated to pure number crunching; most of the time you want to use this queue/partition.
  2. slurm-gpu: this includes computers that have a GPU (mostly NVIDIA) that can be used for HPC.
  3. slurm-ws: this includes all the workstations sitting under your desks; programs that run for a very short time (1 hour tops) can take advantage of the workstation CPUs not being used by the users.
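If you want to check which partitions exist on your cluster and what their time limits are, the standard SLURM command sinfo lists them (the exact columns and names depend on the cluster configuration):

```shell
# list all partitions with their state, time limit and nodes
sinfo
# same information, summarized to one line per partition
sinfo -s
```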

The beginning of your script will be:

# your email address
#SBATCH --mail-user=xxx.yyy@epfl.ch # or any other email address that everyone, around the world, can use to send email messages to me
# how much time this process must run (hours:minutes:seconds)? 4 hours for this example
#SBATCH --time=04:00:00
# how much memory it needs ? 1 GB (1024MB) for the example. Different units can be specified using the suffix [K|M|G|T]
#SBATCH --mem=1G

If your job is running a simulation that is multi-threaded (or parallel), you can use more than one CPU/core by indicating the number of cores you want with:

# Number of cores needed by the application (8 in this example)
#SBATCH --cpus-per-task=8
#and the number of nodes (physical computers) your program is supposed to use (you need at least 1)
#SBATCH --nodes=1

After this prologue, you can add directives instructing the system about the messages you want to receive:

# this line instructs SLURM to send an email when the job starts and finishes
#SBATCH --mail-type=begin,end

You can also tell SLURM where to put the output and error messages.
By default, SLURM puts both output and error messages into a single file named slurm-<jobID>.out, created in the directory the job was submitted from.

# these lines instruct the cluster to use the job number (%J) to identify the error and output files
# (note: sbatch does not expand shell variables such as $HOME in #SBATCH lines, so write the path explicitly)
#SBATCH --error=/home/<username>/name.of.the.file.%J.err
#SBATCH --output=/home/<username>/name.of.the.file.%J.out

And then you might want to assign a name to your job, so you will know what the cluster is doing for you when you look at the list of running jobs (using the command squeue).
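Once the job is submitted, the standard squeue command shows it in the queue, and the job name you chose appears in the NAME column. Two common invocations (your username is taken from $USER; 12345 is a placeholder job ID):

```shell
# show only your own jobs
squeue -u "$USER"
# show a specific job by its ID
squeue -j 12345
```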

#Name of the job
#SBATCH --job-name=dummy-test

Another mandatory parameter is the queue (called partition in SLURM terminology) you want to use; to start, always use the queue slurm-cluster:

# queue to be used
#SBATCH --partition=slurm-cluster

If you want to use a particular feature (GPU, TensorFlow, Mathematica, MATLAB, etc.), then you have to inform SLURM about your needs in terms of the specific features required and their number. In this case, SLURM will launch your program only on the nodes that have the requested feature(s).

# require this feature:
#SBATCH --gres=gpu:1

Now you can start the shell script commands:

# go to the directory where your job is
cd $HOME/.....

# print the name of the machine this job is running on and when the
# process starts/finishes (useful information during tests and initial debugging)
echo "executed on $HOSTNAME"
echo "execution started at:  $(date)"

It's better to use the command srun to launch the executable (just prefix srun to your normal command line), so SLURM can better manage the scheduling of jobs. The use of srun is also mandatory in the case of parallel computing.

# replace with the program and parameters you want to launch
srun ./name_of_the_program its_parameters

echo "execution finished at: $(date)"

Once we attach all the lines from above we'll have a script (named dummy.slurm) that will look like this:

#!/bin/bash 

#SBATCH --job-name=dummy-test
#SBATCH --partition=slurm-cluster
#SBATCH --mail-user=dummy.epfl@epfl.ch
#SBATCH --time=04:00:00
#SBATCH --mem=1024M
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=begin
#SBATCH --mail-type=end
#SBATCH --gres=gpu:1

cd $HOME/.....
echo "executed on $HOSTNAME"
echo "execution started at:  $(date)"

# replace with the program and parameters you want to launch
srun ./name_of_the_program its_parameters

echo "execution finished at: $(date)"

Now you just need to tell the cluster system that you want to run this job. But how do you do that? Pretty simple: use the command sbatch, followed by the name of the script you just created. If you saved the previous example script as dummy.slurm in the current directory, launch this command from the shell:

$ sbatch dummy.slurm
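sbatch replies with a line such as `Submitted batch job 12345`, where 12345 is the job ID. With that ID you can follow or cancel the job using the standard SLURM commands (12345 below is a placeholder):

```shell
# check whether the job is still pending or running
squeue -j 12345
# cancel the job if you change your mind
scancel 12345
```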

After all this work, you just need to relax and wait for the email messages from the queuing manager telling you about the success or failure of your submission. At that point, return to the directory where the output files are saved and check the results.
If you browse the documentation we have on Batch Queuing System, you'll find examples of how to use MATLAB or Mathematica and some explanations of the directives and commands available for the queuing system.

slurm-dummies.txt · Last modified: 2023/10/09 13:17 by admin