====== S.L.U.R.M. ======
====== Workload Manager ======
  
===== for Dummies =====
  
**The first things you need to know:** \\
Using a SLURM script is like typing the commands in a shell. Therefore you must include in the script all the commands that you would use in the shell before/after running your program (if after login you need to change directory to launch the job, you must do the same in the batch script).\\
\\
Every instruction line for the queue manager starts with #SBATCH, so\\
#SBATCH...... : this is a directive for the queue manager\\
# SBATCH...... : this is a comment\\
  
==== The mandatory lines ====
The directives that you must **always** include in the scripts are:
  - Your email address: the official EPFL address or another valid (worldwide reachable) email address. **This address must always be present, whether or not you instruct the system to send email messages**.
  - How much time your job needs to run (if the job runs over this limit, the cluster manager will kill it). The minimum is 1 minute and there is no maximum limit.
  - How much memory (RAM) your job will use. Please remember that if your job uses more memory than the limit you set here, the cluster manager will kill the job. The minimum is 512 MByte; currently (as of Feb 2020) the maximum is 250 GByte.
  - How many nodes (computers) you're going to use with your script.
  - How many cores/CPUs must be reserved for your job. If you don't include this parameter, only one core/CPU will be assigned to your job and you cannot run more than a single-threaded job.
  - **The name of the queue/partition** you want to use: currently only ''slurm-cluster'', ''slurm-gpu'' and ''slurm-ws'' are available.

==== Partitions (a.k.a. queues) ====
If you have used other types of cluster management, you will already know the term ''queue'', used to identify the type of nodes/jobs you want to run inside the cluster. In S.L.U.R.M. notation, queues are called **partitions**; the two terms indicate the same entity and usage.
The partitions ''slurm-cluster'', ''slurm-gpu'' and ''slurm-ws'' then refer to the kind of computer you want to use:
  - slurm-cluster: this includes all the real nodes dedicated to pure number crunching; most of the time you want to use this queue/partition.
  - slurm-gpu: this includes computers that have a GPU (mostly NVIDIA) that can be used for HPC.
  - slurm-ws: this includes all the workstations sitting under your desks; programs that run for a very short time (1 hour at most) can take advantage of the workstation CPUs not used by their owners.
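
If you want to see which partitions exist and which nodes belong to them, you can ask SLURM directly with the command ''sinfo'' (the exact node names and states you will see depend on the cluster, so take the lines below only as a sketch):
<code>
# list all the partitions, their state and their nodes
sinfo

# show one line per node for a single partition
sinfo --partition=slurm-cluster -N
</code>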
  
The beginning of your script will be:
<code>
#!/bin/bash

# your email address: the official EPFL one or any other email address that everyone,
# around the world, can use to send email messages to you
#SBATCH --mail-user=xxx.yyy@epfl.ch
# how much time this process must run (hours:minutes:seconds)? 4 hours for this example
#SBATCH --time=04:00:00
# how much memory does it need? 1 GB (1024 MB) for this example. Different units can be specified using the suffix [K|M|G|T]
#SBATCH --mem=1G
</code>
If your job runs a simulation that is multi-threaded (or parallel), you can use more than one CPU/core by indicating the number of cores you want with:
<code>
# Number of cores needed by the application (8 in this example)
#SBATCH --cpus-per-task=8
# and the number of nodes (physical computers) your program is supposed to use (you need at least 1)
#SBATCH --nodes=1
</code>
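
If your program creates its threads with OpenMP (this is an assumption about your code; check how it parallelizes), a common pattern is to let the thread count follow the value you requested with ''--cpus-per-task'', using the environment variable that SLURM sets for the job:
<code>
# use as many OpenMP threads as the CPUs reserved by --cpus-per-task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
</code>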
  
  
After this //prolog// you can add directives to instruct the system about the messages you
want to receive:
  
<code>
# this line instructs SLURM to send a mail when the job starts and when it finishes
#SBATCH --mail-type=begin,end
</code>
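
''--mail-type'' also accepts other values if you want fewer (or more) messages; for example (the two lines below are alternatives, pick only one):
<code>
# send a mail only if the job fails
#SBATCH --mail-type=fail
# or: send a mail for every relevant event (start, end, failure, ...)
#SBATCH --mail-type=all
</code>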
  
You can also tell SLURM where you want to put the output and error messages.\\
By default the cluster will put both the output and the error messages in a single file named ''slurm-<jobID>.out'', created in the directory from which you submitted the job.
  
<code>
# these lines instruct the cluster to use the job number (%J) to identify the error and output files
# note: environment variables such as $HOME are not expanded inside #SBATCH lines,
# so give a plain (relative or absolute) path here
#SBATCH --error=name.of.the.file.%J.err
#SBATCH --output=name.of.the.file.%J.out
</code>
  
And then you might want to assign a name to your job, so you will know what the cluster is doing for you when you look at the list of running jobs (using the command ''squeue'').
  
<code>
# Name of the job
#SBATCH --job-name=dummy-test
</code>
  
Another mandatory parameter is the queue (called partition in SLURM terminology) you want to use; to start with, always use the queue ''slurm-cluster'':
<code>
# queue to be used
#SBATCH --partition=slurm-cluster
</code>

If you want to use a particular feature (GPU, TensorFlow, Mathematica, MATLAB, etc.), then you have to inform SLURM about the specific features you need and how many of them.
In this case, SLURM will launch your program only on the nodes that have the requested feature(s).
<code>
# require this feature: one GPU
#SBATCH --gres=gpu:1
</code>
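
For features that are not GPUs (for example a program licensed or installed only on some nodes), SLURM normally expresses them as node //features// requested with ''--constraint''. The feature name below is only an illustration; the real names depend on how the nodes of this cluster are configured:
<code>
# run only on nodes tagged with the (hypothetical) feature "matlab"
#SBATCH --constraint=matlab
</code>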

Now you can start the shell script commands:
  
<code>
 echo "execution started at:  $(date)" echo "execution started at:  $(date)"
  
</code>
It's better to use the command ''srun'' to launch the executable (just prefix ''srun'' to your normal command line), so SLURM can better manage the scheduling of the jobs.
The use of ''srun'' is also mandatory in case of parallel computing.
<code>

srun ./name of the program and parameters you want to launch
  
 echo "execution finished at: $(date)" echo "execution finished at: $(date)"
</code>
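
For a program that runs as several parallel processes (for example MPI), you would typically also request a number of //tasks// and let ''srun'' start one copy per task. A minimal sketch (the program name is only an illustration):
<code>
# ask for 4 parallel tasks (processes)
#SBATCH --ntasks=4

# srun starts one copy of the program per task
srun ./my_mpi_program
</code>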
  
Once we attach all the lines from above we'll have a script (named dummy.slurm) that will look like this:
<code>
#!/bin/bash

#SBATCH --job-name=dummy-test
#SBATCH --partition=slurm-cluster
#SBATCH --mail-user=dummy.epfl@epfl.ch
#SBATCH --time=04:00:00
#SBATCH --mem=1024M
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=begin,end
# only needed if your program actually uses a GPU (GPUs are available in the slurm-gpu partition)
#SBATCH --gres=gpu:1
  
cd $HOME/.....
 echo "execution started at:  $(date)" echo "execution started at:  $(date)"
  
srun ./name of the program and parameters you want to launch
  
 echo "execution finished at: $(date)" echo "execution finished at: $(date)"
</code>
  
Now you just need to tell the cluster system that you want to run this job, but how do you do that? Pretty simple: you use the command ''sbatch'', followed by the name of the script you just created. If you saved the previous example script as dummy.slurm in the current directory, launch this command from the shell:
  
<code>
sbatch dummy.slurm
</code>
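
While the job is waiting or running you can follow it with ''squeue'', and remove it with ''scancel'' if you change your mind (the job ID below is only an example; use the one printed by ''sbatch''):
<code>
# list only your own jobs, with their state (PD = pending, R = running)
squeue -u $USER

# cancel a job, using its job ID
scancel 12345
</code>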
  
After all this work, you just need to relax and wait until you receive the email messages from the queuing manager telling you about the success or failure of your submissions. At this point you return to the directory where the output files are saved and check the results.\\
If you browse the documentation we have on [[1slurm|Batch Queuing System]] you'll find examples on how to use Matlab or Mathematica and some explanation of the directives and the commands available for the queuing system.
  