Job submission

The “sbatch script.sh” command submits a job to the compute nodes.
Below is an example submission script for a LAMMPS computation.

#!/bin/sh
#SBATCH --job-name=job 
#SBATCH --partition=normal            # submission queue (normal or long or bigmem or bigpu or quadgpu)
#SBATCH --time=1-1:00:00            # 1-1 means one day and one hour
#SBATCH --mail-type=ALL    # type can be BEGIN, END, FAIL or ALL (any state change)
#SBATCH --output=job_seq-%j.out        # %j is the job ID; if --error is absent, errors are also written to this file
#SBATCH --mem=8G    # memory per node; T=tera, G=giga, M=mega
#SBATCH --nodes=4   # number of nodes requested
#SBATCH --mail-user=your_email@your.domain
echo "-----------------------------------------------------------"
echo "hostname                     =   $(hostname)"
echo "SLURM_JOB_NAME               =   $SLURM_JOB_NAME"
echo "SLURM_SUBMIT_DIR             =   $SLURM_SUBMIT_DIR"
echo "SLURM_JOBID                  =   $SLURM_JOBID"
echo "SLURM_JOB_ID                 =   $SLURM_JOB_ID"
echo "SLURM_NODELIST               =   $SLURM_NODELIST"
echo "SLURM_JOB_NODELIST           =   $SLURM_JOB_NODELIST"
echo "SLURM_TASKS_PER_NODE         =   $SLURM_TASKS_PER_NODE"
echo "SLURM_JOB_CPUS_PER_NODE      =   $SLURM_JOB_CPUS_PER_NODE"
echo "SLURM_TOPOLOGY_ADDR_PATTERN  = $SLURM_TOPOLOGY_ADDR_PATTERN"
echo "SLURM_TOPOLOGY_ADDR          =   $SLURM_TOPOLOGY_ADDR"
echo "SLURM_CPUS_ON_NODE           =   $SLURM_CPUS_ON_NODE"
echo "SLURM_NNODES                 =   $SLURM_NNODES"
echo "SLURM_JOB_NUM_NODES          =   $SLURM_JOB_NUM_NODES"
echo "SLURMD_NODENAME              =   $SLURMD_NODENAME"
echo "SLURM_NTASKS                 =   $SLURM_NTASKS"
echo "SLURM_NPROCS                 =   $SLURM_NPROCS"
echo "SLURM_MEM_PER_NODE           =   $SLURM_MEM_PER_NODE"
echo "SLURM_PRIO_PROCESS           =   $SLURM_PRIO_PROCESS"
echo "-----------------------------------------------------------"

# USER Commands
# Move to /scratch or launch from it
# cd your_working_directory
# module required for OpenMPI/GCC

module load openmpi/gcc/64/1.10.7

# lammps

module load lammps
./lmp-mpi your_arguments < file.txt    # replace your_arguments and file.txt with your LAMMPS options and input file
# end of the USER commands
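
Once the script is saved (for example as job.sh, a name used here only for illustration), it can be submitted and monitored with the standard Slurm commands:

sbatch job.sh            # prints something like: Submitted batch job 123456
squeue -u $USER          # list your jobs and their state (PD = pending, R = running)
scancel 123456           # cancel the job, using the job ID printed by sbatch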

Launching several tasks in the same job.

Here is a solution if you want to run several tasks in the same job:

Suppose you want to run 4 commands at the same time, called my_cmd1, my_cmd2, my_cmd3 and my_cmd4. Add these 4 commands to your Slurm script with a & at the end of each line (or command). Don’t forget to add the wait command at the end of your script; otherwise your job will end before the 4 commands have had time to run. Also set the number of tasks you want to run in parallel (e.g. #SBATCH --ntasks-per-node=4).

This gives:

#!/bin/sh
#SBATCH --job-name=job
#SBATCH --partition=normal            # submission queue (normal or long or bigmem or bigpu or quadgpu)
#SBATCH --time=1-1:00:00            # 1-1 means one day and one hour
#SBATCH --ntasks-per-node=4            # number of tasks to run in parallel on each node
# Define all your sbatch parameters you need here
# ...

echo "-----------------------------------------------------------"
echo "hostname                     =   $(hostname)"
echo "SLURM_JOB_NAME               =   $SLURM_JOB_NAME"
# display any other information you need
# ...
echo "-----------------------------------------------------------"

./my_cmd1&
./my_cmd2&
./my_cmd3&
./my_cmd4&
wait    # block until all background commands have finished
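
The same pattern can also be written as a loop (a minimal sketch using the hypothetical my_cmd1 ... my_cmd4 from above):

for i in 1 2 3 4; do
    ./my_cmd$i &    # start each command in the background
done
wait                # block until all of them have finished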