MPI Implementation Differences for Slurm

When submitting jobs to Slurm, the way an MPI program is launched differs slightly depending on which MPI implementation it was compiled with.

Launching an Open MPI-compiled binary

#!/bin/sh
#SBATCH -n 16           # 16 cores
#SBATCH -t 1-03:00:00   # 1 day and 3 hours
#SBATCH -p compute      # partition name
#SBATCH -U chemistry    # your project name - contact Ops if unsure what this is
#SBATCH -J my_job_name  # sensible name for the job

mpirun ./cpi.x
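
As a quick sketch of how this script might be used, assuming it is saved as openmpi_job.sh (a file name chosen here purely for illustration), it can be submitted and then monitored with the standard Slurm commands:

sbatch openmpi_job.sh
squeue -u $USER        # show your jobs still pending or running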

Launching an MVAPICH-compiled binary

#!/bin/sh
#SBATCH -n 16           # 16 cores
#SBATCH -t 1-03:00:00   # 1 day and 3 hours
#SBATCH -p compute      # partition name
#SBATCH -U chemistry    # your project name - contact Ops if unsure what this is
#SBATCH -J my_job_name  # sensible name for the job

srun --mpi=mvapich ./cpi.x
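
If you are unsure which MPI plugin types are available on the local Slurm installation, srun can list them; the exact output depends on how the cluster is configured:

srun --mpi=list        # list the MPI plugin types this Slurm installation supports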

Launching an MVAPICH2-compiled binary

Before you can launch an MVAPICH2 job, the binary must be linked against the Slurm PMI library. This can be done by adding the following to the compile/link line:

mpicc -L/usr/lib64 -lpmi ...
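
As a sketch of what a complete compile line might look like, assuming the source file is cpi.c (a hypothetical name for the example program used in these scripts):

mpicc -o cpi.x cpi.c -L/usr/lib64 -lpmi   # link the binary against the Slurm PMI library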

Then the submission script is as follows:

#!/bin/sh
#SBATCH -n 16           # 16 cores
#SBATCH -t 1-03:00:00   # 1 day and 3 hours
#SBATCH -p compute      # partition name
#SBATCH -U chemistry    # your project name - contact Ops if unsure what this is
#SBATCH -J my_job_name  # sensible name for the job

srun --mpi=none ./cpi.x
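
As with the earlier examples, assuming the script is saved as mvapich2_job.sh (an illustrative name), it is submitted with sbatch, and accounting information for the job can be checked afterwards:

sbatch mvapich2_job.sh
sacct -j <jobid>       # replace <jobid> with the job id printed by sbatch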

Last updated 08 Feb 2010