Accessing the GPGPU TESLA (CUDA) enabled nodes on lonsdale

There are currently two methods for accessing the TESLA (CUDA-capable) nodes on the lonsdale compute partition.

  • For work-hours interactive and batch jobs, add the "--reservation=cuda" flag to your job script or salloc command. More details below.
  • For out-of-hours batch jobs, add the "--gres=gpu:1" flag to your job script or salloc command. More details below.

Please note that we currently have only two machines with TESLA cards, and each node has only two cards.

Work-hours Interactive and Batch Jobs

There is a reservation in place from 8am to 8pm to allow for interactive and batch use of the CUDA/GPU-enabled TESLA nodes on the lonsdale compute partition.

To access it, add "--reservation=cuda" to your sbatch, srun or salloc commands.
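
For example, an interactive work-hours session could be requested as follows (this mirrors the out-of-hours examples below; my_project_name is a placeholder for your own project name):

$ salloc --reservation=cuda -p compute -N 1 -t 01:00:00 -U my_project_name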

Out-of-hours Batch Jobs

Please add "--gres=gpu:1" to your sbatch, srun or salloc commands. Examples are given below.

$ salloc --gres=gpu:1 -p compute -N 1 -t 01:00:00 -U my_project_name

or

$ srun --gres=gpu:1 -p compute -N 1 -t 01:00:00 -U my_project_name hostname
lonsdale-n129.cluster

or

#!/bin/sh
#SBATCH -N 1            # 1 node
#SBATCH -t 1-03:00:00   # 1 day and 3 hours
#SBATCH -p compute      # partition name
#SBATCH -U my_project   # your project name - contact Ops if unsure what this is
#SBATCH -J my_job_name  # sensible name for the job
#SBATCH --gres=gpu:1    # request 1 GPU card

mpirun ./mycuda.x

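Assuming the script above is saved as, for example, mycuda_job.sh (a placeholder name), it can then be submitted with:

$ sbatch mycuda_job.sh
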
Last updated 13 Feb 2013