Synge

Synge is an HPC cluster available to Trinity Researchers.

The hardware was donated by the Irish Centre for High-End Computing and sourced for Trinity Researchers by Professor Stefano Sanvito.

Access

To request access to Synge please contact Research IT Support: rit-support@tcd.ie. Research IT will then request approval for the access from the authorising parties.

Synge is accessible from the College network, including the VPN. To log in, connect to synge.tchpc.tcd.ie using the usual SSH instructions and your Research IT account credentials, e.g. (replace username with your Research IT username):

ssh -l username synge.tchpc.tcd.ie

Hardware

Each compute node has 40 Intel Xeon Gold 6148 CPU cores @ 2.40GHz (2 sockets, each with 20 cores).

Most nodes have the standard 192GB of RAM. 3 of the nodes, synge-n[02-05], have 1.5TB of RAM.

2 of the nodes, synge-n[01,02], have 2 NVIDIA Tesla V100 16GB GPU cards.

File Systems

There are two file systems, /home and /scratch. The user quota in /home is 50GB per user. The default /scratch quota is 100GB per user.
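
Current usage can be checked with standard tools, for example as below (quota -s will only report limits if the standard Linux quota mechanism is in use, which is an assumption here):

du -sh $HOME       # total space used in your home directory
quota -s           # your quota limits, if standard Linux quotas are enabled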

Software

Software is installed with our usual modules system. You can view the available software with module av and load software with the module load ... command.
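
For example:

module av                 # list all available software
module load siesta        # load a module (Siesta in this example)
module list               # show currently loaded modules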

Specific instructions for some of the installed software follow.

Intel Compilers

Several versions of the Intel oneAPI toolkit are installed. To use one, source the corresponding setvars.sh script, e.g.:

. /home/support/intel/oneapi-2024.1.0.596/setvars.sh
. /home/support/intel/oneapi-2025.1.0.651/setvars.sh
. /home/support/intel/oneapi-2025.2.1.044/setvars.sh
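
After sourcing one of these scripts the Intel compilers should be on your PATH; a quick check (assuming the C/C++ and Fortran compilers are included in the installed toolkits):

icx --version    # Intel C/C++ compiler
ifx --version    # Intel Fortran compiler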

FHI-Aims

First set up the Intel oneAPI environment:

. /home/support/intel/oneapi-2025.2.1.044/setvars.sh

You can then run FHI-aims from:

/home/support/apps/fhi-aims.250822_-_oneapi_2025_2
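
A minimal batch script might look like the following sketch; the binary name below is only a placeholder, check the installation directory for the actual executable:

#!/bin/bash
#SBATCH -N 1
#SBATCH --time=01:00:00

# Set up the Intel oneAPI environment used to build FHI-aims
. /home/support/intel/oneapi-2025.2.1.044/setvars.sh

# Placeholder binary name for illustration; check the install directory for the real executable
AIMS_BIN=/home/support/apps/fhi-aims.250822_-_oneapi_2025_2/aims.x

mpirun $AIMS_BIN > aims.out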

VASP

There are a few sample makefile.include files for VASP in /home/support/apps/vasp.
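
A typical workflow is sketched below, assuming the standard VASP 6 build system; the sample filename is a placeholder, pick the one matching your compiler and MPI setup:

# List the provided samples and choose one
ls /home/support/apps/vasp

# Inside your own VASP source tree (licensed users only)
cp /home/support/apps/vasp/makefile.include.<variant> makefile.include
make std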

Siesta

Available from the module system: module load siesta.

LAMMPS

Available from the module system: module load lammps, which will load the latest version. If you need a different LAMMPS version, use module av to list all available versions.
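
For example (the version string below is only a placeholder; use module av to see what is actually installed):

module av lammps              # list installed LAMMPS versions
module load lammps/<version>  # load a specific version instead of the default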

CP2K

Available from the module system: module load cp2k.

ORCA

Version 6.1.0 is available from the module system: module load orca.

Quantum ESPRESSO

Available from the module system: module load quantum-espresso.

MolForge

A compiled installation is available in /home/support/apps/MolForge.

Running jobs

Jobs must be run via the Slurm scheduler.

Batch job example parameters

An example batch script:

#!/bin/bash
#SBATCH -N 1
#SBATCH --time=01:00:00

module load mpi/latest

mpirun prog.x

Remember to use the sbatch command to submit your batch script to the scheduler, e.g. sbatch submission-script.sh.
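
After submission, standard Slurm commands can be used to check on the job, for example:

squeue -u $USER              # list your queued and running jobs
scontrol show job <jobid>    # detailed information about a specific job
scancel <jobid>              # cancel a job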

Interactive allocation examples

salloc -N 1 --time=01:00:00
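
Once the allocation has been granted, commands can be run on the allocated node with srun, for example:

srun --pty bash    # start an interactive shell on the allocated node
srun ./prog.x      # or launch a program directly within the allocation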

GPU access

Requesting an interactive GPU allocation

salloc -N 1 --gres=gpu:1

Or to request 2 GPUs interactively:

salloc -N 1 --gres=gpu:2

Batch script submission parameters, 1 GPU:

#SBATCH --gres=gpu:1

Or for 2 GPUs:

#SBATCH --gres=gpu:2
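
Putting this together, a minimal GPU batch script might look like the following (nvidia-smi is only included here to confirm the GPU is visible to the job):

#!/bin/bash
#SBATCH -N 1
#SBATCH --time=01:00:00
#SBATCH --gres=gpu:1

# Confirm the allocated GPU is visible
nvidia-smi

# Launch your GPU-enabled application here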

Get queue information, including details of the GPUs available and their state:

sinfo -o "%.5a %.10l %.6D %.6t %.20N %G"

Caveats and Warnings

No user data on the cluster is backed up. Make sure to save important data elsewhere.
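
For example, important results can be copied back to your own machine with rsync or scp (run from your own machine; the paths here are only placeholders):

rsync -av username@synge.tchpc.tcd.ie:results/ ./results-backup/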

The performance of the shared storage is expected to be extremely limited, as it is only available over the 1G management network and the servers in use were not originally intended as a storage solution.

Because of the equipment's high power draw and the limited electrical supply available to the cluster, the electrical power may be unstable, which can lead to individual nodes or the cluster as a whole failing.