
Trinity College Dublin

FSL

Introduction

FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data. For reference, FSL's homepage is located at http://www.fmrib.ox.ac.uk/fsl/.

FSL at TCHPC

Before running FSL on our TCHPC clusters, please become familiar with the methods for connecting to our systems, and for transferring files.

Loading the FSL Environment

A module file has been created to set up your environment for FSL:

module load tcin fsl/4.1.4
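After loading the module you can check that the environment is in place; FSL's setup exports the FSLDIR variable, which should point at the install tree (the exact path on our systems may differ):

```shell
# FSLDIR is exported by the FSL environment setup; a non-empty value
# indicates the module loaded correctly
echo $FSLDIR

# the FSL tools should now also be on your PATH
which flirt
```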

Requirements and notes

The FSL install at TCHPC has a customised 'fsl_sub' command; it currently only works correctly on the TCIN minicluster (8 nodes). This customised script will parallelise various parts of the workflow. Please visit http://www.fmrib.ox.ac.uk/fsl/fsl/sgesub.html for more information on which components can be parallelised.
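For example, a single long-running FSL command can be submitted through fsl_sub directly; the '-T' flag gives the estimated run time in minutes, which fsl_sub uses to pick a queue (the input filenames below are placeholders):

```shell
# submit one command via fsl_sub, estimating ~30 minutes of run time;
# fsl_sub uses this estimate to choose the appropriate queue/partition
fsl_sub -T 30 flirt -in subject.nii.gz -ref standard.nii.gz -out subject2std.nii.gz
```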

To run analyses, users will need to log in to 'tcin-n01' from 'lonsdale.tchpc.tcd.ie'.
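Connecting therefore takes two hops; assuming your cluster username is 'jbloggs' (replace with your own):

```shell
# first log in to the head node...
ssh jbloggs@lonsdale.tchpc.tcd.ie
# ...then hop onto the TCIN minicluster node
ssh tcin-n01
```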

Patch for fsl_sub to work with Slurm

For those interested in the patch, it may need to be formatted correctly before it will apply cleanly. This patch is simple and naive, and should probably be re-written to do multi-config submissions to the resource manager.
--- fsl_sub.orig        2010-06-11 13:03:35.279077000 +0100
+++ fsl_sub     2010-06-11 13:04:33.409821000 +0100
@@ -100,6 +100,10 @@
 fi
fi

+if [ "x$SLURM_JOB_ID" != "x" ] ; then
+       METHOD=SLURM
+fi
+

 ###########################################################################
 # The following auto-decides what cluster queue to use. The calling
@@ -123,6 +127,11 @@
    queue=verylong.q
 fi
     #echo "Estimated time was $1 mins: queue name is $queue"
+
+    # if slurm environment is detected use the compute partition, change this to suit
+    if [ $METHOD = SLURM ] ; then
+           queue=compute
+    fi
}


@@ -200,7 +209,7 @@
 # change. It also sets up the basic emailing control.
 ###########################################################################

-queue=long.q
+queue=compute
mailto=`whoami`@fmrib.ox.ac.uk
MailOpts="n"

@@ -364,6 +373,40 @@
    ;;

 ###########################################################################
+# SLURM method
+# this is a very naive way of doing things; it simply fires off all
+# the tasks individually to the resource manager
+###########################################################################
+
+       SLURM)
+               if [ $verbose -eq 1 ] ; then
+                       echo "Starting Slurm submissions..." >&2
+               fi
+               _SRMRAND=$RANDOM
+               _SRMNAME=$JobName$_SRMRAND
+               echo "========================" >> sbatch.log
+               echo "= Starting submissions =" >> sbatch.log
+               echo "========================" >> sbatch.log
+               date >> sbatch.log
+               # submit each non-empty line of the task file as its own job
+               while read line
+               do
+                       if [ "x$line" != "x" ] ; then
+                               sbatch -J $_SRMNAME -o "slurm-log-$_SRMNAME-%j-%N.out" -t 01:00:00 -p $queue -n 1 --wrap="$line" >> sbatch.log 2>&1
+                       fi
+               done < $taskfile
+               ;;
+
+###########################################################################
 # Don't change the following - this runs the commands directly if a
 # cluster is not being used.
 ###########################################################################

Last updated 26 Aug 2011.