
Trinity College Dublin

TBSS - Tract-Based Spatial Statistics

Where to run

Note that these scripts are designed to be executed on the 'TCIN mini-cluster'. This is a small 8-node cluster for use by the TCIN group.

Please contact ops@tchpc.tcd.ie if you have any queries about this.

In particular, you must log in to the TCIN mini-cluster node via lonsdale, but you should not run on lonsdale.

File transfer to/from your own workstation is not available from the mini-cluster; it must still be done via lonsdale (e.g. with FileZilla). However, note that you have the same home directory on lonsdale as on the TCIN mini-cluster, so this should be straightforward.
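For example, transfers might look like the following, run from your own workstation (not from the cluster). The hostname 'lonsdale.tchpc.tcd.ie', the username and the paths here are assumptions; adjust them to your own setup:

```shell
# run these on your own machine, not on the cluster
# (hostname, username and paths are examples only)

# upload a study to your home directory on lonsdale
scp -r Study1 user@lonsdale.tchpc.tcd.ie:~/

# fetch results back down afterwards
scp -r user@lonsdale.tchpc.tcd.ie:~/Study1/data/stats ./
```

Because the home directory is shared, anything copied to lonsdale is immediately visible on the mini-cluster.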

Logging in

First log in to lonsdale (via SSH / PuTTY as normal). You should see the following prompt:

    [user@lonsdale01 ~]$ 

Then, log in to the mini-cluster. Note that you must be logged in to lonsdale first for this step to work.

    [user@lonsdale01 ~]$ 
    [user@lonsdale01 ~]$ ssh tcin-n01
    ...
    [user@tcin-n01 ~]$ 
    [user@tcin-n01 ~]$ 

Loading the software

You must load the 'tcin' and 'fsl' modules to gain access to the software used in the submission scripts below.

    module load tcin fsl

The module load line can either be added to your ~/.bashrc startup file, or added directly to the various dti-part*.sh scripts below. If adding it to the scripts, it must be placed after the #SBATCH lines and before the tbss_* commands.
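For example, if adding it to a script, the top of dti-part1.sh would look something like this sketch:

```shell
#!/bin/sh
#SBATCH -n 1           # 1 core
#SBATCH -t 1:00:00     # 1 hour
#SBATCH -p compute     # partition name

# load the software: after the #SBATCH lines, before any tbss_* commands
module load tcin fsl

cd ../data
tbss_1_preproc *.nii.gz
```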

Proposed Workflow

Assuming this type of directory structure:

- Study1
 - data
 - scripts

'Study1' is the folder containing all your data files and scripts, and 'data' contains the MRI data files. TBSS has four basic preprocessing steps, followed by the statistics:

  1. tbss_1_preproc
  2. tbss_2_reg
  3. tbss_3_postreg
  4. tbss_4_prestats
  5. stats (randomise, etc.)
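The directory layout above can be created in one go. A minimal sketch, assuming you are starting from an empty working directory:

```shell
#!/bin/sh
# create the example layout: Study1/ containing data/ and scripts/
mkdir -p Study1/data Study1/scripts
# your .nii.gz files go into Study1/data, and the dti-part*.sh
# submission scripts below go into Study1/scripts
```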

Step 1 - tbss_1_preproc

We first create a Slurm submission script, located in the scripts directory.

Contents of dti-part1.sh:

#!/bin/sh
#SBATCH -n 1           # 1 core
#SBATCH -t 1:00:00     # 1 hour
#SBATCH -p compute      # partition name
#SBATCH -U thpsy   # your project name - contact Ops if unsure what this is
#SBATCH -J TBSS18_18  # sensible name for the job

cd ../data

echo "################################################################################"
echo "### tbss preprocessing"
tbss_1_preproc *.nii.gz

We then submit this script to the queue system:

sbatch dti-part1.sh

Step 1 should not take too long.

Step 2 - tbss_2_reg

We then create another Slurm script, again located in the scripts directory.

Contents of dti-part2.sh:

#!/bin/sh
#SBATCH -n 1           # 1 core
#SBATCH -t 1:00:00     # 1 hour
#SBATCH -p compute      # partition name
#SBATCH -U thpsy   # your project name - contact Ops if unsure what this is
#SBATCH -J TBSS18_18  # sensible name for the job

cd ../data

echo "################################################################################"
echo "### tbss step 2 (registration using study specific template)"
tbss_2_reg -n

This script should only be run once Step 1 has completed.

We then submit this script to the queue system:

sbatch dti-part2.sh

This step usually takes the longest. The tbss_2_reg script uses a modified version of the 'fsl_sub' script to take advantage of the queuing system: it automatically submits all of the sub-tasks to the queue. Depending on the size of your study, this may be on the order of thousands of jobs.

You must wait for *all* the jobs to complete before moving on to the next step. You can check the status of your jobs with:

squeue -u USERNAME -l

where USERNAME is your own username on the clusters.
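If you prefer not to re-run squeue by hand, the check can be wrapped in a small polling loop. A sketch, assuming squeue is on your PATH as on the mini-cluster (the function name wait_for_jobs is just an example):

```shell
#!/bin/sh
# poll the queue once a minute until none of your jobs remain
wait_for_jobs() {
    while [ -n "$(squeue -u "$USER" -h 2>/dev/null)" ]; do
        sleep 60
    done
    echo "all jobs finished"
}
# usage: wait_for_jobs
```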

Step 3 - tbss_3_postreg and tbss_4_prestats

Once all the jobs from Step 2 have completed, prepare the following script (again located in your scripts directory). You may want to separate these two steps if you have a large study.

Contents of dti-part3.sh:

#!/bin/sh
#SBATCH -n 1           # 1 core
#SBATCH -t 1:00:00     # 1 hour
#SBATCH -p compute      # partition name
#SBATCH -U thpsy   # your project name - contact Ops if unsure what this is
#SBATCH -J TBSS18_18  # sensible name for the job

cd ../data

echo "################################################################################"
echo "### tbss step 3 (post registration)"
tbss_3_postreg -S

echo "################################################################################"
echo "### tbss step 4 (thresholding)"
tbss_4_prestats 0.2

Submit the job:

sbatch dti-part3.sh

Step 4 - Stats

Once all the above has been done, you can run the following (or something similar) in the stats directory:

design_ttest2 design 18 18 -n
randomise -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 5000 -x --T2 -V

Or you can submit it to the queue system by creating the following Slurm script, dti-part4.sh, in your scripts directory:

#!/bin/sh
#SBATCH -n 1           # 1 core
#SBATCH -t 1:00:00     # 1 hour
#SBATCH -p compute      # partition name
#SBATCH -U thpsy   # your project name - contact Ops if unsure what this is
#SBATCH -J TBSS18_18  # sensible name for the job

cd ../data/stats
design_ttest2 design 18 18 -n
randomise_parallel -i all_FA_skeletonised -o tbss -m mean_FA_skeleton_mask -d design.mat -t design.con -n 5000 -x --T2 -V

We then submit this script to the queue system:

sbatch dti-part4.sh

If you are running randomise_parallel, you will need to run ./tbss.defragment once all the sub-jobs have completed, to recombine the data.
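That last check can be scripted. A sketch, assuming squeue is available and that randomise_parallel has left tbss.defragment in your stats directory (the function name defragment_when_done is just an example):

```shell
#!/bin/sh
# only recombine the randomise_parallel output once all of your queued
# sub-jobs have finished; run this from the stats directory
defragment_when_done() {
    if [ -n "$(squeue -u "$USER" -h 2>/dev/null)" ]; then
        echo "sub-jobs still running; try again later"
        return 1
    fi
    ./tbss.defragment
}
# usage, from the stats directory:
# defragment_when_done
```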

To view the results, go into the stats directory and run the following commands:

tbss_fill tbss_tfce_corrp_tstat1 0.95 mean_FA tbss_fill
fslview mean_FA -b 0,0.6 mean_FA_skeleton -l Green -b 0.2,0.7 tbss_fill -l Red-Yellow

It is best to refer back to the original TBSS pages for details on how to use these programs: http://www.fmrib.ox.ac.uk/fsl/tbss/index.html


Last updated 27 Jun 2014