

Linux or Mac OS X

Accessing the clusters to obtain an interactive node with Linux.

SSH (with X forwarding enabled) to the headnode.

[neil@clapham ~]$ ssh -Y neil@lonsdale.tchpc.tcd.ie

Accessing the clusters to obtain an interactive node with Mac OS X.

In order to use interactive resources, X11 libraries are required to display the GUI of the application you wish to use. The latest versions of Mac OS X no longer have the X11 libraries built in, as per Apple's support statement at http://support.apple.com/kb/HT5293.

In order to use GUI applications from a recent OS X version, you will need to install the X11 libraries from the XQuartz project: http://xquartz.macosforge.org. Once installed, you will need to log out and back in again for them to be detected.

Then you can SSH (with X forwarding enabled) to the headnode.

[neil@clapham ~]$ ssh -Y neil@lonsdale.tchpc.tcd.ie
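
In either case, once you are logged in you can confirm that X forwarding is active by checking that the DISPLAY environment variable has been set (the value shown below is only illustrative; the exact display number will vary):

[neil@lonsdale01 ~]$ echo $DISPLAY
localhost:10.0

If the variable is empty, X forwarding is not enabled; check that you used the -Y option and, on Mac OS X, that XQuartz has been installed and you have logged out and back in.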

Requesting an interactive allocation to run your GUI application from

Request an allocation of 1 node with the 'salloc' command, giving the following parameters:

-N 1                        Request 1 physical node.
-p compute                  Request the 'compute' partition.
--reservation=application   Request the 'application' reservation.
-t 4:00:00                  Request 4 hours of time.
-U <your project>           Request resources from the <your project> project.
[neil@lonsdale01 ~]$ salloc -N 1 -p compute --reservation=application -t 4:00:00 -U HPC_13_XXXXX
salloc: Job is in held state, pending scheduler release
salloc: Pending job allocation 10077
salloc: job 10077 queued and waiting for resources
salloc: job 10077 has been allocated resources
salloc: Granted job allocation 10077
<<JOB #10077>> [neil@lonsdale01 ~]$

You have now been allocated a single node. The name of the allocated node is stored in the $SLURM_NODELIST environment variable.

Note that the prompt changes to include the job id.

You can check which node(s) have been allocated:

<<JOB #10077>> [neil@lonsdale01 ~]$ echo $SLURM_NODELIST
lonsdale-n016
<<JOB #10077>> [neil@lonsdale01 ~]$
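
You can also check the state of the allocation, and how much of the requested time remains, with the standard 'squeue' command. This is a sketch using squeue's '-o' format option, where '%i' prints the job id and '%L' prints the time left; $SLURM_JOB_ID is set by salloc in the same shell:

<<JOB #10077>> [neil@lonsdale01 ~]$ squeue -j $SLURM_JOB_ID -o "%.10i %.10L"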

You can now ssh (with X forwarding enabled) to the allocated node.

<<JOB #10077>> [neil@lonsdale01 ~]$ ssh -Y $SLURM_NODELIST
Last login: Thu Jul 16 13:36:43 2009 from 10.141.255.251
[neil@lonsdale-n016 ~]$

Again, note that the prompt changes, this time to reflect that you are now logged into your allocated node, rather than just being logged into the cluster headnode.
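
Before starting your application, it is worth checking that the DISPLAY variable is also set on the allocated node; if it is empty, X forwarding did not follow you through the second ssh hop and GUI applications will fail to open (again, the value shown is illustrative):

[neil@lonsdale-n016 ~]$ echo $DISPLAY
localhost:10.0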

Run your GUI application (e.g. xmgrace).

[neil@lonsdale-n016 ~]$ module load apps xmgrace
[neil@lonsdale-n016 ~]$ xmgrace
[neil@lonsdale-n016 ~]$
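
If you want to keep the prompt free for further commands on the node while the GUI is open, you can start the application in the background using the shell's '&' operator:

[neil@lonsdale-n016 ~]$ xmgrace &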

Log out or job time-out

Once you have finished running your application, you should log out of the allocated node (type exit), and then finish the allocation (again type exit).

This will free up the resources again for other users of the system.

[neil@lonsdale-n016 ~]$ exit
Connection to lonsdale-n016 closed.
<<JOB #10077>> [neil@lonsdale01 ~]$ 
<<JOB #10077>> [neil@lonsdale01 ~]$ exit
salloc: Relinquishing job allocation 10077
salloc: Job allocation 10077 has been revoked.
[neil@lonsdale01 ~]$ 
[neil@lonsdale01 ~]$ 

Finally, you are back to a normal prompt on the cluster headnode, with no job id.
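
If you want to confirm that the allocation has really been released, you can list your jobs with squeue; once the job is gone, only the header line is printed:

[neil@lonsdale01 ~]$ squeue -u $USER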

Note that if you run out of your allocated time, then the job will be killed automatically, leaving you back on the cluster headnode.

