Obtaining interactive resources with Windows
This page will help you to configure the software required to obtain interactive resources on a Windows desktop.
First of all, you must install XMing to enable X11 forwarding, so that graphical applications running on the Linux server can display on your Windows desktop.
Obtaining an Interactive Resource
Once XMing is running, connect to the server using PuTTY with X11 forwarding enabled.
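Before requesting resources, it can be worth confirming that X11 forwarding is actually working on the login node. A minimal check, assuming a standard X11 setup (the xclock test is optional and only works if xclock is installed on the server):

# Check that the SSH session has set up a forwarded display;
# this should print something like localhost:10.0 (empty means no X11 forwarding)
echo $DISPLAY

# Optionally run a small X client to confirm that windows appear
# on your Windows desktop via XMing (only if xclock is installed)
xclock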
Request an allocation of 1 node with the 'salloc' command, giving the following parameters:
-N 1                        Request 1 physical node.
-p compute                  Request the 'compute' partition.
--reservation=application   Request the 'application' reservation.
-t 4:00:00                  Request 4 hours of time.
-U <your project>           Request resources from the <your project> project.
[neil@lonsdale01 ~]$ salloc -N 1 -p compute --reservation=application -t 4:00:00 -U HPC_13_XXXXX
salloc: Job is in held state, pending scheduler release
salloc: Pending job allocation 10077
salloc: job 10077 queued and waiting for resources
salloc: job 10077 has been allocated resources
salloc: Granted job allocation 10077
<<JOB #10077>> [neil@lonsdale01 ~]$
You have now been allocated a single node; its hostname has been placed in the $SLURM_NODELIST environment variable. Note that the prompt changes to include the job id.
You can check which node(s) have been allocated:
<<JOB #10077>> [neil@lonsdale01 ~]$ echo $SLURM_NODELIST
lonsdale-n016
<<JOB #10077>> [neil@lonsdale01 ~]$
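The single-node case above gives a plain hostname. If you ever request more than one node, $SLURM_NODELIST may instead hold a compressed range (e.g. lonsdale-n[016-017]); as a rough sketch, depending on your SLURM version it can be expanded with scontrol:

# Expand a compressed node list into one hostname per line
scontrol show hostnames $SLURM_NODELIST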
You can now ssh (with X forwarding enabled) to the allocated node.
<<JOB #10077>> [neil@lonsdale01 ~]$ ssh -Y $SLURM_NODELIST
Last login: Thu Jul 16 13:36:43 2009 from 10.141.255.251
[neil@lonsdale-n016 ~]$
Again, note that the prompt changes, this time to reflect that you are now logged into your allocated node, rather than just being logged into the cluster headnode.
Run your GUI application, for example xmgrace:
[neil@lonsdale-n016 ~]$ module load apps xmgrace
[neil@lonsdale-n016 ~]$ xmgrace
[neil@lonsdale-n016 ~]$
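If you are not sure which module(s) your application needs, the standard environment-modules command below lists what is available on the node (the module names shown above for xmgrace are specific to this system):

# List all modules available on the allocated node
module avail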
Log out or job time-out
Once you have finished running your application, you should log out of the allocated node (type exit), and then relinquish the allocation (type exit again). This will free up the resources for other users of the system.
[neil@lonsdale-n016 ~]$ exit
Connection to lonsdale-n016 closed.
<<JOB #10077>> [neil@lonsdale01 ~]$
<<JOB #10077>> [neil@lonsdale01 ~]$ exit
salloc: Relinquishing job allocation 10077
salloc: Job allocation 10077 has been revoked.
[neil@lonsdale01 ~]$
Finally, you are back to a normal prompt on the cluster headnode, with no job id.
Note that if you run out of your allocated time, the job will be killed automatically, leaving you back on the cluster headnode.
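If you want to check how much of your allocation remains before it times out, squeue can report the time left. A minimal sketch, assuming the job id from the example above and a reasonably recent SLURM:

# Show job id, time left (%L) and time limit (%l) for job 10077
squeue -j 10077 -o "%.10i %.12L %.12l"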