RELION is installed on our HPC clusters. We use Singularity containers to make the program portable across different HPC architectures and to simplify compilation and redistribution.
We ask that you keep your RELION data in the following directories:
/sdf/group/cryoem/g/<PROPOSAL>
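For example, you can confirm that your proposal directory exists and see how much space it is using with ordinary shell commands (replace <PROPOSAL> with your actual proposal ID):

# check that the project directory exists and how much space it is using
ls -ld /sdf/group/cryoem/g/<PROPOSAL>
du -sh /sdf/group/cryoem/g/<PROPOSAL>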
Two ways to access the graphical user interface
Enter the following in the form fields:
Command:
xterm -ls -e "ssh sdf-login"
Window Mode (either works):
Single – single window mode, in which the session is contained to one window. (Set a window size; the default is 1024 x 768. Changing this doesn't seem to improve resolution.)

Click "Launch" to start the FastX session. A new tab/window will pop up (depending on your browser settings) with an open xterm window prompting you to log in.
To bring up the RELION GUI:
# Following the instructions above will land you in your 'SDF Home' directory.
# DO NOT use this space for your CryoEM work; there is insufficient disk space
# (quota: 25GB) and it is also not very performant. You should use the project
# directory defined in the "Data Location" section above, e.g.:
cd /sdf/group/cryoem/g/<PROPOSAL>

# if you're starting a new RELION 'project':
mkdir my_new_project
cd my_new_project

# to see what modules are installed:
module avail

# load the modules you plan to use within RELION, e.g. (module versions in this
# example are the most current as of 2022-09-28):
module load motioncor2/1.4.4
module load ctffind/4.1.13
module load relion/ver4.0

# start the relion GUI
relion
You can choose to run RELION jobs either 'locally' (i.e. on the login node) or submit them to the batch queues. As the login node is a shared interactive resource, it is important not to run long or intensive jobs locally; submit them to dedicated batch queue nodes instead, where your job will run uninterrupted (and often with much higher performance).
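If you are curious what queue submission looks like under the hood, the sketch below shows the general shape of a RELION Slurm template script. This is an illustrative assumption, not the contents of the actual slurm-batch-submission.script used on SDF; the XXX...XXX strings are RELION's standard template placeholders, which RELION fills in at submission time.

#!/bin/bash
# Sketch of a RELION Slurm submission template (illustrative only; the real
# SDF script may differ). RELION substitutes the XXX...XXX placeholders.
#SBATCH --partition=XXXqueueXXX
#SBATCH --ntasks=XXXmpinodesXXX
#SBATCH --output=XXXoutfileXXX
#SBATCH --error=XXXerrfileXXX
srun XXXcommandXXX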
The default settings provided by the RELION container should work.
When you submit a job to the queue from the RELION GUI, the job ID will be printed in the terminal window from which you launched RELION.
You can check the queue status using
module load slurm
squeue
And you can see more details about your specific job via:
scontrol show job <jobid>
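To narrow the output to just your own jobs, you can use standard Slurm options (nothing SDF-specific here), for example:

module load slurm
# list only your own jobs on the cryoem partition
squeue -u $USER -p cryoem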
Under the Compute tab in RELION, there are options to pre-stage data into RAM or local scratch disk, which can speed up your job by improving the performance of data access.
Under "Copy particles to scratch directory", set the value to /scratch/${USER}/${SLURM_JOB_ID}
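As a quick sanity check that this path expands as expected inside a batch job, you could submit a trivial test job (a sketch; it assumes the cryoem partition and that /scratch is available on the compute nodes):

# submit a one-line test job, then inspect its output file (slurm-<jobid>.out)
sbatch --partition=cryoem --wrap 'echo "scratch dir for this job: /scratch/${USER}/${SLURM_JOB_ID}"'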
Further information at https://hpc.nih.gov/training/handouts/relion.pdf
Migrating from LSF
If you have an existing project that was using SLAC's LSF job submission system, you will need to update the fields under the job 'Running' tab:
Field | Value
Queue Name | --partition=cryoem
Queue Submit Command | sbatch
Walltime | 1-00:00:00
Memory per CPU | 3800m
GPUs (clear to remove) | --gpus 4
Additional Directives (1) | (leave blank)
Additional Directives (2) | -N 1
Script | /sdf/group/cryoem/sw/images/relion/slurm-batch-submission.script
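For reference, the Running-tab values above correspond roughly to the following sbatch options (a sketch of the equivalent command line; in practice RELION passes these through the template script rather than you typing them by hand, and my_relion_job.sh is a hypothetical placeholder):

# roughly equivalent command-line submission (illustrative only)
sbatch --partition=cryoem --time=1-00:00:00 --mem-per-cpu=3800M --gpus 4 -N 1 my_relion_job.sh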
Other notes: