Relion is installed on our clusters. We use Singularity containers to make the program portable across different HPC architectures and to simplify compilation and redistribution.
We ask that you keep your relion data in the following directory:
/sdf/group/cryoem/g/<PROPOSAL>
Two ways to access the graphical user interface
To bring up the relion GUI:
# log onto the 'login' node of the cluster: use your username in place of <username>
# the -Y enables X forwarding to allow the GUI to display locally on your laptop.
ssh -Y <username>@sdf-login01.slac.stanford.edu

# by default you will be logged into your 'AFS Home'. Please do NOT use this space for your CryoEM work;
# there is insufficient disk space and it is also not very performant. You should use the directory defined above.
cd /sdf/group/cryoem/g/<PROPOSAL>

# if you're starting a new relion 'project':
mkdir my_new_project
cd my_new_project

# to see what modules are installed
module avail

# load the modules you plan to use within relion, e.g.:
module load motioncor2/1.3.2
module load ctffind/4.1.13

# start the relion GUI (answer Yes at the prompt)
module load relion/3.1.2
relion
You can choose to run relion jobs either 'locally' (i.e. on the login node) or submit them to the batch queues. As the login node is a shared interactive resource, it is important not to run long or intensive jobs 'locally', but to submit them to dedicated batch queue nodes where your job will run uninterrupted (and often with much higher performance).
The default settings provided by the relion container should work.
When you submit a job to the queue from the relion GUI, the terminal window you launched relion from should print the job ID.
You can check the queue status using:
module load slurm
squeue
And you can see more details about your specific job via:
scontrol show job <jobid>
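For example, assuming a standard Slurm installation, you can limit the listing to your own jobs:

# show only the jobs belonging to your user
squeue -u $USER
# optionally use a format string for a wider, easier-to-read summary
squeue -u $USER -o "%.10i %.12j %.8T %.10M %.6D %R"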
Under the Compute tab in relion, there are options to pre-stage the data into either RAM or local disk, which can speed up your job by improving the performance of data access.
Under 'Copy particles to scratch directory', set it to /scratch/${USER}/${SLURM_JOB_ID}
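If you plan to use local-disk pre-staging, it can be worth checking how much space is available there first. A minimal sketch, assuming local scratch is mounted at /scratch on the batch nodes and using the cryoem partition/account shown in the migration table below:

# run a one-off command on a batch node to report free space on its local scratch disk
srun --partition=cryoem --account=cryoem df -h /scratch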
Further information at https://hpc.nih.gov/training/handouts/relion.pdf
Migrating from LSF
If you have an existing project that was using the SLAC LSF job submission, you will need to update the fields under the job 'Running' tab:
Field | Value
Queue Name | --partition=cryoem --account=cryoem --qos=normal
Queue Submit Command | sbatch
Walltime | 1-00:00:00
Memory per CPU | 3800m
GPUs (clear to remove) | --gpus 4
Additional Directives (1) | (leave blank)
Additional Directives (2) | -N 1
Script | /sdf/group/cryoem/sw/images/relion/slurm-batch-submission.script
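For reference, these fields correspond roughly to the following standalone sbatch invocation. This is only a sketch to show what the options mean; relion substitutes the values into the template script above rather than building a command line like this:

# equivalent Slurm request: cryoem partition/account, 1 day walltime, 3800 MB per CPU, 4 GPUs, 1 node
sbatch --partition=cryoem --account=cryoem --qos=normal \
       --time=1-00:00:00 \
       --mem-per-cpu=3800m \
       --gpus 4 \
       -N 1 \
       /sdf/group/cryoem/sw/images/relion/slurm-batch-submission.script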
Other notes: