
Relion is installed on our clusters. We use Singularity containers to make the program portable across different HPC architectures and to simplify compilation and redistribution.

Data Location

We ask that you keep your relion data in the following directories:

  • Proposal based: if you are working on data from an official Proposal with a designated proposal code (e.g. CS10, C002), you should ask for a 'group' directory under /sdf/group/cryoem/g/<PROPOSAL> (a quick check for this directory is shown after the list).
  • Team project: if you intend to share your data with other collaborators (either internal or external), please also ask for a directory under the 'group' space.
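
Once the directory has been created for you, a quick sanity check from the command line (the proposal code is a placeholder):

# confirm that the proposal directory exists and that you have access to it
ls -ld /sdf/group/cryoem/g/<PROPOSAL>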


Getting Started


Two ways to access the graphical user interface

  • Using an 'X Server' on your local (laptop/desktop) machine.
    • You can install XQuartz on macOS; on Linux an X server is usually already running.
  • Using FastX
    • https://fastx.slac.stanford.edu:3443/
    • Launch a session (the centos7-gnome or iris-gnome sessions seem to work well), selecting @slac_public_login
    • At the terminal, treat it as your 'local' computer, i.e. ssh -Y ... into the sdf server as indicated below


To bring up the relion GUI:

# log onto the 'login' node of the cluster: use your username in place of <username>
# the -Y enables X forwarding to allow the GUI to display locally on your laptop.
ssh -Y <username>@sdf-login01.slac.stanford.edu
 
# by default you will be logged into your 'AFS Home'. Please do NOT use this space for your CryoEM work; there is 
# insufficient disk space and it is also not very performant. You should use the directory defined above.
cd /sdf/group/cryoem/g/<PROPOSAL>
 
# if you're starting a new relion 'project':
mkdir my_new_project
cd my_new_project

# to see what modules are installed
module avail

# load the modules you plan to use within relion e.g.:
module load motioncor2/1.3.2
module load ctffind/4.1.13
 
# start the relion GUI
module load relion/3.1.2
relion

# answer Yes at the prompt to create a new relion project in this directory
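
If the GUI window does not appear, it is worth confirming that X forwarding is working before digging further (a minimal check; xclock is just an example client and may not be installed on every node):

# an empty DISPLAY means X forwarding is not set up for this session
echo $DISPLAY
# any small X application can serve as a test, e.g.
xclock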


Using the Batch Queues


You can choose to run relion jobs either 'locally' (i.e. on the login node) or submit them to the batch queues. Because the login node is a shared interactive resource, please do not run long or intensive jobs 'locally'; submit them to the dedicated batch queue nodes instead, where your job will run uninterrupted (and often with much higher performance).


The default settings provided by the relion container should work.
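
To check the state of the cryoem partition before submitting (the partition name comes from the submission settings shown later on this page):

# load the slurm client tools
module load slurm
# list nodes and their state in the cryoem partition
sinfo -p cryoem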


Checking on Batch Jobs

When you click to submit a job to the queue in the relion GUI, the terminal window from which you launched relion should print the jobid.

You can check the queue status using

  • module load slurm
  • squeue

And you can see more details about your specific job via:

  • scontrol show job <jobid>
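
For example, to list only your own jobs and then inspect one of them (the jobid is whatever was printed when the job was submitted):

# list only your jobs
squeue -u $USER
# show the full details of a single job
scontrol show job <jobid>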

Using Scratch

Under the Compute tab in relion, there are options to pre-stage the data into either RAM or local disk, which can speed up your job by improving the performance of data access.

Under 'Copy particles to scratch directory', set the value to /scratch/${USER}/${SLURM_JOB_ID}
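
These variables are expanded when the job runs, so each job gets its own directory under /scratch on the compute node. A quick way to see the resolved path and the available space from within a batch job (assuming /scratch is node-local storage, as is typical):

# the variables expand at run time; inside a batch job this prints the per-job scratch path
echo /scratch/${USER}/${SLURM_JOB_ID}
# check how much space is free on the node-local scratch filesystem
df -h /scratch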


Further information at https://hpc.nih.gov/training/handouts/relion.pdf

GPU vs. CPU


Migrating from LSF

If you have an existing project that was using the SLAC LSF job submission, you will need to update the following fields under the job's 'Running' tab:

  • Queue Name: --partition=cryoem --account=cryoem --qos=normal
  • Queue Submit Command: sbatch
  • Walltime: 1-00:00:00
  • Memory per CPU: 3800m
  • GPUs (clear to remove): --gpus 4
  • Additional Directives (1): (leave blank)
  • Additional Directives (2): -N 1
  • Script: /sdf/group/cryoem/sw/images/relion/slurm-batch-submission.script
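
Taken together, these fields correspond roughly to the sbatch invocation below (a sketch for orientation only; relion builds the real command from the submission script and the values you enter in the GUI):

sbatch --partition=cryoem --account=cryoem --qos=normal \
       --time=1-00:00:00 --mem-per-cpu=3800m --gpus 4 -N 1 \
       /sdf/group/cryoem/sw/images/relion/slurm-batch-submission.script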




Other notes:



