On RHEL6 Machines

There is a manually compiled version of EMAN2 with MPI support available:

ssh -Y rhel6-64.slac.stanford.edu

You will need to load EMAN2 via:

export MODULEPATH=/afs/slac.stanford.edu/package/spack/share/spack/modules/linux-rhel6-x86_64:/usr/share/Modules/modulefiles:/etc/modulefiles

module load eman2-master-gcc-4.9.4-tmcs6g7

The EMAN2 programs should then be available on the command line.
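If the module loaded correctly, a quick sanity check such as e2version.py (a utility that ships with EMAN2) should print the installed version:

e2version.py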

To submit jobs to the batch system, you will need to create a script (using "vi bsub-mpi-example.sh" or similar):

#!/bin/bash
#
#BSUB -a mympi                            # set parallel operating environment
#BSUB -P cyroem                           # project code
#BSUB -J eman2-test                      # job name
#BSUB -n 64                               # number of slots (MPI ranks) to request
#BSUB -W 72:00                            # Job wall clock limit hh:mm
#BSUB -q bulletmpi                        # queue
#BSUB -e errors-%J.log                    # error file name in which %J is replaced by the job ID
#BSUB -o output-%J.log                    # output file name in which %J is replaced by the job ID
#BSUB -B                                  # send email when the job begins

# get necessary bins and libs in place
export MODULEPATH=/afs/slac.stanford.edu/package/spack/share/spack/modules/linux-rhel6-x86_64:/usr/share/Modules/modulefiles:/etc/modulefiles

module load eman2-master-gcc-4.9.4-tmcs6g7

<<<eman2 command line here>>>
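As an illustration only (the input file, model, and parameters below are hypothetical; substitute your own), an MPI-aware refinement in place of the placeholder might look like:

e2refine_easy.py --input=sets/my_particles.lst --model=initial_model.hdf --targetres=8.0 --sym=c1 --parallel=mpi:64:/scratch/$USER

The process count in --parallel=mpi:64:... should match the slot count requested with -n.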

You can then submit the job via:

bsub < bsub-mpi-example.sh

Please note that you will need to tune the number of slots (-n) and the wall clock time (-W) to suit the command line arguments for EMAN2.

You can monitor the progress of your job via the bjobs command, and watch the live output from a running job via the bpeek command.
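For example, where 12345 stands in for the job ID that bsub reports:

bjobs                  # list your queued and running jobs
bjobs -l 12345         # detailed status for a single job
bpeek 12345            # show the output the job has produced so far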


On GPU Nodes

NB: the GPU capabilities of EMAN2 are considered beta (my words). Please use with caution.

The version of EMAN2 on the ocio-gpu nodes is NOT MPI enabled. It has been compiled with the expectation that only simple local computation is required.

ssh -Y ocio-gpu01.slac.stanford.edu


A number of modules need to be loaded before you have a functional EMAN2 instance:

module load eman2-master-gcc-4.8.5-pri5spm


(I'll find a way of auto-loading the others in the future.)

As long as you ssh in with the -Y option (and you have an X server running on your local machine) you should be able to bring up the relevant GUIs.
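For example, launching the EMAN2 project manager should bring up its GUI over the forwarded X connection:

e2projectmanager.py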


Spack Notes

GPU: 

spack -k install -v   eman2+cuda %gcc ^fftw+openmp~mpi  ^cmake ~doc+ncurses+openssl+ownlibs~qt  ^ncurses+symlinks ^hdf5~mpi ^qt@4.8.6%gcc@4.8.5+dbus~examples~gtk~krellpatch+opengl+phonon~webkit ^mesa+llvm+swrender ^mesa-glu+mesa


Non-GPU + MPI:

spack install --keep-stage  -v  eman2+mpi %gcc@4.9.4 ^fftw+openmp+mpi ^openmpi@1.10.3~cuda~java+disable-dlopen schedulers=lsf ^boost@1.58.0  ^cmake ~doc+ncurses+openssl+ownlibs~qt  ^ncurses+symlinks ^hdf5+mpi ^qt@4.8.6+dbus+opengl+phonon ^mesa+swrender
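After a build finishes, you can inspect the result with standard Spack commands (a quick sketch; output and flags vary slightly between Spack versions):

spack find -lv eman2                 # list installed eman2 specs with hashes and variants
spack spec eman2+mpi %gcc@4.9.4      # show the fully concretized spec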