...

Users need to do the following:

  • use bash as your default shell, not tcsh or csh (conda doesn't support csh)
    • to switch to bash at LCLS (a separate computing environment from the rest of SLAC), email pcds-it-l at slac.stanford.edu
  • Either
    • run the command: source /reg/g/psdm/etc/ana_env.sh as usual (explained in the psana python setup example)
    • run the command: source conda_setup
  • Or, if you don't need to use the old RPM based release system
    • run the command: source /reg/g/psdm/bin/conda_setup

...

  • source conda_setup -h for full help on the script
  • after sourcing conda_setup, you can execute the undo_conda command to restore your linux environment variables to what they were beforehand.
  • see Conda Release Notes for release notes on the conda environments
  • to switch to an older conda environment, for example ana-1.0.4, source conda_setup again, but now pass ana-1.0.4 as an argument. You can also, having sourced conda_setup once, use the standard conda command
    source activate ana-1.0.4
    to activate ana-1.0.4

Newer Package Versions

The conda environments will keep up with the latest versions for many standard python packages. Some notable version changes from the RPM based releases (as of December 2016):

  • ipython, version 2.3 -> 5.1
  • matplotlib, version 1.4.3 -> 1.5.1
  • openmpi, version 1.8.3 -> 1.10.2
  • mpi4py, version 1.3.1 -> 2.0.0
  • numpy, version 1.9.2 -> 1.11.2; also, the conda numpy is built against the intel MKL

New package versions mean newer features and bug fixes, but they can also mean old interfaces are deprecated and new bugs creep in. Some users may need to update their code to run with later versions of packages.

We do not use the latest version of packages that break psana. For instance, we do not use h5py 2.6.0 since it cannot read the hdf5 files produced by the Translator; we use h5py 2.5 instead.

Using conda_setup from a Script

When writing a script that will use the conda_setup command, note that conda_setup processes command line arguments. Since your script may also take command line arguments, a best practice is to do

...

to prevent conda_setup from trying to read the script's command line arguments; details are at conda_setup script issue.

Users may prefer to use the '--quiet' option from a script, that is

source conda_setup -q

to avoid conda_setup's messages - these messages can generate a lot of noise for an MPI script run on many cores.

While conda_setup should work from a script, it was designed for interactive use. This comment: bypass conda_setup in script shows how to setup the environment yourself, without using conda_setup.

Red Hat 5, 6, 7 Issues

Most users will run on rhel7 machines - anyone doing analysis on the interactive machines or submitting batch jobs to the usual queues will use rhel7 machines. There are some rhel6 and rhel5 machines in the experiment hutch control rooms. In particular users running from shared memory may have to run on a rhel6 or rhel5 host.

...

The LCLS Data Analysis group is presently maintaining 3 separate conda installations, one each for rhel5, rhel6, and rhel7. Locally built packages, like openmpi, hdf5, and psana, will be built natively on each platform. The conda_setup script will automatically detect rhel5 vs rhel6 vs rhel7 and activate an environment in the appropriate installation. Use conda_setup to get into a conda environment to make sure you use the appropriate packages for your host.
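This page does not show how conda_setup tells rhel5, rhel6, and rhel7 apart; a minimal sketch of one common detection approach (illustrative only - conda_setup's actual logic may differ):

```shell
# Illustrative only: one common way a script can distinguish rhel5/6/7.
# conda_setup's real detection code may differ.
if [ -r /etc/redhat-release ]; then
    release=$(cat /etc/redhat-release)   # e.g. "Red Hat Enterprise Linux ... release 7.x"
else
    release="unknown (not a Red Hat system)"
fi
echo "detected: $release"
```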

However since conda-forge does not support rhel5, certain packages will not function on rhel5 (for example, opencv, which we obtain from conda-forge). In general, our support for rhel5 is more limited with the conda releases than the RPM releases. Rhel5 users may not be able to use a conda environment.

Moreover, it is possible that we install a package that is limited to rhel7. As of Jan 2017, the only such package is tensorflow.

...

Presently, packages that can only be installed via pip are not being added to the central installs - that is, we require conda packaging - but see the User Conda Environments section below.

...

which installs biopython from the biopython conda package that is maintained by anaconda. Note this is different from using pip to install biopython. A nice thing about conda is that you can use pip to install packages in your conda environments. For instance, after creating your environment named snowflake, you could do

...

Supporting softlinks from user home directories to a central package repository is a feature under development at continuum.

Psana and Root

We're not sure whether CERN Root will create problems for psana environments; however, users who wish to use psana and root can try the following (root is in the non-standard channel NLeSC):

source conda_setup 
conda create --name myroot --clone ana-1.2.0  
source activate myroot
conda install -c NLeSC root  

Updating Psana

To keep your environment up to date with changes to psana, do

...

this should install the latest version of psana-conda and its dependencies. Currently we are specifying strict dependencies for certain packages; installing psana-conda will also install specific versions and builds of the following packages from the lcls-rhel7 channel:

  • hdf5
  • openmpi
  • mpi4py
  • h5py
  • tables

that we maintain in lcls-rhel7. We may relax this in the future if it is a problem - for instance, you may be at a lab that has its own mpi installation. To use your own build of openmpi or hdf5/h5py, install those in myenv first (however, note any package version requirements for running psana per the psana meta.yaml).

Before running psana, you will also need to set the environment variables

  • SIT_PSDM_DATA

...

  • SIT_ROOT

...

  • SIT_DATA. 

The recommended way to do this is to have them set when your environment is activated, and unset when it is deactivated. Conda provides a mechanism for this, discussed here: saved-environment-variables. The complicated piece is that SIT_DATA must include the sub-directory 'data' of your conda environment, as well as the directory where the experiment-db.dat file is. For instance, with a conda environment like ana-1.0.8, the files

...
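Conda's saved-environment-variables mechanism runs scripts from etc/conda/activate.d and etc/conda/deactivate.d under the environment prefix. A minimal sketch of that mechanism, using a scratch directory and a placeholder SIT_DATA value rather than the real ana-1.0.8 paths:

```shell
# Sketch of conda's activate.d/deactivate.d hooks, simulated in a scratch prefix.
# The SIT_DATA value is a placeholder; substitute your site's real paths.
PREFIX=$(mktemp -d)    # stands in for your conda environment prefix
mkdir -p "$PREFIX/etc/conda/activate.d" "$PREFIX/etc/conda/deactivate.d"

cat > "$PREFIX/etc/conda/activate.d/sit.sh" <<EOF
export SIT_DATA="$PREFIX/data:/placeholder/dir/with/experiment-db.dat"
EOF

cat > "$PREFIX/etc/conda/deactivate.d/sit.sh" <<'EOF'
unset SIT_DATA
EOF

# conda sources these for you on activate/deactivate; simulate that here:
source "$PREFIX/etc/conda/activate.d/sit.sh"
echo "after activate:   SIT_DATA=$SIT_DATA"
source "$PREFIX/etc/conda/deactivate.d/sit.sh"
echo "after deactivate: SIT_DATA=${SIT_DATA:-unset}"
```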

 

Here is an example

In [1]: import anarelinfo
In [2]: anarelinfo.version
Out[2]: 'psana-conda-1.0.3'
In [3]: anarelinfo.pkgtags
Out[3]:
{'AppUtils': 'V00-07-00',
 'CSPadPixCoords': 'V00-03-30',
...

 

GPU Work

LCLS has some GPU resources with some software set up for use. See the table below.

node          CUDA   GPU card(s)   RAM     Compute Capability
psanagpu101   -      -             -       -
psanagpu102   7.5    Tesla K40     12 GB   3.5
psanagpu103   -      -             -       -

psanagpu102's Tesla K40 is the only card we have with a modern enough compute capability for deep learning frameworks that rely on the nvidia cudnn (like tensorflow).

We are still developing infrastructure and configuration for these nodes, but presently, if one does

ssh psanagpu102
source conda_setup --dev --gpu

then you will be activating a python 2.7 conda environment for working with the GPU. It is mostly the same as the main environment with psana, but has these differences:

  • includes the nvidia cudnn for deep learning frameworks like tensorflow
  • adds paths to PATH, LD_LIBRARY_PATH, and CPATH so that you can work with the CUDA installation, and the nvidia cudnn
  • for packages like tensorflow that are compiled differently to work with the GPU, it includes the GPU version of that package rather than the CPU version
    • presently, tensorflow is the only such package that is compiled differently for the GPU - all other packages in this environment are the same as in the standard psana environment
    • packages like theano can be dynamically configured to use the GPU, so the same package is used in both the gpu and non-gpu environments

Using the cuDNN

Before using the nvidia cudnn (by working with tensorflow or keras in the gpu environment, or configuring theano to use it), register with the NVIDIA Accelerated Computing Development program at this link:

 https://developer.nvidia.com/accelerated-computing-developer

Per the nvidia cuDNN license, we believe all users must register before using it, but don't worry, the nvidia emails (if you opt to receive them) are quite interesting! (smile) 

Shared Resource

Presently, the GPU's are only available through interactive nodes. There is no batch management of them to assign GPU resources to users. Be mindful that other users on a node like psanagpu102 may be using the GPU.

The main issue is that GPU memory can become a scarce resource.

Make use of the command

nvidia-smi

to see what other processes are on the gpu and how much memory they are using. Use 

top

to identify the names of other users and communicate with them, or us, to manage multi-use issues.

Limit GPU Card Use

If you are on a node with more than one GPU card, you can use cuda environment variables to restrict any CUDA based program to only see a subset of the GPU cards. For example, if there are two cards, they will be numbered 0 and 1 by CUDA. You could do

export CUDA_VISIBLE_DEVICES=1

and any command you run will only see that one GPU. Likewise, to start a single process with a limited view,

CUDA_VISIBLE_DEVICES=1 ipython

will start an interactive ipython session where tensorflow will only see device 1. Tensorflow will call the one device it sees device 0.
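The difference between exporting CUDA_VISIBLE_DEVICES and setting it per-command can be seen without any GPU at all - the per-command form only affects the one child process:

```shell
# Per-command environment variables only affect that one child process.
CUDA_VISIBLE_DEVICES=1 sh -c 'echo "child sees: $CUDA_VISIBLE_DEVICES"'   # child sees: 1
echo "this shell sees: ${CUDA_VISIBLE_DEVICES:-unset}"                    # unchanged here
```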

 

Tensorflow: Limit GPU Memory on a Card

With tensorflow, you can write your code to only grab the GPU memory that you need:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # grab GPU memory as needed, not all up front
config.allow_soft_placement = True       # fall back to the CPU for ops with no GPU kernel
with tf.Session(config=config) as sess:
    with tf.device('/gpu:0'):  # this with statement may not be necessary
        # now your program: variables will default to going on the GPU, and
        # any op that shouldn't go on a GPU will be put on the CPU
        pass

 

Configuration Subject to Change

At this point there are very few people using the GPU and the configuration of GPU support is subject to change. Presently the gpu conda environment is only built in the development rhel7 conda installation (thus the --dev switch for conda_setup above).