
Access



Source one of the following environment setup scripts:

source /sdf/group/lcls/ds/ana/sw/conda1/manage/bin/psconda.sh


source /sdf/group/lcls/ds/ana/sw/conda2/manage/bin/psconda.sh
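
To check that the sourced environment is active, a quick sanity check (a suggestion, not part of the original recipe; it assumes psana is provided by these environments) is:

which python
python -c "import psana"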


Dan recommends running offline_ami in a non-conda session.

/sdf/group/lcls/ds/daq/current/build/pdsapp/bin/x86_64-rhel7-opt/configdb_readxtc -e ../../xtc/

/sdf/group/lcls/ds/daq/ami-current/build/ami/bin/x86_64-rhel7-opt/offline_ami -p /sdf/data/lcls/ds/cxi/cxic00121/xtc


Advice on running large memory jobs in batch:

You can ask for up to 480G on a single milano node, which is equivalent to asking for exclusive use of that node, so the more memory you request, the longer it may take to schedule the job.

(In our case, run e.g. du -h on the h5 file(s) that AnalyzeH5.py will load, then add a couple of GB to be safe. Memory requests are not quantized, so small adjustments will likely not affect scheduling time much or at all.)
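
For example, reusing the h5 file path from the sample sbatch command below (the reported size here is illustrative):

du -h /sdf/data/lcls/ds/cxi/cxic00121/results/offsetCalibration//lowFlux//SimpleClusters_129by129batchtest_c0_r111_n666.h5
# if du reports roughly 19G, request --mem 21G (file size plus ~2G of headroom)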

One can do e.g.

sacct -j 52763825 -o jobid,jobname,partition,user,account%18,maxvmsize,avevmsize,maxrss,averss,maxpages,reqtres%36

after the job runs to see how much memory was actually used.
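
If you mainly care about peak memory, a shorter field list also works (these are standard sacct columns; <jobid> is a placeholder):

sacct -j <jobid> -o jobid,jobname,reqmem,maxrss,elapsed,state

MaxRSS is the peak resident memory the job actually used; compare it against ReqMem when sizing the next --mem request.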

lcls:default is being phased out. You should preferably use an account that is appropriate for your project/exp/task/analysis.  To find out what you have access to, do

sacctmgr show associations user=philiph

For those of us in ps-data (run groups to check), you can just use the experiment account, e.g. lcls:cxic00121; as a member of ps-data you are a member of all experiment groups.
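
For example (the format= option and $USER substitution are standard; the output depends on your memberships):

groups
sacctmgr show associations user=$USER format=account%20,partition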

Sample command using the above:

sbatch -p milano --account lcls:cxic00121 --mem 21G --wrap="python AnalyzeH5.py -e cxic00121 -r 111 -f /sdf/data/lcls/ds/cxi/cxic00121/results/offsetCalibration//lowFlux//SimpleClusters_129by129batchtest_c0_r111_n666.h5 ..."
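
After submitting, you can watch the job with squeue (standard SLURM, shown here as a suggestion) and, once it finishes, inspect memory use with the sacct command above:

squeue -u $USER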


