
Infrastructure

Problems accessing data, or data seems to have disappeared

Two things to check:

  1. Have you been given access to view the data?
  2. Has the data been removed due to the data retention policy?

For the first, users new to an experiment need to ask the experiment POC to add them to the experiment. After this is done, you must log out and log back in for the change to take effect.

For the second, when analysis code that used to work stops working, check whether the xtc files are still visible. For example, if you are analyzing run 68 of xpptut13, look in the xtc directory for that experiment, e.g.:

ls /reg/d/psdm/xpp/xpptut13/xtc

If the directory is visible but run 68 is not there, it may be that the data was removed due to the Data Retention Policy. The data is not lost and can be restored using the Web Portal of Experiments.

If the xtc directory is not visible, make sure you are running on a node that can see the data (i.e., a psana node rather than a psdev or pslogin node). If it is still not visible, email pcds-help@slac.stanford.edu.

How do I use the LCLS batch farm?

Follow the instructions here: Submitting Batch Jobs

How do I keep programs running if an ssh connection fails?

See if you can use the LSF batch nodes for your work. If not, three unix programs that help with this are tmux, nohup and screen. None of these programs will preserve a graphical program or an X11 connection, so run your programs in terminal mode.

tmux

For example, with tmux, one might do:

ssh psexport
ssh psana
# suppose we have landed on psanacs040 and that there is a matlab license here
tmux
matlab --nosplash --nodesktop

If you lose the connection to psanacs040, you can go back to that node and reattach:

ssh psexport
ssh psanacs040
tmux attach

You need to remember which node you ran tmux on. If you are running matlab, you can run the matlab license script with the --show-users parameter to see where it is running:

/reg/common/package/scripts/matlic --show-users

nohup

You can run a batch process with nohup (no hangup) as follows:

    nohup myprogram

For example, suppose we want to run a Python script that prints to the screen and save its output (the below syntax is for the bash shell):

nohup python myscript.py > myoutput 2>&1 &

Here we capture the program's output in myoutput, along with anything it writes to stderr (the 2>&1), and put the job in the background. The job will persist after you log out; you can look at the file myoutput the next day. As with tmux, you will need to remember the node you launched nohup on.

Why did my batch job fail? I'm getting 'command not found'

Before running your script, make sure you can run something, for instance do

  bsub -q psnehq pwd

(substitute the appropriate queue for psnehq). If you created a script and are running

  bsub -q psnehq myscript

then it may be that the current directory is not in your PATH; instead run

  bsub -q psnehq ./myscript

Also check that myscript is executable by you, and that it has the correct #! line to start the script.

sit_setup fails in script using ssh

Users have run into issues in the following scenario:

  • a script uses ssh to get to another node
  • inside the script, sit_setup is used to set up the environment on that node

The issue is that sit_setup is an alias (defined by /reg/g/psdm/etc/ana_env.sh). In bash, aliases are not expanded by default in non-interactive shells, so you have two options:

  1. use '. /reg/g/psdm/bin/sit_setup.sh' instead of sit_setup
  2. change the shell behavior with 'shopt -s expand_aliases'

Typically option 1 works best.

Psana

Topics specific to Psana

Where is my epics variable?

Make sure it is an EPICS variable - it may be a control variable instead. An easy way to see what is in the file is to use the psana modules that dump data. For instance:

  psana -m psana_examples.DumpEpics exp=cxitut13:run=0022

will show what EPICS variables are defined. Likewise

  psana -m psana_examples.DumpControl exp=xpptut13:run=0179

will almost always show what control variables are defined. DumpControl defaults to the standard Source "ProcInfo()" for control data. It is possible (though very unlikely) for control data to come from a different source. One can use the EventKeys module to see all Sources present, and then specify the source for DumpControl through a config file.

How do I access data inside a Psana class?

Look for an example in the psana_examples package that dumps this class. There should be both a C++ and Python module that dumps data for the class.

How do I find out the experiment number (expNum) or experiment?

Psana stores both the experiment name and expNum in the environment object - the Env that modules are passed, or that one obtains from the DataSource in interactive Psana. See the Interactive Analysis document and the C++ reference for Psana::Env.

Why isn't there anything in the Psana Event?

This may be due to a problem in the DAQ software that was used during the experiment. The DAQ software may have incorrectly set the L3 trim flag. This flag is supposed to be set for events that should not be processed (perhaps they did not meet a scientific criterion involving beam energy). When the flag is set, there should be very little in the xtc datagram - save perhaps epics updates. Psana (as of release ana-0.10.2 from October 2013) will by default not deliver these events to the user. The bug is that the flag was set even when there was valid data. To force psana to look inside datagrams where L3T was set, use the option l3t-accept-only. From the command line:

psana -o psana.l3t-accept-only=0 ...

Or you can add the option to your psana configuration file (if you are using one):

[psana]
l3t-accept-only=0

It seems that for as much as 5% of the time, CsPad DataV2 is not in the Event

The only distinction between CsPad DataV1 and DataV2 is the sparsification of particular sections, as given in the configuration object. That is, DataV2 may be sparser. The actual hardware sends out DataV1, but the DAQ event builder makes a DataV2 when it can. Sometimes the DAQ sends the original DataV1 instead of the DataV2. This can be due to limited resources, in particular competition with the resources required for compressing cspad in the xtc files. If you do not find a DataV2 in the Event, look for a DataV1.

How do I set psana verbosity from the config file?

Almost all psana options can be set from both the config file and the command line. Unfortunately, verbosity cannot be set from a config file. That is,

psana -v -v ...

has no config file equivalent. This is because verbosity is a function of the Message service that psana is a client of, rather than the server for. Unfortunately this makes it difficult to turn on debugging messages from within a python script that is run as

python myscript.py

However one can configure the message logging service through the MSGLOGCONFIG environment variable. In particular

MSGLOGCONFIG=trace python myscript.py

or

MSGLOGCONFIG=debug python myscript.py

turns on trace or debug level MsgLog messages. The above examples were tested with the bash shell. For full details on configuring the MsgLogger through the MSGLOGCONFIG environment variable, see https://pswww.slac.stanford.edu/swdoc/releases/ana-current/doxy-all/html/group__MsgLogger.html

Strange Results after doing Math with data in Python

One thing to bear in mind with the Python interface: detector data is almost always returned as a numpy array of an integral type. For example, after getting a waveform of Acqiris data, if you were to print it out during an interactive Python session, you might see

waveform
array([234,  53,   5, ..., 324], dtype=int16)

Note the dtype: this data is two-byte signed integers. In particular, if a value of the waveform is over 32000 and you add 10000 to it, that value will wrap around and become negative. It is a good idea, before doing any math with the data, to convert it to floats:

waveform = waveform.astype(np.float)
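A quick numpy demonstration of the wraparound (the values here are made up for illustration):

```python
import numpy as np

# Illustrative waveform values near the int16 maximum (32767)
waveform = np.array([32000, 100, -5], dtype=np.int16)

# Adding while still int16 wraps around: 32000 + 10000 = 42000,
# which does not fit in a signed 16-bit integer
wrapped = waveform + np.int16(10000)
# wrapped[0] is -23536, not 42000

# Converting to float first gives the expected result
safe = waveform.astype(np.float64) + 10000
# safe[0] is 42000.0
```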

Hdf5

Topics specific to hdf5

Why is there both CsPad DataV2 and CsPad DataV1 in the translation?

The only distinction between CsPad DataV1 and DataV2 is the sparsification of particular sections, as given in the configuration object. That is, DataV2 may be sparser. The actual hardware sends out DataV1, but the DAQ event builder makes a DataV2 when it can. Sometimes the DAQ sends the original DataV1 instead of the DataV2. This can be due to limited resources, in particular competition with the resources required for compressing cspad in the xtc files.

How do I write hdf5 files from C++ or Python?

Python:

From Python we recommend h5py. For interactive Python, an example is found at Using h5py to Save Data.
You can also use pytables, which is installed in the analysis release. Do

import tables

in your Python code.
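As an illustration, a minimal h5py sketch for saving and reloading an array (the file and dataset names here are made up):

```python
import h5py
import numpy as np

# Hypothetical array to save - e.g. an averaged background image
background = np.zeros((10, 20), dtype=np.float64)

# Write it to an hdf5 file
with h5py.File('myresults.h5', 'w') as f:
    dset = f.create_dataset('background', data=background)
    dset.attrs['comment'] = 'averaged reference image'

# Read it back
with h5py.File('myresults.h5', 'r') as f:
    restored = f['background'][:]
```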

C++

If you are developing a Psana module to process xtc, consider splitting it into a C++ module which puts ndarrays in the event store and a Python module which retrieves them and writes the hdf5 file using h5py.
You can also work with the C interface to hdf5. hdf5 is installed as a package in the analysis release. From your C++ code, do

#include "hdf5/hdf5.h"

A tip for learning hdf5 is to run example programs from an 'app' subdirectory of your package. For example, if you create an analysis release and a package for yourself, create an app subdirectory in that package and put an example file there:

~/myrelease/mypackage/app/hdf5_example.c

Now run 'scons' from the ~/myrelease directory, and then run hdf5_example.

Psana Modules - Using the Translator:

The Psana DDL-based Translator can be used to write ndarrays, strings and a few simple types that C++ modules register. These will be organized in the same groups that we use to translate xtc to hdf5. Datasets with event times will be written as well. To use this, create a psana config file that turns off the translation of all xtc types but allows translation of ndarrays and strings. An example cfg file is here: psana_translate_noxtc.cfg. You would just change the modules and files parameters for psana and the output_file parameter for Translator.H5Output. Load modules before the translator that put ndarrays into the event store. The Translator will pick them up and write them to the hdf5 file.

TimeTool

Here we cover topics specific to the offline TimeTool module. The TimeTool results can be obtained in one of two ways, depending on the experimental setup. The first is directly during data acquisition. For data recorded prior to Oct 13, 2014, the timetool results were always recorded as EPICS PVs. After Oct 13, 2014, they are still recorded as EPICS PVs, but also recorded in their own data type: TimeTool::DataV1. The latter is the preferred way to access the data; the EPICS PVs are provided for backward compatibility. The second is during offline analysis using the psana module TimeTool.Analyze. Similarly, this module puts a TimeTool::DataV1 object in the event store. Previously it put a collection of floats, or ndarrays, in the event store (for backward compatibility, it still puts the floats in the event store). Documentation on the psana TimeTool modules can be found in the psana - Module Catalog.

The case studies below reference code in the TimeToolExamples package from the svn users repository at SLAC. The simplest way to work through these case studies is to add this package to a test release and build/run the code - potentially modifying it to run on the experimental data that you have access to. If you are not at SLAC, downloads of the code are provided below.

Assuming you are at SLAC, start by making a test release:

newrel ana-current myrel
cd myrel
addpkg -u TimeToolExamples V00-00-05
scons

If you have any trouble with the addpkg -u command, you may need to do

kinit

to get a kerberos ticket to access the users svn repository. This tag (V00-00-05) has been tested with ana-0.13.4 and should work with later releases (ana-0.13.4 is the release in which TimeTool.Analyze started to use the TimeTool::DataV1 object).

The package includes scripts in the TimeToolExamples/app subdirectory. It is intended that users will modify these scripts for their purposes.

*IMPORTANT* after modifying a script in the app sub-directory of a package, you *MUST* run scons. This installs a copy of the script (with a modification to the #! line) in the myrel/arch directory. The file in myrel/arch is what is actually run.

SXR case study: EVR BYKICK signals no beam

This is the most straightforward way to run the TimeTool and, I believe, the expected way it is run in SXR. The laser is always on, but the beam is not present when the evr BYKICK code is present. The TimeTool.Analyze module will build up a reference during the BYKICK shots, and compute results for all other shots. An example script which executes this scenario can be found in

TimeToolExamples/app/tt_sxr_bykick

After installing the package in a test release as described above and running scons, do

tt_sxr_bykick -h

to get help from the script. The script was developed/tested against this data:

tt_sxr_bykick -e sxri0214 -r 158 -n 100

The -n option is a testing option that limits processing to the first 100 events. The script generates a log plot of the image and overlays the time tool pixel position result on the plot.

One can read the code for the script here:

tt_sxr_bykick code
#!@PYTHON@
__doc__='''
Runs the TimeTool module on a given run and experiment. Uses BYKICK evr code to
identify reference shots (nobeam, but laser present). Plots results.
'''
import argparse
import sys
import psana
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
###########################
# CONFIGURATION:
#
# psana options for main() function below. This
# has the same information as a psana config file, however we include it in the
# script so we can be sure to use the same values in our Python code. This is 
# basically a transcription of the TimeTool/data/sxr_timetool.cfg file into the
# Python dictionary that we can use to set psana options from a script as opposed to
# a config file.
EVR_BYKICK = 162
psanaOptions = {
    ########## psana configuration #################
    'psana.modules':'TimeTool.Analyze',
    ########## TimeTool.Analyze configuration #######
    #  Key for fetching timetool camera image
    'TimeTool.Analyze.get_key':'TSS_OPAL',
    #  Results are written to <put_key>
    'TimeTool.Analyze.put_key':'TTANA',
    #  Indicate absence of beam for updating reference
    'TimeTool.Analyze.eventcode_nobeam':EVR_BYKICK,
    #  Indicate events to skip (no laser, for example)
    'TimeTool.Analyze.eventcode_skip':0,
    #  Polynomial coefficients for position_time calculation
    'TimeTool.Analyze.calib_poly':'0 1 0',
    #  Project onto X axis?
    'TimeTool.Analyze.projectX':True,
    #  Minimum required bin value of projected data
    'TimeTool.Analyze.proj_cut':0,
    #  ROI (x) for signal
    'TimeTool.Analyze.sig_roi_x':'0 1023',
    #  ROI (y) for signal
    'TimeTool.Analyze.sig_roi_y':'408 920',
    #  ROI (x) for sideband
    'TimeTool.Analyze.sb_roi_x':'' ,
    #  ROI (y) for sideband
    'TimeTool.Analyze.sb_roi_y':'', 
    #  Rolling average convergence factor (1/Nevents)
    'TimeTool.Analyze.sb_avg_fraction':0.05,
    #  Rolling average convergence factor (1/Nevents)
    'TimeTool.Analyze.ref_avg_fraction':1.0,
    #  Read weights from a text file
    'TimeTool.Analyze.weights_file':'',
    #  Indicate presence of beam from IpmFexV1::sum() [monochromator]
    'TimeTool.Analyze.ipm_get_key':'',
    #           'TimeTool.Analyze.ipm_beam_threshold':'',
    #  Load initial reference from file
    'TimeTool.Analyze.ref_load':'',
    #  Save final reference to file
    'TimeTool.Analyze.ref_store':'timetool.ref',
    #  Generate histograms for initial events, dumped to root file
    'TimeTool.Analyze.dump':20,
    #  Filter weights
    'TimeTool.Analyze.weights':'0.00940119 -0.00359135 -0.01681714 -0.03046231 -0.04553042 -0.06090473 -0.07645332 -0.09188818 -0.10765874 -0.1158105  -0.10755824 -0.09916765 -0.09032289 -0.08058788 -0.0705904  -0.06022352 -0.05040479 -0.04144206 -0.03426838 -0.02688114 -0.0215419  -0.01685951 -0.01215143 -0.00853327 -0.00563934 -0.00109415  0.00262359  0.00584445  0.00910484  0.01416929  0.0184887   0.02284319  0.02976289  0.03677404  0.04431778  0.05415214  0.06436626  0.07429347  0.08364909  0.09269116  0.10163601  0.10940983  0.10899065  0.10079016  0.08416471  0.06855799  0.05286105  0.03735241  0.02294275  0.00853613',
}
def hatLogTransform(X, preLogOffset=5.0):
    '''For plotting, we transform X to be 
    log(b+X)-log(b)       for the non-negative values of X and 
    -(log(|-b+X|)-log(b)) for the negative values of X
    where b is preLogOffset above. Setting b to 1.0 (the minimum) emphasizes values between -1 and 1.
    '''
    assert preLogOffset >= 1.0
    nonNegative = X >= 0.0
    negative = np.logical_not(nonNegative)
    X[nonNegative] = np.log(preLogOffset+X[nonNegative]) - np.log(preLogOffset)
    X[negative] = -(np.log(np.abs(-preLogOffset+X[negative])) - np.log(preLogOffset))
      
def plotEvent(camera, bkg, position_pixel, sig_roi_x, sig_roi_y, preLogOffset, stopAfterPlot, plot_offset_x, plotTitle=''):
    '''Creates and clears figure 1. Plots hatLogTransform(camera-bkg, preLogOffset). Overlays a
    vertical line at position_pixel + plot_offset_x.
    Optional parameters sig_roi_x, sig_roi_y give the bounding box (where position_pixel was
    computed from), and stopAfterPlot lets the user hit enter to keep the plot on screen.
    '''
    opalArr = camera - bkg
    hatLogTransform(opalArr, preLogOffset)
    plt.figure(1)
    plt.clf()
    plt.imshow(opalArr)
    plt.hold(True)
    # plot bounding box where signal computed from
    x1,x2 = sig_roi_x
    y1,y2 = sig_roi_y
    plt.plot([x1, x2, x2, x1, x1],
             [y1, y1, y2, y2, y1], '-y', linewidth=2, label='roi')
    # plot pixel position, add offset
    plt.plot([position_pixel + plot_offset_x, 
              position_pixel + plot_offset_x],
             [max(1, y1-10), 
              min(opalArr.shape[1], y2+10)], '-r', linewidth=1, label='timeTool position')
    plt.xlim([0,opalArr.shape[1]])
    plt.ylim([opalArr.shape[0],0])
    plt.legend()
    plt.title(plotTitle)
    plt.draw()
    if stopAfterPlot:
        raw_input("hit enter to continue: ")
def main(experiment, run, numEvents, stopAfterPlot, plotBias, preLogOffset, psanaOptions):
    global EVR_BYKICK
    psana.setOptions(psanaOptions)
    ref_avg_fraction = psanaOptions['TimeTool.Analyze.ref_avg_fraction']
    sig_roi_x = map(int,psanaOptions['TimeTool.Analyze.sig_roi_x'].split())
    sig_roi_y = map(int,psanaOptions['TimeTool.Analyze.sig_roi_y'].split())
    put_key = psanaOptions['TimeTool.Analyze.put_key']
    ds = psana.DataSource('exp=%s:run=%d'%(experiment, run))
    evrSrc = psana.Source('DetInfo(NoDetector.0:Evr.0)')
    opalSrc = psana.Source(psanaOptions['TimeTool.Analyze.get_key'])
    lastByKick = 0
    ourOwnRef = None
    for idx,evt in enumerate(ds.events()):
        evr = evt.get(psana.EvrData.DataV3, evrSrc)
        if evr is None: continue
        evrCodes = [fifoEvent.eventCode() for fifoEvent in evr.fifoEvents()]
        evrCodes.sort()
        frame = evt.get(psana.Camera.FrameV1, opalSrc)
        if frame is None: continue
        timetool = evt.get(psana.TimeTool.DataV1, put_key) 
        if EVR_BYKICK in evrCodes:
            cameraFrame = np.array(frame.data16(),dtype=np.float)
            if ourOwnRef is None:
                ourOwnRef = cameraFrame
            else:
                ourOwnRef = ref_avg_fraction * cameraFrame + (1.0-ref_avg_fraction)*ourOwnRef
            print "event %4d has BYKICK. events since last BYKICK=%4d. evr codes: %s" % (idx, idx-lastByKick, ','.join(map(str,evrCodes)))
            lastByKick=idx
        if timetool is not None:
            ratio_ampl_nxt = timetool.amplitude()/(1e-30+timetool.nxt_amplitude())
            print "event %4d has TIMETOOL. pos_pixel=%7.1f amplitude=%7.1e amplitude/nxt_amplitude=%.4f" % \
                        (idx, timetool.position_pixel(), timetool.amplitude(), ratio_ampl_nxt)
            cameraFrame = np.array(frame.data16(),dtype=np.float)
            plotTitle = 'scaled plot of Opal camera - ref. Both positive and negative values on log scale.'
            plotEvent(cameraFrame, ourOwnRef, timetool.position_pixel(), sig_roi_x, sig_roi_y, 
                      preLogOffset, stopAfterPlot, plotBias, plotTitle=plotTitle)
        if numEvents > 0 and idx + 1 >= numEvents:
            break
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('-e', '--exp', type=str, help="experiment, for example sxri0214", required=True)
    parser.add_argument('-r', '--run', type=int, help="run, for example 158", required=True)
    parser.add_argument('-n', '--events', type=int, help="number of events to process, default (0) means all", default=0)
    parser.add_argument('-b', '--plotbias', type=float, help="amount to add to pixel_position during plots, correct for fixed bias", default=26.0)
    parser.add_argument('-s', '--stop', action='store_true', help="when plotting, stop after each plot and wait for user input", default=False)
    parser.add_argument('-l', '--logoffset', type=float, help="for plotting, how to adjust log plot, log(A+x) where x is this parameter", default=1.0)
    args = parser.parse_args()
    main(experiment = args.exp, 
         run = args.run, 
         numEvents = args.events, 
         stopAfterPlot = args.stop,
         plotBias = args.plotbias, 
         preLogOffset = args.logoffset,
         psanaOptions = psanaOptions)

SXR case study: reference/signal in different runs

Below we go over a script for generating offline TimeTool data for an sxr experiment. For this experiment, run 144 was done with the beam blocked, and run 150 with the beam on. The laser is always on in both runs.

This requires some configuration of the TimeTool. We run the TimeTool.Analyze module on run 144, telling it that the beam is always off. Analyze builds up a reference, and we configure it to save that reference. We also save an averaged background image for our own interactive plotting later.
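The reference build-up is a rolling average controlled by the ref_avg_fraction option shown earlier. A small numpy sketch of the idea (update_reference is an illustrative helper, not part of the TimeTool API, and the frames and fraction are made up):

```python
import numpy as np

def update_reference(ref, frame, fraction):
    """Rolling-average update: each new no-beam frame is blended into
    the reference with weight `fraction` (the convergence factor)."""
    if ref is None:  # the first no-beam shot seeds the reference
        return frame.astype(np.float64)
    return fraction * frame + (1.0 - fraction) * ref

# Illustrative frames (real ones come from the Opal camera)
ref = None
for frame in [np.full((4, 4), 10.0), np.full((4, 4), 20.0)]:
    ref = update_reference(ref, frame, fraction=0.05)
# after the second frame: 0.05*20 + 0.95*10 = 10.5 everywhere
```

A small fraction means the reference changes slowly, so occasional bad shots have little effect; fraction=1.0 (as in the bykick script's configuration) makes the reference simply track the most recent no-beam frame.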

We then run TimeTool.Analyze on run 150. We save the resulting timetool values to an hdf5 file using the psana xtc to hdf5 translator, see The XTC-to-HDF5 Translator for details.

Finally, to check the work, we process run 150 in index mode, see psana - Python Script Analysis Manual for details on indexing mode (random access) to psana events. We load the time tool values from the h5 file and plot them against the opal, after subtracting our background image.

The TimeToolExamples package discussed above includes two files for this case study. The first is the script that designates the sxr experiment data to run on: TimeToolExamples/app/tt_sxrd5814. You can take a look at the code here: tt_sxrd5814. The second is a library of functions called by the driver script, TimeToolExamples/src/ttlib.py, which you can download here: ttlib.py. Instead of the addpkg command above, one can do

newpkg mypkg
mkdir mypkg/src
mkdir mypkg/app

and download the files into

mypkg/app/tt_sxrd5814
mypkg/src/ttlib.py

and run

scons

to build the code. To run the script, from the release directory do

tt_sxrd5814

Note that users of other experiments will not be able to access the data from this experiment; you will want to modify TimeToolExamples/app/tt_sxrd5814 to read your own data. Note the warning above about running scons after modifying a script in the app subdirectory.

After running the script as is, it should produce the files

ttref_sxrd5814_r0144.txt       # reference that TimeTool.Analyze produced
ttref_sxrd5814_r0144.npy       # our own background reference for plotting
tt_sxrd5814_r0150.h5           # the h5 file with the timetool results for r150


This script builds a background from the first 300 events in run 144,
and processes the first 500 events in run 150.

A recursive listing of the h5 file should show, among other groups:

h5ls -r tt_sxrd5814_r0150.h5 
/Configure:0000/Run:0000/CalibCycle:0000/TimeTool::DataV1/noSrc__TTANA/data Dataset {500/Inf}
/Configure:0000/Run:0000/CalibCycle:0000/TimeTool::DataV1/noSrc__TTANA/time Dataset {500/Inf}


The data dataset contains the TimeTool::DataV1 objects. If you get a verbose listing of this dataset, you'll see something like:

psanacs058:~/rel/TimeTool3 $ h5ls -v tt_sxrd5814_r0150.h5/Configure:0000/Run:0000/CalibCycle:0000/TimeTool::DataV1/noSrc__TTANA/data
data                     Dataset {500/Inf}
    Location:  1:8396
    Links:     1
    Chunks:    {2048} 114688 bytes
    Storage:   28000 logical bytes, 114688 allocated bytes, 24.41% utilization
    Type:      struct {
                   "event_type"       +0    enum native unsigned int {
                       Dark             = 0
                       Reference        = 1
                       Signal           = 2
                   }
                   "amplitude"        +8    native double
                   "position_pixel"   +16   native double
                   "position_time"    +24   native double
                   "position_fwhm"    +32   native double
                   "ref_amplitude"    +40   native double
                   "nxt_amplitude"    +48   native double
               } 56 bytes

This shows the six numeric fields of the TimeTool data, plus the enum values for the first field, which identifies the event type.

The time dataset stores the event IDs for the data - the seconds, nanoseconds and fiducials.
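Assuming the compound layout shown in the listing above, the fields can be read back by name with h5py. The sketch below writes a small synthetic stand-in file (the file name and values are made up, and event_type is stored as a plain int rather than an HDF5 enum for simplicity) and then reads it back:

```python
import h5py
import numpy as np

# Compound dtype mirroring the TimeTool::DataV1 layout shown above
tt_dtype = np.dtype([('event_type', 'u4'),
                     ('amplitude', 'f8'),
                     ('position_pixel', 'f8'),
                     ('position_time', 'f8'),
                     ('position_fwhm', 'f8'),
                     ('ref_amplitude', 'f8'),
                     ('nxt_amplitude', 'f8')])

# Synthetic stand-in for the translated h5 file (illustration only)
sample = np.zeros(3, dtype=tt_dtype)
sample['event_type'] = [1, 2, 2]           # Reference, Signal, Signal
sample['position_pixel'] = [0.0, 512.5, 498.25]

with h5py.File('tt_example.h5', 'w') as f:
    f.create_dataset('data', data=sample)

# Reading back: h5py exposes compound-type fields by name
with h5py.File('tt_example.h5', 'r') as f:
    data = f['data'][:]
    signal = data['event_type'] == 2       # 2 == Signal in the enum
    positions = data['position_pixel'][signal]
```

With the real file you would open tt_sxrd5814_r0150.h5 and index the full group path down to .../noSrc__TTANA/data instead of the flat 'data' used here.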

After running the script once, running it a second time will produce interactive plots.
