
Infrastructure

Problems accessing data, or data seems to have disappeared

Two things to check:

  1. Have you been given access to view the data?
  2. Has the data been removed due to the data retention policy?

...

If the directory is visible but run 68 is not there, it may be that the data was removed due to the Data Retention Policy Version 2. The data is still available on disk and can be restored using the Web Portal of Experiments.

If the xtc directory is not visible, make sure you are running on a node that can see the data (i.e., you are on a psana node rather than a psdev or pslogin node). If it is still not visible, email pcds-help@slac.stanford.edu.

How do I use the LCLS batch farm?

Follow the instructions here: Submitting Batch Jobs

How do I keep programs running if an ssh connection fails?

See if you can use the LSF batch nodes for your work. If not, see if NoMachine technology will work. Otherwise, three Unix programs can help with this: tmux, nohup, and screen. None of these programs will preserve a graphical program or an X11 connection, so run your programs in terminal mode.

...

Here we are capturing the output of the program in myoutput, along with anything it writes to stderr (the 2>&1), then putting the job in the background. The job will persist after you log out, and you can look at the output in the file myoutput the next day. As with tmux, you will need to remember which node you launched nohup on.
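The recipe described above can be sketched as follows (myprogram and myoutput are placeholder names for your own executable and log file):

```shell
# Run myprogram immune to hangups; stdout and stderr both go to myoutput.
# "myprogram" is a placeholder for your own command.
nohup ./myprogram > myoutput 2>&1 &
echo "started as PID $!"
```

The trailing & puts the job in the background; $! is the PID of the backgrounded job, which is handy for checking on it later with ps.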

Why did my batch job fail? I'm getting 'command not found'

Before running your script, make sure you can run something simple; for instance, do

...

Check that myscript is executable by you, and check that it has the correct #! line at the start of the script.
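For example, a quick sanity check (myscript is a placeholder name for your own script):

```shell
ls -l myscript        # the owner permissions should include the x bit
head -n 1 myscript    # should be a #! line, e.g. #!/bin/bash
chmod u+x myscript    # add execute permission for yourself if it is missing
```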

sit_setup fails in script using ssh

Users have run into issues in the following scenario:

...

Typically option 1) works best.

Psana

Topics specific to Psana

Where is my epics variable?

Make sure it is an EPICS variable; it may be a control monitor variable instead. An easy way to see what is in the file is to use psana modules that dump data. For instance:

...

will almost always show what control variables are defined. It defaults to using the standard Source "ProcInfo()" for control data. It is possible (though very unlikely) for control data to come from a different source. One can use the EventKeys module to see all Sources present, and then specify the source for DumpControl through a config file.
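As a sketch, a minimal psana config file that loads both dump modules might look like this (the psana_examples.EventKeys and psana_examples.DumpControl names assume these modules live in the psana_examples package mentioned below; check your release):

```
[psana]
modules = psana_examples.EventKeys psana_examples.DumpControl
```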

How do I access data inside a Psana class?

Look for an example in the psana_examples package that dumps this class. There should be both a C++ and Python module that dumps data for the class.

How do I find out the experiment number (expNum) or experiment?

Psana stores both the experiment name and expNum in the environment object (Env) that modules are passed, or that one obtains from the DataSource in interactive Psana. See the Interactive Analysis document and the C++ reference for Psana::Env.

Why isn't there anything in the Psana Event?

This may be due to a problem in the DAQ software that was used during the experiment: the DAQ software may have incorrectly set the L3 trim flag. This flag is supposed to be set for events that should not be processed (perhaps they did not meet a scientific criterion involving beam energy). When the flag is set, there should be very little in the xtc datagram, save perhaps EPICS updates. Psana (as of release ana-0.10.2 from October 2013) will by default not deliver these events to the user. The bug is that the flag was set even when there was valid data. To force psana to look inside datagrams where L3T was set, set the option l3t-accept-only to 0. To do this from the command line:

psana -o psana.l3t-accept-only=0 ...

Or you can add the option to your psana configuration file (if you are using one):

[psana]
l3t-accept-only=0

It seems that for as much as 5% of the time, CsPad DataV2 is not in the Event

The only distinction between CsPad DataV1 and DataV2 is the sparsification of particular sections as given in the configuration object; that is, DataV2 may be sparser. The actual hardware puts out DataV1, but the DAQ event builder makes a DataV2 when it can. Sometimes the DAQ sends the original DataV1 instead of the DataV2. This can be due to limited resources, in particular competition with the resources required for compressing the CsPad in the xtc files. If you do not find a DataV2 in the Event, look for a DataV1.

How do I set psana verbosity from the config file?

Almost all psana options can be set from the config file as well as the command line. Unfortunately, verbosity cannot be set from a config file. That is,

...

turns on trace or debug level MsgLog messages. The above examples were tested with the bash shell. For full details on configuring the MsgLogger through the MSGLOGCONFIG environment variables, see https://pswww.slac.stanford.edu/swdoc/releases/ana-current/doxy-all/html/group__MsgLogger.html
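As a bash sketch (hedged: the exact MSGLOGCONFIG syntax is described on the MsgLogger page linked above, and "debug" here is an assumed value):

```shell
# Raise MsgLogger verbosity for subsequent psana runs in this shell.
export MSGLOGCONFIG="debug"
# then run psana as usual, e.g.
# psana -m MyModule my.cfg
```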

Strange Results after doing Math with data in Python

One thing to bear in mind with the Python interface: detector data is almost always returned as a numpy array of an integral type. For example, after getting a waveform of Acqiris data, if you were to print it out during an interactive Python session, you might see

...

# convert to a floating point type before doing arithmetic
waveform = waveform.astype(np.float64)
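To see why the conversion matters, here is a small self-contained demonstration (plain numpy, no psana required) of how integral arrays behave:

```shell
python3 - <<'EOF'
import numpy as np

# Detector data typically arrives as a small integer type.
w = np.array([300, 100, 200], dtype=np.int16)

print((w // 2).dtype)        # integer division stays integral: int16
print((w - w.mean()).dtype)  # mixing with a float promotes: float64

w = w.astype(np.float64)     # convert once, up front
print(w.dtype)               # float64
EOF
```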

Specialized Psana Options, liveTimeOut, firstControlStream, etc

There are a number of specialized psana options that we do not generally document in the main user documentation. These are options that engineers, points of contact (POCs), or occasionally users may need in special circumstances. They are described below:

PSXtcInput.XtcInputModule.liveTimeout (integer)
    Live timeout in seconds when running in live mode (defaults to 120).

PSXtcInput.XtcInputModule.runLiveTimeout (integer)
    Starting in ana-0.13.17, live timeout when waiting for a new run (defaults to 0).

psana.first_control_stream (integer)
    The stream number at which the control or IOC streams (xtcav, etc.) start; defaults to 80.

psana.l3t-accept-only (bool)
    When true, checks for trimmed datagrams; if a datagram is trimmed, it does not trigger an event for the user. Trimmed datagrams should be empty. There was a case where trimming was not set up properly, and we needed to turn this off to see the data. Note: this is different than the L3T pass/fail that POCs may set up, though one does not expect trimmed data unless the L3T is a fail.

psana.allow-corrupt-epics (bool)
    For data from before approximately July 2014, when the DAQ EPICS archiver recorded different variables at different rates, the index files do not contain enough information to access the correct EPICS values. When this happens and the user is running in index mode, psana will exit with a fatal error. Setting this flag to true allows psana to analyze these runs (in index mode), but some of the EPICS data could be corrupted.
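For example, following the config-file pattern shown earlier on this page, a longer live timeout might be set like this (a sketch; the 300-second value is arbitrary, and the section naming should be checked against your release):

```
[psana]
first_control_stream = 80

[PSXtcInput.XtcInputModule]
liveTimeout = 300
```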


Hdf5

Topics specific to hdf5

Why is there both CsPad DataV2 and CsPad DataV1 in the translation?

The only distinction between CsPad DataV1 and DataV2 is the sparsification of particular sections as given in the configuration object; that is, DataV2 may be sparser. The actual hardware puts out DataV1, but the DAQ event builder makes a DataV2 when it can. Sometimes the DAQ sends the original DataV1 instead of the DataV2. This can be due to limited resources, in particular competition with the resources required for compressing the CsPad in the xtc files.

How do I write hdf5 files from C++ or Python?

Python:

From Python we recommend h5py. For interactive Python, an example is found at Using h5py to Save Data.
You can also use pytables, which is installed in the analysis release. Do

...

Psana Modules - Using the Translator:

The Psana DDL-based Translator can be used to write ndarrays, strings, and a few simple types that C++ modules register. These will be organized into the same groups that we use when translating xtc to hdf5, and datasets with event times will be written as well. To use this, create a psana config file that turns off the translation of all xtc types but allows translation of ndarrays and strings. An example cfg file is here: psana_translate_noxtc.cfg. You would just change the modules and files parameters for psana, and the output_file parameter for Translator.H5Output. Load modules that put ndarrays into the event store before the Translator; the Translator will pick them up and write them to the hdf5 file.
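A sketch of such a config file, using only the parameter names mentioned above (MyNdarrayProducer and my_arrays.h5 are hypothetical placeholders; see the linked psana_translate_noxtc.cfg for the full set of options that turn off xtc translation):

```
[psana]
modules = MyNdarrayProducer Translator.H5Output

[Translator.H5Output]
output_file = my_arrays.h5
```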

Chunks, compression, and why is my random access algorithm so slow?

By default, hdf5 files are translated in compressed chunks. The compression (standard gzip with deflate=1; the valid range is [0,9]) reduces file size by about 50%. The chunk size varies with the data type. The chunk policy focuses on not having too many chunks, as we believe this degrades performance. Typically we shoot for chunks of 16MB; however, for full CsPad this would be only 4 objects per chunk, so we default to 22 objects per chunk, or 100MB chunks. This is fine for a program that reads linearly through an hdf5 file, or a parallel program that divides the file into sections (i.e., start, middle, end), but it is not optimal for random access analysis. If you read one CsPad object, you also read the other 21 in its chunk and decompress the whole 100MB chunk.

...

There are translation options to control the number of objects per chunk. One can also turn off compression when translating.

TimeTool

The TimeTool results can be obtained in one of two ways, depending on the experimental setup: directly during data acquisition, or during offline analysis using the psana module TimeTool.Analyze. Regarding data acquisition: for data recorded prior to Oct 13, 2014, the timetool results were always recorded as EPICS PVs. After Oct 13, 2014, they are recorded in their own data type, TimeTool::DataV*. The version increments as the time tool develops: for data recorded up to around November 2014 this was DataV1; around December 2014 it changed to DataV2. Regarding offline analysis with the TimeTool.Analyze psana module: as with the experiment data files, this module puts a TimeTool::DataV* object in the event store, depending on the version of the software. Please see the TimeTool documentation in the Psana Module Catalog, TimeTool Package.


...

An error occurred during authentication

This is a common error when a user has not obtained an AFS token. You can obtain a token with:

kinit