Psana
Topics specific to Psana
Where is my EPICS variable?
Make sure it is an EPICS variable - it may be a control/monitor variable instead. An easy way to see what is in the file is to use the psana modules that dump data. For instance:
psana -m psana_examples.DumpEpics exp=cxitut13:run=0022
will show what EPICS variables are defined. Likewise
psana -m psana_examples.DumpControl exp=xpptut13:run=0179
will almost always show what control variables are defined. It defaults to using the standard Source "ProcInfo()" for control data. It is possible (though very unlikely) for control data to come from a different source. One can use the EventKeys module to see all Sources present, and then specify the source for DumpControl through a config file.
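Once you know the name of an EPICS variable, you can read it through the Env's epicsStore. Below is a minimal sketch for interactive Python; the experiment, run and PV name are placeholders to replace with your own:

import psana

ds = psana.DataSource('exp=cxitut13:run=22')
epics_store = ds.env().epicsStore()
for evt in ds.events():
    # value() returns the most recent reading of the named PV as of this event,
    # or None if that PV is not in the data
    print(epics_store.value('CXI:DS1:MMS:06.RBV'))
    break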
How do I access data inside a Psana class?
Look for an example in the psana_examples package that dumps this class. There should be both a C++ and a Python module that dump data for the class.
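In Python, the pattern the dump modules illustrate is to get the typed object out of the event with evt.get() and then call its accessor methods. A rough sketch follows; the data type, Source string, and accessor below are only examples - copy the real ones from the relevant dump module:

import psana

ds = psana.DataSource('exp=cxitut13:run=22')
for evt in ds.events():
    ebeam = evt.get(psana.Bld.BldDataEBeamV7, psana.Source('BldInfo(EBeam)'))
    if ebeam is not None:
        # accessor methods come from the class interface, see the C++ reference
        print(ebeam.ebeamL3Energy())
    break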
How do I find out the experiment number (expNum) or experiment?
Psana stores both the experiment name and the expNum in the environment object (Env) that modules are passed, or that one obtains from the DataSource in interactive Psana. See the Interactive Analysis document and the C++ reference for Psana::Env.
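For example, from interactive Python (a short sketch; the same calls are available on the Env passed to modules):

import psana

ds = psana.DataSource('exp=xpptut13:run=179')
env = ds.env()
print(env.experiment())   # experiment name, e.g. 'xpptut13'
print(env.expNum())       # numeric experiment id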
Why isn't there anything in the Psana Event?
This may be due to a problem in the DAQ software that was used during the experiment. For some experiments collected around Oct 2013 to Jan 2014, the DAQ software may have incorrectly set the L3 trim flag. This flag is supposed to be set for events that should not be processed (perhaps they did not meet a scientific criterion involving beam energy, or a DAQ criterion). When the flag is set, there should be very little in the xtc datagram - save perhaps EPICS updates. Psana (as of release ana-0.10.2) will by default not deliver these events to the user. The bug is that the flag was set even when there was valid data. To force psana to look inside datagrams where L3T was set, use the option l3t-accept-only. From the command line, do:
psana -o psana.l3t-accept-only=0 ...
Or you can add the option to your psana configuration file (if you are using one):
[psana]
l3t-accept-only=0
Hdf5
Topics specific to hdf5
How do I write hdf5 files from C++ or Python?
Python:
From Python we recommend h5py. For interactive Python, an example is found at Using h5py to Save Data.
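As a quick illustration, here is a minimal h5py sketch (independent of psana) that writes a numpy array to an hdf5 file and reads it back:

import h5py
import numpy as np

data = np.random.random((100, 100))
with h5py.File('myfile.h5', 'w') as f:
    f.create_dataset('mydata', data=data)

with h5py.File('myfile.h5', 'r') as f:
    arr = f['mydata'][:]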
For Python you can also use pytables. This is installed in the analysis release. Do
import tables
in your Python code.
C++:
If developing a Psana module to process xtc, consider splitting your module into a C++ module which puts ndarrays in the event store, and a Python module which retrieves them and writes the hdf5 file using h5py.
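A rough sketch of the Python half of such a split is below. It assumes the upstream C++ module put a 2D float64 ndarray in the event under the key 'myarray'; the key, the ndarray type token, and the output file name are placeholders, and the exact evt.get() arguments depend on how the C++ module added the array:

import psana
import h5py

class SaveNdarrays(object):
    def beginjob(self, evt, env):
        self.h5file = h5py.File('ndarrays.h5', 'w')
        self.count = 0

    def event(self, evt, env):
        # the type token must match the element type and rank of the C++ ndarray;
        # ndarray_float64_2 is used here for a 2D array of doubles
        arr = evt.get(psana.ndarray_float64_2, 'myarray')
        if arr is None:
            return
        self.h5file.create_dataset('event_%06d' % self.count, data=arr)
        self.count += 1

    def endjob(self, evt, env):
        self.h5file.close()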
You can also work with the C interface to hdf5. hdf5 is installed as a package in the analysis release. From your C++ code, do
#include "hdf5/hdf5.h"
A tip for learning hdf5 is to run example programs from an 'app' subdirectory of your package. For example, if you create an analysis release and a package for yourself, create an app subdirectory in that package and put an example file there:
~/myrelease/mypackage/app/hdf5_example.c
Now run 'scons' from the ~/myrelease directory, and then run hdf5_example.
Psana Modules - Using the Translator:
The psana DDL-based Translator can be used to write ndarrays, strings and a few simple types that C++ modules register. These will be organized in the same groups that we use when translating xtc to hdf5. Datasets with event times will be written as well. To use this, create a psana config file that turns off the translation of all xtc types but allows translation of ndarrays and strings. An example cfg file is here: psana_translate_noxtc.cfg. You would just change the modules and files parameters for psana, and the output_file parameter for Translator.H5Output. Load modules before the Translator that put ndarrays into the event store; the Translator will pick them up and write them to the hdf5 file.
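As a rough illustration, the relevant pieces of such a config file might look like the following (mypackage.MyNdarrayProducer is a placeholder for your own module, and the options from psana_translate_noxtc.cfg that turn off translation of the xtc types are omitted here):

[psana]
files = exp=cxitut13:run=22
modules = mypackage.MyNdarrayProducer Translator.H5Output

[Translator.H5Output]
output_file = myoutput.h5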
How do I keep programs running if an ssh connection fails?
See if you can use the LSF batch nodes for your work. If not, three unix programs that help with this are tmux, nohup and screen. None of these programs will preserve a graphical program or an X11 connection, so run your programs in terminal mode.
tmux
For example, with tmux, if one does
ssh psexport
ssh psana
# suppose we have landed on psanacs040 and that there is a matlab license here
tmux
matlab -nosplash -nodesktop
If you lose the connection to psanacs040, you can go back to that node and reattach:
ssh psexport
ssh psanacs040
tmux attach
You need to remember the node you ran tmux on. If you are running matlab, you can run the matlab license script with the --show-users parameter to see where you are running it:
/reg/common/package/scripts/matlic --show-users
nohup
You could run a batch process with nohup (no hangup) as follows
nohup myprogram
For example, suppose we want to run a Python script that prints to the screen and save its output (the below syntax is for the bash shell):
nohup python myscript.py > myoutput 2>&1 &
Here we are capturing the output of the program in myoutput, along with anything it writes to stderr (the 2>&1), and the trailing & puts the job in the background. The job will persist after you log out. You can take a look at the output in the file myoutput the next day. As with tmux, you will need to remember the node you launched nohup on.
Why did my batch job fail? I'm getting 'command not found'
Before running your script, make sure you can run something, for instance do
bsub -q psnehq pwd
(substitute the appropriate queue for psnehq). If you created a script and are running
bsub -q psnehq myscript
then it may be that the current directory is not in your path; run
bsub -q psnehq ./myscript
Check that myscript is executable by you, and check that it starts with the correct #! line.