
This page describes smalldata_tools, a suite of code for reducing the xtc data to small(er) hdf5 files at several stages of analysis. The code can be found on GitHub at https://github.com/slac-lcls/smalldata_tools.

At XPP or XCS, the code setup is usually taken care of by the beamline staff. For other hutches, please contact the controls POC or pcds-poc-l. The working directories are generally:

/cds/data/psdm/<hutch>/<expname>/results/smalldata_tools 

for the offline system (psana) and

/cds/data/drpsrcf/<hutch>/<expname>/scratch/smalldata_tools 

for the (new) fast feedback system (psffb).

Online and offline analysis

Two analysis infrastructures, each comprising various queues and interactive nodes, are available depending on the status of the experiment.

Online analysis

Ongoing experiments generally use the online analysis infrastructure, the fast feedback system (FFB). More information on the system can be found here: Fast Feedback System

This system is faster and gives priority to ongoing experiments. Some time after the experiment is over, access to the data on this system is locked and only the offline system remains available.

Offline analysis

After the experiment is over, the data and smalldata production code are moved to the offline system, the anafs. This system remains available for analysis indefinitely and can be used to reprocess or refine the data.

How do I access the computing resources?

ssh -X <ACCOUNT>@pslogin.slac.stanford.edu

If using NoMachine, log in to psnxserv.slac.stanford.edu

Then, for the offline analysis:

ssh -X psana
source /reg/g/psdm/etc/psconda.sh -py3 # Environment to use psana, etc

or, for the online analysis:

ssh -X psffb
source /reg/g/psdm/etc/psconda.sh -py3 # Environment to use psana, etc

Workflow

The analysis is generally split into two steps, allowing for easy diagnostics and customization of the analysis process. Please contact your controls and data POC to assess the best approach for your experiment.

  • The first step is the generation of the "small data" file, the colloquial name for run-based hdf5 files that contain data arrays whose first dimension is the number of events (so shot-to-shot information is retained). This production can be run automatically on each new run, making the data available only a few minutes after the run has ended, or on request if you want to tweak the data extraction. Processing of the area detectors can be configured at this stage, performing operations such as extracting a region of interest, azimuthal integration, photon counting, etc. It is not recommended to save full large-area detector data at this step.
    The following pages describe this in more detail:

    Generation of small hdf5 files

    Configuration of SmallData

    Adding Data from area detectors

  • The second stage depends much more on the type of experiment. Different options are available:
    • Binning of the full detector images can be performed by setting up the cube analysis, which returns an h5 file of binned data and images, resulting in a relatively lightweight file. While the shot-to-shot information is lost at this point, this approach is generally recommended, as it is more carefree and does not require delving into the details of the binning procedure. It is also almost mandatory in cases where the analysis of the full image is needed (Q-resolved diffuse scattering analysis, for example). Note that the shot-to-shot information remains readily available from the file produced in the first step (without the area detector data); a short sketch for inspecting a cube file follows this list.
      Details on the cube workflow are given here: Cube production
    • Adapt one of the templated analysis notebooks to suit the needs of the current experiment. These templates have been made for the more common experiments performed at the different LCLS endstations and are available at /reg/g/psdm/sw/tools/smalldata_tools/example_notebooks (please refrain from modifying these released notebooks in place). This approach works well for lightweight data analysis, where the area detector images are reduced to a single number (or a few numbers) in the first step (integration of an ROI or azimuthal binning, for example). It is also suited to cases where detailed shot-to-shot information needs to be examined and full control over the data binning process is desired.
      Documentation on the example notebooks can be found here: Example notebooks.
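As a rough orientation for the cube output, the sketch below opens a cube file with h5py and lists its contents. The file name is a placeholder and the exact dataset layout is an assumption for illustration; check the actual keys of your own file.

import h5py

# The path is a placeholder; look in the hdf5/smalldata directory for the
# actual cube file produced for your run.
fname = '/cds/data/psdm/<hutch>/<experiment>/hdf5/smalldata/<cube_file>.h5'

with h5py.File(fname, 'r') as f:
    # Print every group and dataset name to discover the bin axes and the
    # binned detector images (datasets here hold one entry per bin, not
    # per event).
    f.visit(print)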

The contents of the smallData files are described here: smallData Contents
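For a first look at such a file, a minimal h5py sketch along these lines can help. The file name pattern and the dataset name mentioned in the comments are assumptions; rely on the page above for the actual contents.

import h5py

# The file name pattern is an assumption; check the hdf5/smalldata
# directory for the actual names.
fname = '/cds/data/psdm/<hutch>/<experiment>/hdf5/smalldata/<expname>_Run0001.h5'

with h5py.File(fname, 'r') as f:
    f.visit(print)  # list all available datasets
    # The first dimension of each dataset is the number of events, so
    # shot-to-shot quantities line up index by index, e.g. (hypothetical
    # dataset name): roi = f['<detector>/ROI_sum'][:]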

Access analysis code and folders from JupyterLab

In JupyterHub, you can only navigate within your home folder. It is thus recommended to create shortcuts (soft links) to the relevant experiment folders for ease of access.

From JupyterHub, click on the "+" symbol at the top left, select "Terminal", and make a soft link to the experiment folder:

ln -s /cds/data/psdm/<hutch>/<experiment>/ ./<link>

If the experiment is going to make use of the FFB, make a second soft link:

ln -s /cds/data/drpsrcf/<hutch>/<experiment>/ ./<link>

Access data

The data will by default be written to:

/cds/data/drpsrcf/<hutch>/<experiment>/scratch/hdf5/smalldata

for the FFB processing and 

/cds/data/psdm/<hutch>/<experiment>/hdf5/smalldata

for the processing using the 'SLAC' endpoint / the psana system. Data will be moved from the FFB system to this directory within 1-2 weeks after the experiment has ended.
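If you want to locate these files programmatically, a sketch like the following works on both systems. Preferring the FFB directory when it exists and the '*.h5' glob pattern are assumptions; adapt them to your setup.

import glob
import os

# Base directories as documented above; prefer the FFB copy when present.
ffb_dir = '/cds/data/drpsrcf/<hutch>/<experiment>/scratch/hdf5/smalldata'
offline_dir = '/cds/data/psdm/<hutch>/<experiment>/hdf5/smalldata'
base = ffb_dir if os.path.isdir(ffb_dir) else offline_dir

# List the produced smalldata files, sorted by name (typically by run).
print(sorted(glob.glob(os.path.join(base, '*.h5'))))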
