Use cases

List of use cases

Requirements

Requirements for data acquisition and tagging of runs

  1. Need to be able to mark runs as "dark runs" before and after acquisition

    To be decided

    • DAQ needs to add a checkbox somewhere to indicate that this run is a "dark run", then tag the run with the appropriate "Data Type"
    • Need a similar checkbox in the e-log to be able to tag a run as a "dark run" after it has been taken.
    • Ideally, the e-log should also have a field recording why another dark run was needed, e.g. the detector was power cycled or experienced a drastic temperature change. (A sketch of such a tagging record follows this list.) A few reasons for a new dark run:
      • Detector power cycled
      • Detector temperature has changed
      • Opportunistic (we're taking data without x-rays anyway; might as well make it useful)
      • Image using previous dark run doesn't look right
      • Your reasons here
    • Do we need to specify a validity range for a particular dark run?
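
    A minimal sketch of the record such a checkbox could write (Python; the DarkRunTag structure and all field names are illustrative, not an existing DAQ or e-log API):

      # Hypothetical record written when a run is tagged as a dark run.
      from dataclasses import dataclass
      from typing import Optional, Tuple

      @dataclass
      class DarkRunTag:
          experiment: str                  # e.g. "cxis0113"
          run: int                         # run number being tagged
          reason: str                      # "power-cycle", "temperature-change", "opportunistic", ...
          tagged_post_hoc: bool = False    # True when tagged from the e-log after the run
          validity: Optional[Tuple[int, int]] = None  # (first_run, last_run), if we decide we need one

      # Tagged at acquisition time because the detector was power cycled:
      tag = DarkRunTag("cxis0113", 42, reason="power-cycle")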

Requirements for analysis

The average user should be able to easily get both a raw and a fully corrected image out of an analysis (in either psana or AMI).
This may require some standardization between AMI and psana.

  1. Design a system that requires no specialized knowledge to call up a fully corrected image.
  • User should not have to worry about the location of calibration data
  • User should not have to worry about which run they are analyzing.
  • There should be a "golden" default calibration (changing rarely, and with some oversight?) that applies to each run automatically if the user does not supply their own calibrations as part of the analysis. (A sketch of the intended call pattern follows.)
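
  As a sketch of the target experience, the user names only the experiment, run, and detector, and the system finds the "golden" calibration automatically. The Detector-style interface below is assumed for illustration, not a committed psana API:

    # Sketch only: Detector and its methods are assumed, not guaranteed.
    from psana import DataSource, Detector

    ds = DataSource('exp=cxis0113:run=42')   # user names experiment and run only
    det = Detector('CsPad')                  # no calibration paths anywhere

    for evt in ds.events():
        raw = det.raw(evt)      # uncorrected frame
        img = det.image(evt)    # fully corrected image via the "golden" default
        break
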
  2. Design a system in which the user gets the same, or very similar, result in both AMI and psana when using default settings.

    To be decided

    Key differences between AMI and psana now:

    • AMI does not do a full geometry correction: the detector is mapped onto a rectangular frame using 90° rotations only. Is this good enough?
    • In AMI, unfilled pixels are masked from the user. Is this the case in psana?
      • When do you want these pixels masked and when don't you?
      • How does the user specify whether these pixels are masked or not?
        • In psana?
        • In AMI?
    • In AMI, correction constants are searched for in the working directory first, then in /reg/g/pcds/pds/cspadcalib. Only the most recent calibration is used.
      • Do we want to change this behavior?
      • Can we make it so that the most recent calibration is the default for psana as well?
      • Can/should we make it obvious to the user when they are using something from the working directory vs. the defaults? (The sketch after this list prints a note in that case.)
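
    A sketch of that search order, with an explicit note when a working-directory file shadows the central defaults (the helper name is illustrative):

      import os

      DEFAULT_CALIB_DIR = '/reg/g/pcds/pds/cspadcalib'   # central defaults

      def find_calib_file(filename, workdir='.'):
          """AMI-style lookup: working directory first, then the defaults.
          Prints a note when a local file wins, so the user knows."""
          local = os.path.join(workdir, filename)
          if os.path.exists(local):
              print('NOTE: using local calibration %s (overrides defaults)' % local)
              return local
          return os.path.join(DEFAULT_CALIB_DIR, filename)
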
  3. Design a system in which the expert user can change the corrections applied by selecting different inputs for the calibrations.
  • AMI has checkboxes that allow the user to select/unselect various corrections.

    To be decided

    • AMI applies the calibrations/corrections it finds in the current directory first.
    • In psana, the user can generate new pedestals and masks using a psana module.
      • By what method are the AMI pedestals, etc. generated?
      • Can the pedestals generated by the psana module be inspected, or is the correction applied in real time? Does it/can it generate a flat file that AMI can use?
    • psana needs a trivial way to select new dark runs, etc., and new calibration files. (One possible shape is sketched below.)
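
    One possible shape for that "trivial way": a single option naming an alternate calib directory before the DataSource is opened. The option name below is our assumption of such a hook and should be checked against the psana documentation:

      # Sketch: point psana at a different calib directory with one line.
      # 'psana.calib-dir' is an assumed option name, not a verified one.
      import psana

      psana.setOption('psana.calib-dir', '/reg/data/ana12/cxi/cxis0113/my_sandbox/calib')
      ds = psana.DataSource('exp=cxis0113:run=42')
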
  4. Design a system in which the expert user can compare results of runs with different pedestals, pixel status, and masks.

    To be decided

    • Do we need to agree on a flat file format for pedestals, pixel status, and masks so that both AMI and psana can read the same file? (A candidate is sketched below.)
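
    One candidate is a plain whitespace-delimited ASCII array, which both sides can parse trivially; a sketch with numpy (the layout is a proposal, not an agreed format):

      import numpy as np

      # Proposal: one number per pixel, '#' comment lines for provenance.
      pedestals = np.zeros((185, 388))   # e.g. one CsPad 2x1 section

      np.savetxt('pedestals.data', pedestals, fmt='%.3f',
                 header='pedestals exp=cxis0113 dark-run=42')

      readback = np.loadtxt('pedestals.data')   # same reader for AMI and psana
      assert readback.shape == pedestals.shape
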
  5. The calibration directory tree should be auto-generated for all devices that need corrections.

    To be decided

    • To accomplish this, we need to define versioning for the calibration files.
    • Who/what automatically generates the calibration files for a particular detector for a particular run? (A sketch of the generation step follows this list.)
    • What are the inputs? (Can the inputs be determined automatically or do they require human input?)
    • What might trigger a re-generation of calibration data? (Suppose we learn there was an error in the geometry files or that the dark run was flawed in some way)
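
    A sketch of the generation step: given a device and the dark run just taken, build the tree in the layout shown in the "calib directory" section below (the helper and example names are illustrative):

      import os

      def new_pedestal_path(exp_dir, calib_version, source_device,
                            first_run, last_run='end'):
          """Create calib/<calib-version>/<source-device>/pedestals/
          and return the <run-range>.data path for the new constants."""
          d = os.path.join(exp_dir, 'calib', calib_version,
                           source_device, 'pedestals')
          if not os.path.isdir(d):
              os.makedirs(d)
          return os.path.join(d, '%s-%s.data' % (first_run, last_run))

      path = new_pedestal_path('/reg/data/ana12/cxi/cxis0113',
                               'CsPad::CalibV1', 'CxiDs1.0:Cspad.0', 42)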

Requirements for versioning of calibration data

  1. User needs to be able to re-generate calibration data using different pedestals, masks, geometry files, etc. without destroying what already exists.
  2. Users need to be able to have a private "sandbox" for generating/testing out new calibration sets
  3. There needs to be a single tool for generating and tracking sets of calibration data
  4. Each set of calibrations needs to be uniquely identified.
  5. The user should be able to easily determine what input files were used to generate a set of calibration data. What is the provenance of the data?
  6. There should be a way of comparing the data in different versions. (How do the pedestals compare?) A comparison sketch follows this list.
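
A sketch of such a comparison for pedestals, assuming the flat-text format discussed under the analysis requirements (file names are illustrative):

  import numpy as np

  # Requirement 6: quantify how two pedestal versions differ.
  a = np.loadtxt('calib_v1/pedestals/0-end.data')
  b = np.loadtxt('calib_v2/pedestals/0-end.data')

  diff = b - a
  print('mean shift: %.3f ADU' % diff.mean())
  print('rms  shift: %.3f ADU' % diff.std())
  print('pixels shifted by >5 ADU: %d' % (abs(diff) > 5).sum())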

Common timescales for the calibration constants

  • Dark runs/pedestals can be identified during a specific experiment. Timescales are neatly encapsulated within a run/experiment.
  • Gains can come from a special run that happened weeks ago during a previous experiment.
    • Do we have permission problems with this data if it was taken by a different experiment? What happens if people regenerate their constants privately, include these special runs, and don't have permission to access them?
    • How is this handled now?
  • Metrology happens roughly once every 6 months and goes with the detector. Two kinds of metrology:
    • inside the quad, determined when the quad is made
    • geometry between the quads
  • Not yet at the point where we care about the common mode correction algorithms.

calib directory

As a reminder, the calib directory looks like this:
calib/<calib-version>/<source-device>/<type>/<run-range>.data

It resides in each experiment's directory, e.g. /reg/data/ana12/cxi/cxis0113/calib

Validity is determined by the run-range.

<calib-version> - unique name associated with the calibration software; file-group versions are allowed at this level. Needs to be specified...

<source-device> - unique device name, which is a source of data;
available names can be listed with: % psana -m EventKeys <data-file>.xtc

<type> - calibration type name, for example: pedestals, tilt, center, ...

<run-range> - run validity range, for example: 0-end, 12-34, 56-end, ...
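
Resolving validity is then a small lookup: among the files in the <type> directory, choose the one whose run range covers the run being analyzed. A sketch (the tie-break when ranges overlap is our assumption and would need to be part of the spec):

  import os, re

  def calib_file_for_run(type_dir, run):
      """Pick the <run-range>.data file covering `run`.
      Assumed tie-break: the covering range with the largest first run wins."""
      best, best_first = None, -1
      for name in os.listdir(type_dir):
          m = re.match(r'^(\d+)-(\d+|end)\.data$', name)
          if not m:
              continue
          first = int(m.group(1))
          last = float('inf') if m.group(2) == 'end' else int(m.group(2))
          if first <= run <= last and first > best_first:
              best, best_first = name, first
      return os.path.join(type_dir, best) if best else None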

To be decided

How do we associate calibrations with run # and device and keep a history of changes?

  • Should we follow the DAQ configdb model and generate a new file key each time a calibration file is modified?
    • Would it be possible in this scheme to write a comment describing why the calibrations were changed?
  • Should we try using CVS to track everything?
  • Create a MySQL database to track input xtc files and associate groups of input files with a unique file key and a unique set of output files? (A schema sketch follows this list.)
  • Your idea here
  • Can we auto-generate a set of calibration files whenever a dark run is taken?
  • Should we have some way for "experts" like the instrument scientists to validate that a particular set of calibration data is the "good" set for an experiment?
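
  If the database route is taken, a minimal schema might track inputs, outputs, a unique set key, a free-text comment for the "why", and an expert-validation flag. Sketched with sqlite3 for brevity; all table and column names are illustrative:

    import sqlite3

    db = sqlite3.connect('calib_tracking.db')
    db.executescript("""
    CREATE TABLE IF NOT EXISTS calib_set (
        set_key   TEXT PRIMARY KEY,  -- unique id for the calibration set
        created   TEXT,              -- timestamp
        comment   TEXT,              -- why the calibrations were changed
        validated INTEGER DEFAULT 0  -- set by an instrument-scientist expert
    );
    CREATE TABLE IF NOT EXISTS calib_input (
        set_key  TEXT REFERENCES calib_set(set_key),
        xtc_file TEXT                -- input xtc, e.g. the dark run
    );
    CREATE TABLE IF NOT EXISTS calib_output (
        set_key  TEXT REFERENCES calib_set(set_key),
        path     TEXT                -- generated file in the calib tree
    );
    """)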