Use cases

List of use cases

Requirements

Requirements for data acquisition and tagging of runs

  1. Need to be able to mark runs as "dark runs" before and after acquisition
    Note: To be decided
    • DAQ needs to add a checkbox somewhere to indicate that this run is a "dark run", then tag the run with the appropriate "Data Type"
    • Need a similar checkbox in the e-log to be able to tag a run as a "dark run" after it's been taken.
    • Ideally, there should be a field in the e-log that would indicate whether the detector has been power cycled, experienced a radical temperature change, etc., to record the reason for another dark run. I can think of a few reasons for a new dark run:
      • Detector power cycled
      • Detector temperature has changed
      • Opportunistic (we're just taking data without x-rays; might as well make it useful)
      • Image using previous dark run doesn't look right
      • Your reasons here
    • Do we need to specify a validity range for a particular dark run?

Requirements for analysis

The average user should be able to easily get both a raw and a fully corrected image out of an analysis (in either psana or ami).
This may require some standardization between ami and psana.
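
A minimal sketch of what this could look like in psana's Python interface, assuming a Detector-style wrapper; the experiment, run, and detector source names below are placeholders:

    # Sketch only: experiment, run, and detector source are placeholders.
    from psana import DataSource, Detector

    ds  = DataSource('exp=cxis0113:run=12')
    det = Detector('CxiDs1.0:Cspad.0')

    for evt in ds.events():
        raw = det.raw(evt)    # uncorrected data, straight from the xtc file
        cal = det.calib(evt)  # pedestal-subtracted, gain-corrected, masked
        img = det.image(evt)  # corrected data assembled using the geometry
        break                 # first event is enough for this sketch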

...

  • AMI has checkboxes that allow the user to select/unselect various corrections.
    Note: To be decided
    • AMI applies whatever calibrations/corrections it finds in the current directory first.
    • In psana, the user can generate new pedestals and masks using a psana module.
      • By what method are the AMI pedestals, etc. generated?
      • Can the pedestals generated by the psana module be inspected or does it apply the correction in real time? Does it/can it generate a flat file that can be used by ami?
    • psana needs a trivial way to select new dark runs, etc., and new calibration files (one possible form is sketched after this list).
  1. Design a system in which the expert user can compare results obtained with different pedestals, pixel status, and masks
    Note: To be decided
    • Do we need to agree on a flat file format for pedestals, pixel status, and masks so that both ami and psana can read the same file? (See the sketch after this list.)
  2. Calibration directory tree should be auto-generated for all devices that need corrections
    Note: To be decided
    • To accomplish this, need to define versioning for the calibration files
    • Who/what automatically generates the calibration files for a particular detector for a particular run?
    • What are the inputs? (Can the inputs be determined automatically or do they require human input?)
    • What might trigger a re-generation of calibration data? (Suppose we learn there was an error in the geometry files or that the dark run was flawed in some way)
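
One possible shape for the open items above (a trivial way for psana to pick up alternative calibration files, plus a flat file format that both ami and psana can read) is sketched below; the calib-dir option follows psana's existing configuration style, and all paths are placeholders:

    # Sketch only: all paths are placeholders.
    import numpy as np
    import psana

    # Point psana at an alternative (e.g. sandbox) calibration directory
    # instead of the experiment's default calib/.
    psana.setOption('psana.calib-dir', '/reg/data/ana12/cxi/cxis0113/my_calib')

    # If pedestals are kept as plain-text 2-D arrays, psana, ami, or any
    # quick script can read the same file with a generic loader.
    peds = np.loadtxt('my_calib/CsPad::CalibV1/CxiDs1.0:Cspad.0/pedestals/12-34.data')
    print(peds.shape, peds.mean())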

Requirements for versioning of calibration data

  1. User needs to be able to re-generate calibration data using different pedestals, masks, geometry files, etc. without destroying what already exists.
  2. Users need to be able to have a private "sandbox" for generating/testing out new calibration sets
  3. There needs to be a single tool for generating and tracking sets of calibration data
  4. Each set of calibrations needs to be uniquely identified, probably using the calib-version field.
  5. The user should be able to easily determine what input files were used to generate a set of calibration data. What is the provenance of the data?
  6. There should be a way of comparing the data in different versions. (How do the pedestals compare?)
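
For requirement 6, the comparison could be as simple as differencing two pedestal arrays; a sketch (file names and the calib-version labels are hypothetical):

    # Sketch only: file names are hypothetical.
    import numpy as np

    old = np.loadtxt('calib/v1/CxiDs1.0:Cspad.0/pedestals/0-end.data')
    new = np.loadtxt('calib/v2/CxiDs1.0:Cspad.0/pedestals/0-end.data')

    diff = new - old
    print('mean shift: %.3f ADU, max |diff|: %.1f ADU'
          % (diff.mean(), np.abs(diff).max()))
    # Flag pixels whose pedestal moved by more than some threshold:
    print('pixels changed by > 5 ADU:', int((np.abs(diff) > 5).sum()))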

...

Info: calib directory

As a reminder, the calib directory looks like this:
calib/<calib-version>/<source-device>/<type>/<run-range>.data

It resides in each experiment's directory, e.g. /reg/data/ana12/cxi/cxis0113/calib

Validity is determined by the run-range.

<calib-version> - unique name associated with the calibration software; file-group versions are allowed at this level. Needs to be specified...

<source-device> - unique device name, which is the source of the data;
available names can be seen with % psana -m EventKeys <data-file>.xtc

<type> - calibration type name, for example: pedestals, tilt, center, ...

<run-range> - run validity range, for example: 0-end, 12-34, 56-end, ...
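
Since validity is determined by the run-range in the file name, the lookup logic is essentially: among the files under <source-device>/<type> whose range contains the run, pick one. A sketch of that parsing (the tie-breaking policy for overlapping ranges is itself one of the open questions):

    # Sketch only: resolves which <run-range>.data file covers a given run.
    import os

    def covers(fname, run):
        # True if a file named like '12-34.data' or '56-end.data' covers run.
        first, last = os.path.splitext(fname)[0].split('-')
        hi = float('inf') if last == 'end' else int(last)
        return int(first) <= run <= hi

    def find_calib(calib_dir, version, device, ctype, run):
        d = os.path.join(calib_dir, version, device, ctype)
        matches = [f for f in os.listdir(d)
                   if f.endswith('.data') and covers(f, run)]
        # Placeholder policy: when ranges overlap, prefer the latest start.
        return max(matches, key=lambda f: int(f.split('-')[0])) if matches else None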

Note: To be decided

Everything. How do we use the calib-version to associate calibrations with run # and device, and how do we keep a history of changes?

  • Should we follow the DAQ configdb model and generate a new file key each time a calibration file is modified?
    • Would it be possible in this scheme to write a comment describing why the calibrations were changed?
  • Should we try using CVS to track everything?
  • Create a MySQL database to track input xtc files and associate groups of input files with a unique file key and a unique set of output files? (A minimal schema is sketched after this list.)
  • Your idea here
  • Can we auto-generate a set of calibration files whenever a dark run is taken?
  • Should we have some way for "experts" like the instrument scientists to validate that a particular set of calibration data are the "good" set for an experiment?
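
If the database route is taken, the schema might be as small as two tables: one row per calibration set, one row per input file, joined on a unique key. A sketch using sqlite3 as a stand-in for MySQL (all table and column names are hypothetical):

    # Sketch only: sqlite3 stands in for MySQL; names are hypothetical.
    import sqlite3

    con = sqlite3.connect('calib_tracking.db')
    con.executescript("""
    CREATE TABLE IF NOT EXISTS calib_set (
        set_id     INTEGER PRIMARY KEY,       -- unique file key
        device     TEXT,
        calib_type TEXT,
        run_first  INTEGER,                   -- validity range start
        run_last   INTEGER,                   -- validity range end (NULL = end)
        created    TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        comment    TEXT                       -- why the calibration changed
    );
    CREATE TABLE IF NOT EXISTS calib_input (
        set_id   INTEGER REFERENCES calib_set(set_id),
        xtc_file TEXT                         -- input used to generate the set
    );
    """)
    con.commit()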