Requirements
Requirements for data acquisition and tagging of runs
- Need to be able to mark runs as "dark runs" before and after acquisition
To be decided: DAQ needs to add a checkbox somewhere to indicate that a run is a "dark run", then tag the run with the appropriate "Data Type". (A sketch of what such a tag might carry follows this list.)
- Need a similar checkbox in the e-log to be able to tag a run as a "dark run" after it has been taken.
- Ideally, there should be a field in the e-log indicating why another dark run was taken (e.g., the detector was power cycled or experienced a radical temperature change). A few reasons for a new dark run:
- Detector power cycled
- Detector temperature has changed
- Opportunistic (we're taking data without x-rays anyway; might as well make it useful)
- Image using previous dark run doesn't look right
- Your reasons here
- Do we need to specify a validity range for a particular dark run?
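Purely as a hedged sketch (the DarkRunTag type and its field names do not exist in any current DAQ or e-log API; they simply mirror the requirements above), the metadata such a tag might carry:

```python
# Hypothetical sketch only: DarkRunTag and its fields are not part of any
# existing DAQ or e-log API; they mirror the requirements listed above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DarkRunTag:
    experiment: str                   # e.g. "cxis0113"
    run: int                          # run number being tagged
    data_type: str = "dark"           # the "Data Type" tag set by DAQ or e-log
    reason: Optional[str] = None      # "power-cycle", "temperature-change",
                                      # "opportunistic", "previous-dark-bad", ...
    valid_range: Tuple[int, Optional[int]] = (0, None)  # run range; None = "end"

# Example: run 42 tagged as a dark run taken after a detector power cycle,
# valid from run 42 to the end of the experiment.
tag = DarkRunTag(experiment="cxis0113", run=42,
                 reason="power-cycle", valid_range=(42, None))
```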
Requirements for analysis
The average user should be able to easily get a raw and fully corrected image out of an analysis (either psana or AMI).
This may require some standardization of AMI and psana.
...
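As a sketch of what "easy" could look like on the psana side, assuming the Python Detector interface with raw/calib/image methods (the experiment, run, and detector names below are placeholders, and the exact calls should be treated as assumptions about the current API):

```python
from psana import DataSource, Detector

ds = DataSource('exp=cxis0113:run=42')   # experiment/run are placeholders
det = Detector('cspad')                  # detector name/alias is a placeholder

for evt in ds.events():
    raw = det.raw(evt)     # uncorrected data, as recorded
    cal = det.calib(evt)   # pedestal-subtracted / corrected data
    img = det.image(evt)   # fully corrected, geometry-assembled 2-D image
    break
```

Whether AMI can expose the same raw/corrected pair with the same defaults is part of the standardization question.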
- AMI has checkboxes that allow the user to select/unselect various corrections.
To be decided: in AMI, should the user be able to select their own corrections, beyond selecting/unselecting the provided ones?
- AMI applies whatever calibrations/corrections it finds in the current directory first.
- In psana, the user can generate new pedestals and masks using a psana module.
- By what method are the AMI pedestals, etc. generated?
- Can the pedestals generated by the psana module be inspected, or does it apply the correction in real time? Does it/can it generate a flat file that can be used by AMI?
- psana needs a trivial way to select new dark runs, etc., and new calibration files (see the sketch after this list).
- Design a system in which the expert user can compare results of runs with different pedestals, pixel status, and masks
To be decided: Do we need to agree on a flat file format for pedestals, pixel status, and masks so that both AMI and psana can read the same file?
- Calibration directory tree should be auto-generated for all devices that need corrections
To be decided: to accomplish this, we need to define versioning for the calibration files.
- Who/what automatically generates the calibration files for a particular detector for a particular run?
- What are the inputs? (Can the inputs be determined automatically or do they require human input?)
- What might trigger a re-generation of calibration data? (Suppose we learn there was an error in the geometry files or that the dark run was flawed in some way)
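On the flat-file and calibration-selection points above, a hedged sketch: pedestal files in the calib tree are plain text arrays, so anything that can parse a text array could share them, and psana can be pointed at a non-default calibration directory via an option. All paths below are hypothetical:

```python
import numpy as np
import psana

# Pedestal "flat files" in the calib tree are plain text arrays, e.g.
# calib/CsPad::CalibV1/CxiDs1.0:Cspad.0/pedestals/0-end.data
peds = np.loadtxt('calib/CsPad::CalibV1/CxiDs1.0:Cspad.0/pedestals/0-end.data')

# Round-tripping through plain text keeps the format readable by AMI,
# psana, or anything else that can parse a text array.
np.savetxt('my_pedestals.data', peds, fmt='%.3f')

# Selecting a different calibration set: point psana at a private
# "sandbox" calib directory instead of the experiment default.
psana.setOption('psana.calib-dir', '/some/user/sandbox/calib')
```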
Requirements for versioning of calibration data
- User needs to be able to re-generate calibration data using different pedestals, masks, geometry files, etc. without destroying what already exists.
- Users need to be able to have a private "sandbox" for generating/testing out new calibration sets
- There needs to be a single tool for generating and tracking sets of calibration data
- Each set of calibrations needs to be uniquely identified, probably using the calib-version field.
- The user should be able to easily determine what input files were used to generate a set of calibration data. What is the provenance of the data?
- There should be a way of comparing the data in different versions. (How do the pedestals compare? See the sketch below.)
...
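For the version-comparison question ("How do the pedestals compare?"), a minimal sketch assuming two pedestal flat files from different calibration versions (file names are hypothetical):

```python
import numpy as np

# Hypothetical paths: pedestals from two different calibration versions.
peds_a = np.loadtxt('calib.v1/pedestals/0-end.data')
peds_b = np.loadtxt('calib.v2/pedestals/0-end.data')

diff = peds_b - peds_a
print('mean shift: %.2f ADU' % diff.mean())
print('max |diff|: %.2f ADU' % np.abs(diff).max())
print('pixels changed by > 5 ADU: %d' % (np.abs(diff) > 5).sum())
```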
Info: As a reminder, the calib directory resides in each experiment's directory, e.g. /reg/data/ana12/cxi/cxis0113/calib. Validity is determined by the run-range. The path components are:
- <calib-version> - unique name associated with the calibration software
- <source-device> - unique device name, which is the source of the data
- <type> - calibration type name, for example: pedestals, tilt, center, ...
- <run-range> - run validity range, for example: 0-end, 12-34, 56-end, ...
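A concrete example of that layout (the version and device names are illustrative only):

```
/reg/data/ana12/cxi/cxis0113/calib/
  CsPad::CalibV1/              # <calib-version>
    CxiDs1.0:Cspad.0/          # <source-device>
      pedestals/               # <type>
        0-end.data             # <run-range>.data
        12-34.data
```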
To be decided: Everything. How do we use the calib-version to associate calibrations with run # and device, and keep a history of changes?