
2016-11-22 email from Dan
Daniel Jeremy Higley <dhigley@stanford.edu>   

Tue 11/22/2016 11:02 AM
To:   Dubrovin, Mikhail;  
Cc:   Dakovski, Georgi L.;  

Hi Mikhail,

Thanks for this info. I cc'ed Georgi, who is very involved in measurements that use these kinds of analyses and might have more useful input.

The model you outlined sounds great! I would implement the tool I outlined in section 1 of the document I sent, "monitoring data accuracy for experimental optimization", within it. This is a pretty generic thing to do and does not require customization for different experiments. The tool I described in section 2, "monitoring data meaning to guide experiments", is a bit more experiment-specific, and may require modifying AMI a bit or users modifying some psana shared-memory code. For example, in the example plot of the document I sent, we sort the data into 4 different cases according to whether a pump laser is on or off and the direction of an applied magnetic field. Different experiments may want to sort on different things.
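As a toy illustration of that 4-case sorting (the labels, accessors, and sample values here are made up for illustration, not taken from any experiment code):

```python
# Hypothetical sketch: bin each shot into one of 4 cases by
# pump-laser state and applied-magnetic-field direction.

def sort_key(pump_on, field_sign):
    """Return one of 4 case labels for a shot."""
    pump = "pump_on" if pump_on else "pump_off"
    field = "field_pos" if field_sign > 0 else "field_neg"
    return (pump, field)

cases = {}  # case label -> list of per-shot values
# Fake shots: (pump laser on?, field direction, measured value)
shots = [(True, +1, 1.2), (False, -1, 0.8), (True, -1, 1.1)]
for pump_on, field_sign, value in shots:
    cases.setdefault(sort_key(pump_on, field_sign), []).append(value)
```

A different experiment would only need to swap out `sort_key` to sort on other quantities.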

Within your model, I really like the idea of being able to switch between a configuration-parameters tab and a plots tab. It would also be great if we could open a few of these monitoring tools at once, in case we want to monitor the behavior of more than two detectors.

For the monitoring for measurement accuracy tool, the configuration parameters would be as follows:
General configuration parameters:
  - Experiment and run, or shared memory (as you mentioned)
  - Number of shots in update: The plots would update every time the diagnostic gets this many shots.
  - Percentiles of pulse energies of shots to keep when calculating the histogram/SNR statistic (low, high). I typically discard the lowest ~30 percent of shots and the highest ~3-5 percent of shots when calculating these. The lowest shots are discarded because they may have a higher relative dark-noise-to-signal ratio; the highest shots are discarded because a detector may behave nonlinearly at such high intensities.
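As a rough sketch of that percentile cut (the default thresholds below, keeping the 30th-95th percentile band, and the array names are illustrative assumptions, not a fixed spec):

```python
import numpy as np

def percentile_mask(pulse_energies, low=30.0, high=95.0):
    """Boolean mask keeping shots whose pulse energy lies within
    the [low, high] percentile band (drops lowest ~30% and
    highest ~5% by default)."""
    lo, hi = np.percentile(pulse_energies, [low, high])
    e = np.asarray(pulse_energies)
    return (e >= lo) & (e <= hi)

# Example with fake pulse energies: roughly 65% of shots survive the cut.
energies = np.random.default_rng(0).normal(1.0, 0.1, 1000)
keep = percentile_mask(energies)
```

The mask can then be applied to the per-shot detector values before computing the histogram/SNR statistic.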

Then, one would also need to input parameters for the two detectors to compare. There should be a choice of CCD, Acqiris, or "GMD" for each of these; the following parameters would then be needed, depending on what each detector is.

For CCD:
  - Name of CCD (e.g. "andor")
  - "Signal" ROI: (top, bottom, left, right)
  - "Dark" ROI: (top, bottom, left, right)

One can then get the dark-corrected signal by subtracting the average dark value from the signal value.
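As a minimal sketch of that correction, assuming the ROIs are (top, bottom, left, right) array-index bounds (the function and variable names here are mine, not from any psana/AMI API):

```python
import numpy as np

def dark_corrected_signal(image, signal_roi, dark_roi):
    """Sum over the signal ROI minus the mean dark level
    scaled to the signal ROI's area."""
    t, b, l, r = signal_roi
    sig = image[t:b, l:r]
    t, b, l, r = dark_roi
    dark_mean = image[t:b, l:r].mean()
    return sig.sum() - dark_mean * sig.size

# Fake CCD frame: flat dark level of 1.0 plus a signal patch of +5.0.
img = np.ones((100, 100))
img[10:20, 10:20] += 5.0
val = dark_corrected_signal(img, (10, 20, 10, 20), (50, 60, 50, 60))
# dark mean is 1.0 over the signal ROI's 100 pixels, so val = 500.0
```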

For Acqiris:
  - Name of Acqiris (e.g., "Acq01", "Acq02")
  - Channel number of Acqiris (e.g., 1, 2, 3)
  - "Signal" part of Acqiris trace (start, end)
  - "Dark" part of Acqiris trace (start, end)


For GMD:
  One doesn't need any additional configuration parameters. Just use the following psana Python (or similar) to get the GMD value for each event:

import psana

# Create a detector interface for the gas monitor detector
gmd = psana.Detector('GMD')
# Get a psana event "evt" here
gmd_data = gmd.get(evt)
# Relative pulse energy for this shot
gmd_value = gmd_data.relativeEnergyPerPulse()

It would be nice if one could input these parameters on a separate tab from the plots, and the plots would then update accordingly. In a more complex but nicer case, one could also look at an image/plot of the CCD/Acqiris trace and select the ROIs graphically, but I'm guessing this may take significantly more effort, and we can figure out these ROIs using AMI anyway.

I would set all configuration parameters to be persistent and loaded/saved in the login home directory. I don't think the visibility of the configuration file is particularly important.

Best,
Dan
