...

  • To learn how to run the analysis scripts, which we think are in /cds/data/psdm/xpp/xppx49520/scratch/ffb/smalldata_tools/
  • Yanwen writes: "an example will be run 622, experiment xppx49520. most runs in xppx49520 are usable."
  • Another similar experiment, xpplx9221, is on S3DF
  • Look at the smalldata (smd) droplet code in the ARP, around line 278
  • get_droplet_params: old psana is in ADU; the noise RMS is 3 ADU (0.15 keV), and we use 5 times that for the threshold
  • don't need the precise geometry of the four detectors
  • 4 epix100 detectors
  • psana used to read out ADUs but now reports keV; the threshold in keV is ~9.9(?)
  • XPCS data
  • The detector is put very far back
  • Each detector covers a very small solid angle, so all pixels see about "the same" signal
    • Sometimes you have to zoom in to an ROI so that all pixels look the same
  • threshold is critical
  • Bad-pixel masking is done by psana; an additional mask is used to get rid of high-intensity regions
  • Each pixel should show up equally: if a pixel "stands out" with too many hits, mask it out as a hot pixel (see the hot-pixel sketch after this list)
  • We also need to mask out cosmic rays and radiation background from trace elements in the concrete (at higher energies); this could be left to a second offline stage
  • Pixels with connecting borders form a droplet. The production code doesn't use scipy's label, though we're not sure why (see the dropletizing sketch after this list)
  • There is a fifth detector, but it has too many photons(?)
  • From each droplet, assign a number of photons
  • Use a "greedy guess" for assigning photons (see the photon-assignment sketch after this list)
  • Different algorithms have different biases, so the bias has to be "calibrated"
    • There are two main ways to calibrate:
      • Find a speckle pattern with known contrast, using an unfocused beam (100s of microns, vs. the usual 1 micron), and use that to measure the bias
      • The second way cannot be done per frame. We are measuring a change on a picosecond timescale, so we record a sequence of unrelated speckle patterns; it turns out that adding two subsequent frames (long timescale) halves the contrast beta. We have to find two frames of similar intensity to add, and they need to be added together before photonization, so we can't data-reduce them first
      • The bias changes as a function of temperature but is otherwise pretty constant: it is a characteristic of the algorithm and the detector, and depends on how the charge-cloud size compares to the detector's pixel size
      • We also have simulated data where the ground truth is known
    • We would like to label calibration runs the way dark runs are labeled
  • Everything up to this point is more generally interesting, not just for XPCS
  • Goal: get the contrast beta from the ratio of 1-photon and 2-photon droplets (a contrast-estimator sketch follows this list)
    • Some corrections use the pulse-to-pulse intensity from an I0 measurement (e.g. SASE pulse intensity)
    • The I0 correction can be deferred to offline (not the DRP)
  • Photon occupancy is ~10^-4 per pixel for XPCS; XES is larger. We also need 2-photon events to get beta
    • Droplets alone might be enough (we may not need photonizing?)
    • We need the location of each pixel
    • Save (i, j, intensity); the droplet label doesn't need to be stored, since it can be recomputed from the (i, j) coordinates (see the sparsification sketch after the "Two main tasks" list)
  • Get one number for the contrast (beta) and compare it across different samples under different conditions
    • This needs ~0.5 million frames (about 1 hour of data taking)
  • An example value: beta = -0.038 ± 0.007
  • We want to see a "trend" in beta as a function of tau (the separation of the two pulses)
    • We can also look at beta as a function of Q
    • Tau is set by the accelerator: it doesn't vary shot to shot or come from the path-length change of a mirror
  • Watch for a count-rate dependence (see the binning sketch after this list)
    • Bin frames according to intensity and measure beta in each bin
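
A minimal sketch of the hot-pixel masking idea above: accumulate per-pixel hit counts over many frames and mask pixels that "stand out". The median/MAD cut and the 5-sigma value are assumptions for illustration, not the production criterion.

    import numpy as np

    def hot_pixel_mask(hit_counts, nsigma=5.0):
        """Keep pixels whose accumulated hit count does not stand out.

        hit_counts: per-pixel hits summed over many frames.
        The robust median/MAD cut is an assumption; the notes only say
        that pixels with too many hits should be masked as hot.
        """
        med = np.median(hit_counts)
        mad = np.median(np.abs(hit_counts - med)) + 1e-9  # avoid divide-by-zero
        return hit_counts < med + nsigma * 1.4826 * mad   # True = keep pixel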
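
A sketch of two-threshold dropletizing, assuming "two-threshold" means a low inclusion cut plus a higher seed cut (hysteresis-style); that reading and both threshold values are assumptions. The notes say the production code avoids scipy's label (we don't know why), so scipy.ndimage.label is used here purely to illustrate border-connected grouping.

    import numpy as np
    from scipy import ndimage

    def dropletize(img_kev, mask, low_kev=0.75, seed_kev=2.0):
        """Group border-connected pixels above low_kev into droplets;
        keep a droplet only if one pixel also exceeds seed_kev.
        low_kev ~ 5 x 0.15 keV noise RMS per the notes."""
        above = (img_kev > low_kev) & mask
        four_conn = np.array([[0, 1, 0],
                              [1, 1, 1],
                              [0, 1, 0]])  # shared borders only, not corners
        labels, n = ndimage.label(above, structure=four_conn)
        if n == 0:
            return labels, np.array([])
        idx = np.arange(1, n + 1)
        peak = ndimage.maximum(img_kev, labels, index=idx)  # per-droplet max
        keep = idx[peak > seed_kev]
        labels[~np.isin(labels, keep)] = 0                  # drop seedless droplets
        sums = ndimage.sum(img_kev, labels, index=keep)     # droplet energies
        return labels, sums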
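
A crude illustration of the "greedy guess" photon assignment: estimate the photon count from the droplet's total energy, then peel photons off the brightest pixels one at a time. The real algorithm (per the notes) assumes a photon's charge is split across ~2 pixels and has its own handling of the leftover charge; those details are not reproduced here.

    import numpy as np

    def greedy_photons(droplet_pixels, photon_kev):
        """Assign integer photon positions within one droplet.

        droplet_pixels: list of (i, j, energy_keV) tuples.
        photon_kev: beam photon energy.
        """
        energies = np.array([p[2] for p in droplet_pixels], dtype=float)
        nphot = int(round(energies.sum() / photon_kev))
        photons = []
        for _ in range(nphot):
            k = int(np.argmax(energies))   # brightest remaining pixel
            photons.append((droplet_pixels[k][0], droplet_pixels[k][1]))
            energies[k] -= photon_kev      # remove one photon's worth of charge
        return photons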
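
For the contrast itself, one standard low-occupancy estimator relates beta to the photon-count probabilities via the negative-binomial relation beta = 2*p0*p2/p1^2 - 1. The notes only say beta comes from the ratio of 1- and 2-photon droplets, so treat this particular formula as an assumption.

    import numpy as np

    def speckle_contrast(photon_counts):
        """Estimate beta from per-pixel photon counts (0, 1, 2, ...).

        Low-occupancy negative-binomial estimator:
        beta = 2 * p0 * p2 / p1**2 - 1.
        """
        k = np.asarray(photon_counts).ravel()
        p0 = np.mean(k == 0)
        p1 = np.mean(k == 1)
        p2 = np.mean(k == 2)
        return 2.0 * p0 * p2 / p1**2 - 1.0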
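
And a sketch of the count-rate check: bin frames by mean occupancy and estimate beta per bin, reusing speckle_contrast from the sketch above. Equal-population (quantile) binning is an assumption.

    import numpy as np

    def beta_vs_intensity(frames, nbins=8):
        """frames: iterable of per-pixel photon-count maps, one per shot."""
        frames = [np.asarray(f) for f in frames]
        kbar = np.array([f.mean() for f in frames])  # mean occupancy per frame
        edges = np.quantile(kbar, np.linspace(0.0, 1.0, nbins + 1))
        which = np.clip(np.searchsorted(edges, kbar, side="right") - 1,
                        0, nbins - 1)
        betas = np.full(nbins, np.nan)
        for b in range(nbins):
            pooled = [f.ravel() for f, w in zip(frames, which) if w == b]
            if pooled:
                betas[b] = speckle_contrast(np.concatenate(pooled))
        return edges, betas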

Two main tasks:

  • Implement the two-threshold dropletizing algorithm for the DRP, including a data format for (i, j, intensity) on a multi-panel detector (see the sparsification sketch below)
    • GPUs could still help with the thresholding in the DRP
    • The data needs to be calibrated
  • How much GPU/CPU resource do we need to do the 4 Mpx, 25 kHz offline analysis with the above reduced data?
    • This includes photonizing and rejection of cosmic rays and other background radiation
    • The code is "loopy" Python; Silke has a version with fewer loops
    • The greedy-guess algorithm assumes a photon is in 2 pixels (like Chuck's idea for 2-pixel photons)
      • It is a little different in how it handles the last small pieces of photons
      • It avoids time-consuming Gaussian fits: maybe neural nets could do this faster?
    • Could consider a neural net
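
A sketch of the reduced data format mentioned above, assuming one record array per panel with (panel, i, j, intensity) fields; the field names, dtypes, and threshold are illustrative only. Per the notes, the droplet label is not stored because it can be recomputed offline from pixel adjacency.

    import numpy as np

    def sparsify_panel(panel_idx, img_kev, mask, thresh_kev=0.75):
        """Reduce one detector panel to sparse above-threshold records."""
        ii, jj = np.nonzero((img_kev > thresh_kev) & mask)
        return np.rec.fromarrays(
            [np.full(ii.size, panel_idx, dtype=np.uint8),
             ii.astype(np.uint16),
             jj.astype(np.uint16),
             img_kev[ii, jj].astype(np.float32)],
            names="panel,i,j,intensity")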

Meeting with Diling 2021

Nov. 12, 2021 and Nov. 18, 2021

...