...

  • time-resolved diffraction
    • signal concentrated in a smaller area (like Bragg spots, but can be a cluster: spots with satellites; 200x200 or 400x400 pixels, and the ROI is fixed)
    • look at time-evolution
    • can be a single pixel as a function of time
    • users are interested in the time-evolution of every pixel in the ROI
    • each event has a delay time, delay correction, I0, beam position, beam intensity, and other per-shot machine parameters (ebeam and gasdet BLD)
    • a 400x400 ROI would be ~8GB per second (see the rate check after this list)
    • read out the whole detector just so we don't damage the detector, but the data could be thrown away after viewing
    • save the whole detector at a low rate for determining the ROI, then save only the ROI at high rates
  • time-resolved diffuse scattering (evolution of radial integration over time)
    • need the whole image
    • currently do cube
      • risky: time-calibration, filtering, and I0 (which can be done in many different ways) can be error-prone
        • currently XPP gets this right "the first time" (after initial setup)
        • Diling and Vincent are confident that we can make the cube work "the first time" (after tuning)
          • need online visualization (AMI-style)
      • can't afford to do angular integration
        • Vincent writes about the reason for this: The diffuse scattering signal generally does not have cylindrical symmetry, so azimuthal integration is not appropriate for it
        • Diling writes about the reason for this:  Right, the pie slice was an example I raised for Tim’s liquid scattering analysis, generally does not apply to material science.
      • I0 from wave8 or hsd or another area detector
        • DON'T normalize shot-to-shot
        • hypercube: image, I0, time-bins, electric-field bins (voltage, e.g. with wave8) and others (aim for 10000 total bins)
      • All this happens at 25kHz (not 1MHz)
    • another option: angular integration ("pie slices"), which wasn't preferred by the XPP scientists? Silke is surprised they didn't like this approach; maybe it depends on the physics?
    • 4Mpx × 1000 bins × 4 bytes (float32) = 16GB
    • binning: need the calibrated time-tool edge from the Piranha, and need coarse timing per shot from the delay-stage encoder
      • hope to use the same solution as RIX: interpolated absolute encoder (100Hz) or the Axilon MHz relative encoder (Renishaw?)
    • to get error bars, may need to store a second cube with the image-sum-of-squares for each time-bin (also integrated over shots); see the cube sketch after this list
  • peakfinding for "speckle visibility spectroscopy"
    • "speckles": low-intensity XPCS where droplets (synonymous with peak-finding?) are used
    • talk to Yanwen/Vincent to get a high-occupancy XPP/XCS dataset (a low-intensity XPCS where droplets are used).
    • eventually using sparkpix photon-assignment: either 0 (throw away) or photon locations
      • getting i,j,value from sparkpix
      • need to tune sparkpix "thresholds" first
    • occupancy is 1% or less, which implies ~2GB/s for a 4Mpx sparkpix at 25kHz
    • can be done with epixUHR or sparkpix, so we need software photon-finding for epixUHR
      • photon finding: threshold, then find droplets (see the photon-finding sketch after this list)
    • need to "count photons" within each peak (which pixels have which photons)
      • this could be done as a second step offline?
      • could be done as one step in Cong's neural net ("hydranet")
    • Yanwen writes: "an example will be run 622, experiment xppx49520. most runs in xppx49520 are usable."
    • Analysis code appears to be here: /cds/data/psdm/xpp/xppx49520/scratch/ffb/smalldata_tools/
  • auto-correlation (XPCS within image)
    • save an ROI after an auto-correlation (i.e. of the calibrated image); see the autocorrelation sketch after this list
    • low priority
    • sparse images
    • complexity: no single computer sees the whole detector, which is a big problem
      • need to try libSZ or peak finding?
    • could do it at high intensity
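
A quick check of the data-rate arithmetic quoted in the list above. The bytes-per-pixel and bytes-per-photon values are assumptions chosen to reproduce the quoted numbers, not figures taken from the notes:

```python
# Back-of-envelope checks for the rates quoted above.
roi_rate  = 400 * 400 * 2 * 25_000     # 16-bit 400x400 ROI at 25 kHz -> 8.0 GB/s
cube_size = 4_000_000 * 1000 * 4       # 4 Mpx x 1000 time-bins, float32 -> 16 GB
photons   = 4_000_000 * 0.01 * 25_000  # 1% occupancy at 25 kHz -> 1e9 photons/s
sparse    = photons * 2                # quoted 2 GB/s implies ~2 bytes/photon
print(roi_rate, cube_size, photons, sparse)
```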
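The cube described above is a binned accumulation over shots. A minimal sketch, assuming per-shot `image`, `i0`, and corrected `delay` values are available; the shapes, bin edges, and function names are illustrative, not the actual smalldata_tools or AMI interface. It keeps the image-sum-of-squares per bin so error bars can be derived afterwards, and normalizes by the summed I0 per bin rather than shot-to-shot. A hypercube would simply add more binning axes (I0 bins, electric-field bins, etc., aiming for ~10000 bins total).

```python
import numpy as np

# Illustrative shapes and bin edges, not the experiment's real values.
NY, NX = 400, 400                          # detector ROI size
bin_edges = np.linspace(-1.0, 9.0, 101)    # 100 delay bins (ps)
nbins = len(bin_edges) - 1

cube    = np.zeros((nbins, NY, NX), np.float32)  # per-bin image sum
cube_sq = np.zeros((nbins, NY, NX), np.float32)  # per-bin sum of squares
i0_sum  = np.zeros(nbins)                        # per-bin summed I0
counts  = np.zeros(nbins, np.int64)              # shots per bin

def accumulate(image, i0, delay):
    """Add one shot; delay = coarse encoder value + time-tool correction."""
    b = np.searchsorted(bin_edges, delay) - 1
    if 0 <= b < nbins:
        cube[b]    += image
        cube_sq[b] += image.astype(np.float32) ** 2
        i0_sum[b]  += i0
        counts[b]  += 1

def finalize():
    """I0-normalize per bin (not shot-to-shot) and compute error bars."""
    n    = np.maximum(counts, 1)[:, None, None]
    mean = cube / n
    var  = cube_sq / n - mean**2               # per-pixel variance in each bin
    err  = np.sqrt(np.maximum(var, 0) / n)     # standard error of the bin mean
    norm = cube / np.maximum(i0_sum, 1e-12)[:, None, None]
    return norm, err
```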
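For the software photon-finding step above (threshold, then find droplets), a minimal sketch of the sparse (i, j, value) reduction for an epixUHR-style calibrated image. The threshold, the 4-connectivity, and the use of scipy.ndimage.label are illustrative assumptions; the Dec. 2023 notes below say the production droplet code avoids scipy's label function, for reasons not recorded here.

```python
import numpy as np
from scipy import ndimage

def find_droplets(image, threshold):
    """Threshold a calibrated (keV) image and group connected pixels into
    droplets.  Returns sparse per-pixel (i, j) positions, their values, a
    droplet id per pixel, and the droplet count.  Connectivity and output
    layout are illustrative choices."""
    hot = image > threshold
    labels, ndroplets = ndimage.label(hot)   # 4-connectivity by default
    i, j = np.nonzero(hot)
    return np.column_stack([i, j]), image[i, j], labels[i, j], ndroplets
```

At ≤1% occupancy this reduces a 4Mpx frame to roughly 40k (i, j, value) triples per shot, which is where the ~2GB/s figure above comes from.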
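For the auto-correlation item, a minimal FFT-based sketch of the spatial autocorrelation of one calibrated ROI. This is the generic Wiener-Khinchin form, not necessarily the estimator the XPCS team uses, and it assumes the ROI fits on a single node, which sidesteps the "no single computer sees the whole detector" problem noted above.

```python
import numpy as np

def spatial_autocorr(roi):
    """Circular spatial autocorrelation of one calibrated ROI via FFT
    (Wiener-Khinchin), normalized so the zero-lag value is 1."""
    f  = np.fft.fft2(roi - roi.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    ac /= ac.flat[0]                 # zero lag sits at [0, 0] before shifting
    return np.fft.fftshift(ac)       # move zero lag to the center
```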

Analysis Meeting Dec. 2023

Dec. 18, 2023

Yanwen, Vincent, Valerio, Cong, Stefano, Fred, cpo

  • Goal: learn how to run the analysis scripts, which we think are in /cds/data/psdm/xpp/xppx49520/scratch/ffb/smalldata_tools/
  • Yanwen writes: "an example will be run 622, experiment xppx49520. most runs in xppx49520 are usable."
  • another similar experiment, xpplx9221, on S3DF
  • look at the smd droplet code in ARP (line 278)
  • get_droplet_params: old psana works in ADU; the RMS noise is 3 ADU (0.15keV), so use 5 times that for the threshold
  • don't need the precise geometry of the four detectors
  • 4 epix100 detectors
  • used to read ADUs, but now psana works in keV; the threshold in keV is ~9.9?
  • XPCS data
  • the detector is placed very far back
  • each detector covers a very small solid angle, so all pixels are about "the same"
    • sometimes you have to zoom in to an ROI so that all pixels look the same
  • threshold is critical
  • bad pixels are handled by psana; a mask is used to get rid of high-intensity regions
  • each pixel should show up equally: if a pixel "stands out" with too many hits, mask it out as a hot pixel (see the masking sketch after these notes)
  • also need to mask out cosmic rays and radiation background from trace elements in the concrete (at higher energies); could leave this to a second offline stage
  • pixels with connected borders form a droplet.  They don't use scipy.ndimage.label; not sure why.
  • there is a fifth detector, but it has too many photons?
  • from each droplet, assign a number of photons
  • use a "greedy guess" for assigning photons (see the photonizer sketch after these notes)
  • different algorithms have different biases, have to "calibrate" the bias
    • two main ways to calibrate:
      • find a speckle pattern with known contrast using an unfocused beam (100s of microns, vs. the usual 1 micron); use that to measure the bias
      • the second way cannot be done per-frame (it measures a change on a picosecond timescale): record a sequence of unrelated speckle patterns.  It turns out that adding two subsequent frames (long timescale) halves the contrast beta.  Have to find two frames of similar intensity to add, and they must be added before photonization, so those frames can't be data-reduced.
      • the bias changes as a function of temperature, but other than that it's pretty constant: a characteristic of the algorithm and the detector.  It depends on how the charge-cloud size compares to the pixel size of the detector.
      • also have simulated data where ground truth is known
    • would like to label calibration runs like dark runs
  • everything up until now is more generally interesting: not just XPCS
  • goal: get the contrast beta from the ratio of 1-photon and 2-photon droplets
    • some corrections for the pulse-to-pulse intensity using an I0 measurement (e.g. SASE pulse intensity)
    • can defer I0 correction to offline (not drp)
  • photon occupancy is 10^-4 (per pixel) for XPCS.  XES is larger.  Also need 2-photon events to get beta.
    • droplet might be enough (don't need photonizing?)
    • need the location of each pixel
    • save i,j,intensity (don't need the droplet label; it can be recomputed from i,j)
  • get one number for the contrast (beta) to compare different samples under different conditions
    • need 0.5 million frames (~1 hour of data taking)
  • beta is -0.038 ± 0.007
  • want to see a "trend" in beta as a function of tau (separation of two pulses)
    • can also look as a function of Q
    • get tau from the accelerator (it doesn't vary shot-to-shot) or from a path-length change of a mirror
  • watch for count-rate dependence
    • bin shots by intensity and measure beta for each bin
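
A minimal sketch of the hot-pixel rule described above (a pixel that stands out with too many hits gets masked). The median/MAD outlier test and the 5-sigma cut are illustrative choices, not the production criterion:

```python
import numpy as np

def hot_pixel_mask(hit_counts, nsigma=5.0):
    """Keep pixels whose hit count over many frames is consistent with the
    rest; every pixel should show up roughly equally, so outliers are hot.
    The median/MAD test and nsigma=5 are illustrative assumptions."""
    med = np.median(hit_counts)
    mad = np.median(np.abs(hit_counts - med)) + 1e-12  # avoid divide-by-zero
    robust_sigma = 1.4826 * mad       # MAD -> sigma assuming Gaussian stats
    return np.abs(hit_counts - med) < nsigma * robust_sigma  # True = keep
```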
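A minimal sketch combining the "greedy guess" photon assignment with the goal of getting the contrast beta from the ratio of 1-photon and 2-photon droplets. The greedy rule (repeatedly place a photon on the brightest remaining pixel) and the negative-binomial low-count estimator beta ≈ 2·P2/(P1·kbar) − 1 are common choices in the XPCS literature, used here as assumptions rather than the exact smalldata_tools algorithm:

```python
import numpy as np

def greedy_photonize(values, positions, e_photon):
    """Place n = round(total/e_photon) photons in one droplet by repeatedly
    putting a photon on the pixel with the most remaining charge.  A
    simplified stand-in for the "greedy guess" in the notes."""
    remaining = np.asarray(values, dtype=float).copy()
    n = int(round(remaining.sum() / e_photon))
    photons = []
    for _ in range(n):
        k = int(np.argmax(remaining))
        photons.append(positions[k])
        remaining[k] -= e_photon   # may go slightly negative; greedy ignores it
    return photons

def contrast_from_counts(n1, n2, npix):
    """Estimate contrast beta from 1- and 2-photon droplet counts summed over
    many frames.  In the negative-binomial low-count limit,
        P2/P1 ~ (kbar/2) * (1 + beta)  =>  beta = 2*P2 / (P1*kbar) - 1,
    with kbar ~ (n1 + 2*n2) / npix at low occupancy.  This standard estimator
    is an assumption here, not necessarily the smalldata_tools one."""
    p1, p2 = n1 / npix, n2 / npix
    kbar = (n1 + 2 * n2) / npix
    return 2 * p2 / (p1 * kbar) - 1
```

Statistical noise in this estimator is why a small negative beta such as the -0.038 ± 0.007 quoted above can come out of the fit.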

Meeting with Diling 2021

Nov. 12, 2021 and Nov. 18, 2021

...