
Meeting with XPP 2023

Diling, Vincent, Mona, Valerio, Takahiro, Stefano, Cong, Chris

Dec. 7, 2023

Goal: identify:

  • precise LCLS1 experiments that can be used to test LCLS-II-HE data reduction algorithms
  • existing software that does the data analysis
  • scientist contacts who can approve the results.

Types of XPP-HE experiments in priority order:

  • time-resolved diffraction
    • signal concentrated in a smaller area (like Bragg spots, but can be a cluster: spots with satellites; 200x200 or 400x400 pixels, ROI is fixed)
    • look at time-evolution
    • can be a single pixel as a function of time
    • user interested in time-evolution of every pixel in ROI.
    • each event has delay time, delay correction, and I0, beam-position, beam-intensity, and other per-shot machine parameters (ebeam and gasdet BLD)
    • 400x400 ROI would be 8GB per second.
    • read out the whole detector (to avoid damaging it), but could throw away the data after viewing
    • save whole detector at low rate for determining ROI and then only save ROI at high rates
  • time-resolved diffuse scattering (evolution of radial integration over time)
    • need the whole image
    • currently do cube
      • risky: time calibration, filtering, and I0 (which can be done in many different ways) can be error-prone
        • currently XPP gets this right "the first time" (after initial setup)
        • Diling and Vincent are confident that we can make the cube work "the first time" (after tuning)
          • need online visualization (AMI-style)
      • can't afford to do angular integration
        • Vincent writes about the reason for this: The diffuse scattering signal generally does not have cylindrical symmetry, so azimuthal integration is not appropriate for it
        • Diling writes about the reason for this:  Right, the pie slice was an example I raised for Tim’s liquid scattering analysis, generally does not apply to material science.
      • I0 from wave8 or hsd or another area detector
        • DON'T normalize shot-to-shot
        • hypercube: image, I0, time-bins, electric-field bins (voltage, e.g. with wave8) and others (aim for 10000 total bins)
      • All this happens at 25kHz (not 1MHz)
    • another option: angular integration ("pie slices") (wasn't preferred by XPP scientists? Silke is surprised they didn't like this approach, maybe depends on physics?)
    • 4Mpx*1000=16GB (float32 data type)
    • binning: need the piranha time tool calibrated edge, and need coarse timing per shot: delay-stage encoder
      • hope to use the same solution as RIX: interpolated absolute encoder (100Hz) or Axilon MHz relative encoder (Renishaw?)
    • to get error bars may need to store a second cube with image-sum-of-squares for each time-bin (also integrated over shots)
  • peakfinding for "speckle visibility spectroscopy"
    • "speckles" 
    • low-intensity XPCS where droplets (synonymous with peak-finding?) are used
    • talk to Yanwen/Vincent to get a high-occupancy XPP/XCS dataset (a low-intensity XPCS where droplets are used).
    • eventually using sparkpix photon-assignment: either 0 (throw away) or photon locations
      • getting I,j,value from sparkpix
      • need to tune sparkpix "thresholds" first
    • occupancy is 1% or less, implies 2GB/s with 4Mpx 25kHz sparkpix
    • can be done with epixUHR or sparkpix, so we need software photon-finding for epixUHR
      • photon finding: threshold, find droplet
    • need to "count photons" within each peak (which pixels have which photons)
      • this could be done as a second step offline?
      • could be done as one step in Cong's neural net ("hydranet")
    • Yanwen writes: "an example will be run 622, experiment xppx49520. most runs in xppx49520 are usable."
    • Analysis code appears to be here: /cds/data/psdm/xpp/xppx49520/scratch/ffb/smalldata_tools/
  • auto-correlation (XPCS within image)
    • save an ROI after an auto-correlation (i.e. calibrated image)
    • low priority
    • sparse images
    • complexity: no single computer sees the whole detector.  a big problem
      • need to try libSZ or peak finding?
    • could do it at high intensity
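The binned "cube" described above (per-time-bin image sums, plus a second sum-of-squares cube for error bars, with I0 accumulated per bin rather than normalized shot-to-shot) can be sketched roughly as follows. The event-tuple layout, bin edges, and filtering are assumptions for illustration, not the production implementation:

```python
# Hedged sketch: accumulate a per-time-bin "cube" with sum and sum-of-squares
# images so mean and error bars can be computed offline.  The event layout
# (delay, I0, image) is hypothetical; a real implementation would read events
# from psana and take I0/delay from per-shot BLD and timetool data.
import numpy as np

def make_cube(events, n_bins, t_min, t_max, img_shape):
    """events: iterable of (delay, i0, image) tuples (hypothetical layout)."""
    sums = np.zeros((n_bins,) + img_shape, dtype=np.float64)
    sumsq = np.zeros_like(sums)          # second cube for error bars
    i0_tot = np.zeros(n_bins)
    counts = np.zeros(n_bins, dtype=np.int64)
    edges = np.linspace(t_min, t_max, n_bins + 1)
    for delay, i0, img in events:
        b = np.searchsorted(edges, delay, side="right") - 1
        if b < 0 or b >= n_bins or i0 <= 0:
            continue                     # drop out-of-range or bad-I0 shots
        sums[b] += img
        sumsq[b] += img.astype(np.float64) ** 2
        i0_tot[b] += i0
        counts[b] += 1
    return sums, sumsq, i0_tot, counts

# offline: mean image per bin normalized by accumulated I0 (NOT shot-to-shot):
#   mean = sums[b] / i0_tot[b]
# per-pixel standard error from the two cubes:
#   sqrt((sumsq[b]/N - (sums[b]/N)**2) / N)  with N = counts[b]
```

Extending the delay axis to a hypercube (electric-field bins, etc., aiming for ~10000 total bins) would just replace the single bin index with a tuple of indices.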

Analysis Meeting Dec. 2023

Dec. 18, 2023

Yanwen, Vincent, Valerio, Cong, Stefano, Fred, cpo

  • To learn how to run the analysis scripts which we think are in /cds/data/psdm/xpp/xppx49520/scratch/ffb/smalldata_tools/
  • Yanwen writes: "an example will be run 622, experiment xppx49520. most runs in xppx49520 are usable."
  • Another similar expt xpplx9221 in s3df
  • look at smd droplet code in ARP
  • line 278
  • get_droplet_params: old psana is in ADU, RMS is 3 (0.15keV) use 5 times that for threshold
  • don't need the precise geometry of the four detectors
  • 4 epix100 detectors
  • used to read ADUs but now psana does keV.  threshold in keV is ~9.9?
  • XPCS data
  • the detector is placed very far back
  • each detector covers a very small solid-angle, so all pixels about "the same"
    • sometimes you have to zoom in to an ROI so that all pixels look the same
  • threshold is critical
  • bad pixels done by psana.  mask is used to get rid of high-intensity regions
  • each pixel should show up equally: if a pixel "stands out" with too many hits mask it out as a hot pixel
  • also need to mask out cosmic rays, and radiation background from trace elements concrete (at higher energies).  could leave to a second offline stage
  • pixels with connecting borders form a droplet.  don't use scipy.label, not sure why.
  • there is a fifth detector, but has too many photons?
  • from droplet, assign a number of photons
  • use "greedy guess" for assigning photons
  • different algorithms have different biases, have to "calibrate" the bias
    • two main ways to calibrate:
      • find a speckle pattern with known contrast, use unfocused beam (100s micron, vs usual 1 micron).  Use that to measure bias.
      • the second way cannot be done per-frame: when measuring a change on picosecond timescales, measure a sequence of unrelated speckle patterns.  it turns out that adding two subsequent frames (long timescale) halves the "contrast-beta".  have to find two frames of similar intensity to add, and they must be added together before photonization, so they can't be data-reduced.
      • bias changes as function of temperature but other than that it's pretty constant: a characteristic of the algorithm and the detector.  depends on how the charge-cloud size compares to the pixel size of the detector
      • also have simulated data where ground truth is known
    • would like to label calibration runs like dark runs
  • everything up until now is more generally interesting: not just XPCS
  • goal: get the contrast Beta from the ratio of 1-photon and 2-photon droplets
    • some corrections from the pulse-to-pulse intensity using the I0 measurement (e.g. SASE pulse intensity)
    • can defer I0 correction to offline (not drp)
  • photon occupancy is 10^-4 (per pixel) for XPCS.  XES is larger.  Also need 2-photon events to get beta.
    • droplet might be enough (don't need photonizing?)
    • need the location of each pixel
    • save i,j,intensity (don't need the droplet-label, can be computed from i,j)
  • get one number for a contrast (beta) compare to different samples under different conditions.
    • need 0.5 million frames (~1 hour of data taking)
  • beta is -0.038 +- 0.007.
  • want to see a "trend" in beta as a function of tau (separation of two pulses)
    • can also look as a function of Q
    • get tau from accelerator: doesn't vary shot to shot or from path-length change of a mirror.
  • watch for count-rate dependence
    • bin according to different intensity and measure beta
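The droplet/photonization chain discussed above can be sketched roughly as below. Note the meeting says the production code does not use scipy's label (reason unknown); here scipy.ndimage.label merely stands in for the connected-component step. The threshold, the photon energy, the "rounded droplet energy" photon guess, and the low-flux contrast estimator (beta ≈ 2·p2/p1² − 1 from the one- and two-photon probabilities) are illustrative assumptions, not the experiment's calibration:

```python
# Hedged sketch of threshold -> droplet -> photon count -> contrast beta.
# scipy.ndimage.label stands in for the connected-component step (the notes
# say production code does NOT use it); threshold and photon energy are
# illustrative values, not a real detector calibration.
import numpy as np
from scipy import ndimage

def find_droplets(img_kev, threshold_kev=0.75):
    """Label connected above-threshold pixels; return label image and count."""
    mask = img_kev > threshold_kev
    labels, n = ndimage.label(mask)      # 4-connectivity by default
    return labels, n

def photonize(img_kev, labels, n, photon_kev=9.5):
    """Greedy guess (simplified): photons per droplet = round(E / E_photon)."""
    energies = ndimage.sum(img_kev, labels, index=np.arange(1, n + 1))
    return np.rint(energies / photon_kev).astype(int)

def beta_low_flux(photons_per_droplet, n_pixels):
    """One common low-flux visibility estimator: beta ~ 2*p2/p1**2 - 1."""
    p1 = np.count_nonzero(photons_per_droplet == 1) / n_pixels
    p2 = np.count_nonzero(photons_per_droplet == 2) / n_pixels
    return 2.0 * p2 / p1**2 - 1.0
```

Saving only (i, j, value) for above-threshold pixels, as in the notes, is enough to rebuild the droplets offline, so the label image itself need not be stored.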

Meeting with Diling 2021

Nov. 12, 2021 and Nov. 18, 2021

Detectors

Nov. 12, 2021


2 hutch wave8's
1 user wave8
minimum 4, ideally 8 hsd channels (4 cards)
can we run 4 channels at 3.2GHz? yes
10kHz full hsd wf's
worried about ringing signal peaks @1MHz (answer: set number of samples)
1 epix10khr 2M
4 or 6 small sparkpix 50um px (thresholding per pixel, 1% occupancy) or few epix10khr
low rate (kHz for small roi) 1Mpixel optical (opal/zyla replacement)
noise performance target: https://axiomoptics.com/low-light-cameras/orca-quest-qcmos-camera/
piranha camera for timetool; maybe don't need shot-to-shot? 1kHz? low rate is a problem for MPI.

Slack question on running hsd's at higher rate than area detectors:

To make sure I’m clear: you might run hsd’s at 1MHz?
Diling Zhu  5:41 PM
if the beam rate is 1MHz, i can definitely think of scenarios, assuming
area detectors can integrate

Data Reduction

Nov. 18, 2021

hope pump-probe timing is stable enough that we can average shots in the drp (1000)

expt types:
liquid chemistry
- normalize image using the beamline i0 (wave8, or small epix, or a big epix)
- cube (with binned images or ROI) potentially with 2Mpx by 100 or 200 bins
- bin w.r.t pump-probe from timetool, but could also have 2 laser pumps, and
  photon-energy (hypercube)
- optional image processing before binning: pie-slicing (S(q) traces), droplet,
  projection to get spectrum
  with thresholding (better signal to noise), fit photon positions
- only filter on mono using wave8 and beam position (50% now, less in future)
- no per-shot reading of mono position
- less interested in redundant pie-slicing and cubes than Tim van Driel (more
  spectroscopy, less scattering, since xpp is monochromatic (lower flux)).
- 5kHz in 2026, 25kHz in 2028
- if time stability is good we lose the need for rate: write out summed images
  at lower rate: kHz to 10's of Hz.
  o feedback from the timetool could help with that
- if the hypercube gets too many bins it can be turned into a 1D image with
  angular integrations
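The pie-slicing (S(q) traces) mentioned above can be sketched minimally as below, assuming a known beam center; the wedge angles, bin count, and the use of pixel radius as a stand-in for q are hypothetical simplifications:

```python
# Minimal pie-slice sketch: mean intensity per radial bin (q proxy) within
# one azimuthal wedge.  Beam center, wedge angles, and bin count are
# hypothetical; a real analysis would convert radius to q via the geometry.
import numpy as np

def pie_slice(img, center, n_rbins, phi_min, phi_max):
    yy, xx = np.indices(img.shape)
    dy, dx = yy - center[0], xx - center[1]
    r = np.hypot(dx, dy)                       # pixel radius from beam center
    phi = np.arctan2(dy, dx)
    wedge = (phi >= phi_min) & (phi < phi_max) # keep only this pie slice
    edges = np.linspace(0, r[wedge].max(), n_rbins + 1)
    idx = np.clip(np.digitize(r[wedge], edges) - 1, 0, n_rbins - 1)
    sums = np.bincount(idx, weights=img[wedge], minlength=n_rbins)
    counts = np.bincount(idx, minlength=n_rbins)
    return sums / np.maximum(counts, 1)        # mean intensity per radial bin
```

One trace per wedge keeps the output small enough to store per shot or per cube bin, which is the point of the data-reduction option.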

emission spectroscopy:
- projection to get spectrum (perhaps not along a line: parabola?)
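The projection-to-spectrum step can be sketched as below, with an optional per-row shift to straighten a curved (e.g. parabolic) isoenergy line before summing; the integer-shift model and the wraparound behavior of np.roll are simplifications for illustration:

```python
# Hedged sketch of "projection to get spectrum": sum the image along the
# non-dispersive axis, optionally rolling each row first to straighten a
# curved isoenergy line.  The shift model is hypothetical, and np.roll wraps
# pixels around the edge, which a real analysis would instead crop or mask.
import numpy as np

def project_spectrum(img, row_shift=None):
    """Sum along axis 0 after shifting each row by an integer pixel offset."""
    if row_shift is None:
        return img.sum(axis=0)
    out = np.zeros(img.shape[1])
    for i, row in enumerate(img):
        out += np.roll(row, -int(round(row_shift[i])))
    return out
```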

users need a hook to do other image processing (e.g. for "visible spectroscopy")

