
Timing Measurements

(courtesy of Alvaro Sanchez-Gonzalez)

I timed the function in charge of splitting the bunches using each of the three algorithms. The following numbers are the time in seconds to split a single image (averaged over 20 runs):

  • scipyLabel: 0.057 s
  • autothreshold: 0.31 s (with 25 iterations)
  • autothreshold: 0.17 s (with 16 iterations, which should be enough)
  • contourLabel: 1.18 s

These tests were done applying an snrfilter of 3. This is quite critical, as the time taken by the labeling function and the island size check depends strongly on the number of islands, and the weaker the SNR filter, the more noise islands remain. Running the same tests with snrfilter=5 gives the following times:

  • scipyLabel: 0.015 s
  • autothreshold: 0.24 s (with 25 iterations)
  • autothreshold: 0.17 s (with 16 iterations, which should be enough)
  • contourLabel: 0.54 s

The contourLabel algorithm is the one that improves the most. This is because most of the iterations to split the islands are run at a low threshold, which increases in small steps, so it benefits the most from not having islands made of noise. However, in this case you would be throwing away more of the image information from the beginning, which is the opposite of what we wanted to achieve by following the contours.

Algorithm Steps

(courtesy of Mihir Mongia)

We believe XTCAVRetrieval.SetCurrentEvent(evt) calls the following three "steps":

Routine ProcessShotStep1:

  • overall goal: subtract dark, denoise, signal ROI, bunch split
  • subtract dark image
  • run denoising algorithm (median filter)
  • median filter (for smoothing):
    • looks at each pixel and its neighbors (the neighborhood size can be specified in some manner not yet understood)
    • takes the median of that set and sets the center pixel value to the median
  • look at noise region
  • subtract mean of noise from whole image
  • anything > 10 (can be overridden) standard deviations of the noise is kept; anything < 10 stddevs is set to zero
  • normalize image so sum=1
  • assumes dark image is larger than the shot image ROI (xmin,xmax,ymin,ymax probably coming from EPICS)
  • looks at max value in normalized image
  • takes all pixels > 0.2*max; these "stay" in the ROI, and the software "draws a rectangle" around them to keep the ROI rectangular
  • expands the rectangle dimensions by 2.5 (from the center, user-settable) to bring all interesting pixels into the ROI with high likelihood
  • calls splitimage (says 'not done' in code?) to handle the bunches
    • this calls IslandSplitting, which calls scipy.ndimage.measurements.label on a boolean image, where the threshold for computing the boolean is zero (this is not really Otsu's method, we believe)
  • splitimage returns a 3D array where first dimension is the bunch (i.e. a set of rectangular ROIs)
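The Step1 pipeline above can be sketched roughly as follows. This is a minimal illustration, not the actual XTCAV code: the function name, the noise-region handling, and the default nsigma=10 threshold are assumptions based on the notes above.

```python
import numpy as np
from scipy import ndimage

def process_shot_step1(image, dark, noise_region, nsigma=10):
    """Rough sketch of ProcessShotStep1: dark subtraction, median-filter
    denoising, noise thresholding, normalization, and bunch splitting."""
    # subtract the dark image and smooth with a median filter
    img = ndimage.median_filter(image - dark, size=3)
    # estimate noise statistics from a signal-free region of the image
    noise = img[noise_region]
    img = img - noise.mean()
    # keep only pixels above nsigma standard deviations of the noise
    img[img < nsigma * noise.std()] = 0.0
    # normalize the image so it sums to 1
    total = img.sum()
    if total > 0:
        img = img / total
    # bunch splitting: label connected islands of nonzero pixels
    labels, nislands = ndimage.label(img > 0)
    # return a 3D array whose first dimension is the bunch
    return np.stack([np.where(labels == i + 1, img, 0.0)
                     for i in range(nislands)])
```

The ROI cropping and 2.5x expansion steps are omitted here for brevity; they would sit between the normalization and the island labeling.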

ProcessShotStep2

  • overall goal: convert x to time, and y to energy (calibration, probably use EPICS variables)
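A minimal sketch of what such a calibration might look like, assuming simple linear scales. The scale and offset constants here are placeholders; in the real code they would presumably come from the EPICS variables mentioned above.

```python
import numpy as np

def calibrate_axes(nx, ny, fs_per_pixel=0.5, mev_per_pixel=2.0,
                   t0_pixel=0, e0_mev=0.0):
    """Illustrative linear calibration: map the horizontal pixel axis to
    time (fs) and the vertical pixel axis to energy (MeV).
    All constants are hypothetical placeholders, not real XTCAV values."""
    x = np.arange(nx)
    y = np.arange(ny)
    time_fs = (x - t0_pixel) * fs_per_pixel
    energy_mev = e0_mev + y * mev_per_pixel
    return time_fs, energy_mev
```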

ProcessShotStep3

  • overall goal: calculate power profile (power, time arrays)
  • calculate center-of-mass vector (1 number per time bin) and ERMS (energy RMS).  this may be related to the current-projection.
  • take the lasing-on image, project it onto the time axis to get the current, and normalize
  • loop through the lasing-off images and do dot products to find the most similar one
  • subtract the lasing-on center-of-mass vector from the lasing-off vector to get the power, and similarly for the sigma method (although for some reason the sign of the subtraction is opposite)
  • things not understood: bunch delay
  • calculate the power profile (delta and sigma methods)
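The Step3 matching and delta-method subtraction above can be sketched as follows. This is an illustration of the described steps under assumed conventions (energy along the first image axis, time along the second); the returned quantity would still need the physical scaling to become an actual power.

```python
import numpy as np

def power_profile(lasing_on, lasing_off_refs, energy_axis):
    """Sketch of ProcessShotStep3: pick the most similar lasing-off
    reference by current-profile dot product, then take the per-time-bin
    difference of energy centers of mass (delta method)."""
    def current(img):
        # projection onto the time axis, normalized
        c = img.sum(axis=0)
        return c / c.sum()

    def com(img):
        # energy center of mass, one number per time bin (column)
        c = img.sum(axis=0)
        weighted = (energy_axis[:, None] * img).sum(axis=0)
        return np.where(c > 0, weighted / np.where(c > 0, c, 1.0), 0.0)

    cur_on = current(lasing_on)
    # find the lasing-off image whose current profile is most similar
    scores = [np.dot(cur_on, current(ref)) for ref in lasing_off_refs]
    best = lasing_off_refs[int(np.argmax(scores))]
    # delta method: lasing-off COM minus lasing-on COM
    # (per the notes, the sigma method uses the opposite sign)
    return com(best) - com(lasing_on)
```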

Then Call XRayPower

  • averages the delta/sigma results (not weighted)