...

The white light, now "imprinted" with the x-ray pulse, is dispersed by a diffraction grating onto a camera. Here's what the raw signal looks like, typically:

[Image: typical raw timetool camera signal]
If you take a projection of this image along the y-axis, you obtain a trace similar to this:
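As a minimal illustration (using a random stand-in for the camera image, since the real data comes from the DAQ), the projection is just a sum over the y-axis:

```python
import numpy as np

# stand-in for the raw camera image: 100 rows (y) by 1024 spectral columns (x)
rng = np.random.default_rng(0)
image = rng.poisson(100.0, size=(100, 1024)).astype(float)

# project along the y-axis: collapse the rows, leaving one value per spectral pixel
trace = image.sum(axis=0)   # shape (1024,)
```

In practice you would restrict the sum to the rows of the ROI containing the signal rather than the full chip.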

...

Info: Where do the filter weights come from?
From Matt Weaver, 8/23/17 (including extra thoughts transcribed by cpo from Matt on 02/26/20):

The spectrum ratio is treated as a waveform, and that waveform is analyzed with a Wiener filter in the "time" domain. It is actually a matched filter where the noise is characterized by taking the ratio of non-exposed spectra from different images. That ratio should be flat, but instability of the reference optical spectrum causes it to have correlated movements. The procedure for producing the matched-filter weights goes like this:

  1. Collect several non-x-ray-exposed spectra ("dropped shots", event code 162).
  2. Define an ROI (e.g. 50 pixels) around the signal region. Use this ROI for all the calculations below, although it has to move to follow the edge location when the averaged signal is computed (step 5).
  3. For each background shot, divide by the background averaged over many shots ("normalize").
  4. Calculate the auto-correlation function ("acf") of those normalized waveforms: correlate each sample with each sample n away in the same waveform, creating an array of the same size as the input waveform. The acf is the average of all those products.
  5. Collect your best averaged ratio (signal divided by averaged background); this averaging smooths out any non-physics features (the etalon effect won't average out).
  6. Matt says it's not so scientific how one obtains this "averaged signal": one could find several events, center the edge in a window, and then average as many events as it takes for it to "look good", or just take one event and smooth out the background somehow. He did the latter when he computed the weights.

There's a script (https://github.com/lcls-psana/TimeTool/blob/master/data/timetool_setup.py) which shows the last steps of the calibration process that produces the matched-filter weights from the autocorrelation function and the signal waveform. That script has the auto-correlation function and averaged signal hard-coded into it, but it shows the procedure. It requires some manual intervention to get a sensible answer, since there are often undesirable features in the signal that the algorithm picks up and tries to optimize towards. The fundamental formula in that script is weights = scipy.linalg.inv(scipy.linalg.toeplitz(acf)) @ averaged_signal, i.e. a matrix-vector product of the inverted Toeplitz noise-covariance matrix with the averaged signal.

The above procedure optimizes the filter to reject the background. Matt doesn't currently remember a physical picture of why the "toeplitz" formula optimizes the weights to reject background. If one wants to simplify by ignoring the background suppression optimization, the "average signal" (ignoring the background) can also be used as a set of weights for np.convolve.
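The final step can be sketched as follows. The acf and averaged_signal arrays here are synthetic stand-ins (a toy exponential acf and a smooth edge), not real calibration data:

```python
import numpy as np
import scipy.linalg

n = 50                                    # ROI width, in pixels
x = np.arange(n, dtype=float)

# synthetic stand-ins for the two calibration inputs described above
averaged_signal = 1.0 / (1.0 + np.exp(-(x - n / 2) / 3.0))  # smooth edge shape
acf = np.exp(-x / 10.0)                   # toy background autocorrelation

# weights = R^-1 s, where R is the Toeplitz noise-covariance built from the acf
R = scipy.linalg.toeplitz(acf)
weights = scipy.linalg.inv(R) @ averaged_signal

# the weights are then applied to each shot's normalized ratio waveform,
# e.g. with np.convolve; here we just filter the averaged signal itself
response = np.convolve(averaged_signal, weights[::-1], mode="same")
```

Numerically, scipy.linalg.solve_toeplitz(acf, averaged_signal) gives the same weights without forming the full inverse, which is preferable for larger ROIs.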

...

Results can be accessed using the following epics-variables names:

TTSPEC:FLTPOS - the position of the edge, in pixels
TTSPEC:AMPL - amplitude of the biggest edge found by the filter
TTSPEC:FLTPOSFWHM - the FWHM of the edge, in pixels
TTSPEC:AMPLNXT - amplitude of the second-biggest edge found by the filter; good for rejecting bad fits
TTSPEC:REFAMPL - amplitude of the background "reference" which is subtracted before running the filter algorithm; good for rejecting bad fits
TTSPEC:FLTPOS_PS - the position of the edge, in picoseconds; requires that correct calibration constants were put in the DAQ when the data was acquired. Few people do this, so be wary.

These are usually prepended with the hutch name, so e.g. at XPP they will be "XPP:TTSPEC:FLTPOS". The timetool camera (an OPAL) should also be in the datastream.
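These values are commonly used to reject bad shots event by event. A sketch of such quality cuts follows; the function name and all threshold values are made up for illustration and must be tuned per experiment:

```python
def good_timetool_shot(ampl, ampl_nxt, ref_ampl,
                       min_ampl=0.02, max_next_ratio=0.5, min_ref=500.0):
    """Illustrative quality cuts on the TTSPEC results for one event.

    Rejects shots with a weak edge, an ambiguous second edge of
    comparable amplitude, or a dead/weak reference spectrum.
    All thresholds are placeholders, not recommended values.
    """
    if ampl < min_ampl:
        return False                      # no convincing edge found
    if ampl_nxt > max_next_ratio * ampl:  # second edge too similar in amplitude
        return False
    if ref_ampl < min_ref:                # reference background missing or weak
        return False
    return True

# a clear single edge passes; two edges of similar amplitude do not
print(good_timetool_shot(0.08, 0.01, 2000.0))  # True
print(good_timetool_shot(0.08, 0.07, 2000.0))  # False
```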

...

Code Block (py): Conversion of FLTPOS into ps delay

def relative_time(edge_position, nominal_delay):
    """
    Translate edge position into fs (for cxij8816)

    from docs >> fs_result = a + b*x + c*x^2, x is edge position
    """
    # experiment-specific polynomial calibration constants
    a = -0.0013584927458976459
    b =  3.1264188429430901e-06
    c = -1.1172611228659911e-09
    # timetool correction evaluated at the measured edge position
    tt_correction = a + b*edge_position + c*edge_position**2
    # combine with the nominal (laser-stage) delay; sign/scale per the DAQ convention
    return -1000*(nominal_delay + tt_correction)

...


Rolling Your Own

Hopefully you now understand how the timetool works, how the DAQ analysis works, and how to access and validate those results. If your DAQ results look unacceptable for some reason, you can try to re-process the timetool signal. If, right now, you are thinking "I need to do that!", you have a general idea of how to go about it. If you need further help, get in touch with the LCLS data analysis group. In general we'd be curious to hear about situations where the DAQ processing does not work and needs improvement.

...

  1. It is possible to re-run the DAQ algorithm offline in psana, with e.g. different filter weights or other settings. This is documented extensively.
  2. There is some experimental python code for use in situations where the etalon signal is very strong and screws up the analysis. It also simply re-implements a version of the DAQ analysis in python, rather than C++, which may be easier to customize. This is under active development and should be considered use-at-your-own-risk. Get in touch with TJ Lane <tjlane@slac.stanford.edu> if you think this would be useful for you.

 

References

https://opg.optica.org/oe/fulltext.cfm?uri=oe-19-22-21855&id=223755

https://www.nature.com/articles/nphoton.2013.11

https://opg.optica.org/oe/fulltext.cfm?uri=oe-28-16-23545&id=433772