...

The white light, now "imprinted" with the x-ray pulse, is dispersed by a diffraction grating onto a camera. Here's what the raw signal typically looks like:

[raw timetool camera image]
If you take a projection of this image along the y-axis, you obtain a trace similar to this:

...

The FWHM of the convolution and its amplitude are good indicators of how well this worked, and are also reported.

...

Info
titleWhere do the filter weights come from?
From Matt Weaver, 8/23/17 (including extra thoughts transcribed by cpo from Matt on 02/26/20):

The spectrum ratio is treated as a waveform, and that waveform is analyzed with a Wiener filter in the "time" domain. It is actually a matched filter in which the noise is characterized by taking the ratio of non-exposed spectra from different images – this ratio should be flat, but instability of the reference optical spectrum gives it correlated movements. So, the procedure for producing those matched filter weights goes like this:

  1. Collect several non-x-ray-exposed spectra ("dropped shots" or "event code 162").
  2. Define an ROI (e.g. 50 pixels) around the signal region. Use this ROI for all the calculations below, although it has to move to follow the edge location when the averaged signal is computed (step 5). Matt says it's not so scientific how one obtains this "averaged signal": one could find several events, center the edge in a window, and then average as many events as it takes for the result to "look good" – or just take one event and smooth out the background somehow. He did the latter when he computed the weights.
  3. For each background shot, divide by the background averaged over many shots ("normalize").
  4. Calculate the autocorrelation function ("acf") of those normalized waveforms: correlate each sample with the sample n away in the same waveform, creating an array of the same size as the input waveform. The acf is the average of all those products.
  5. Collect your best averaged ratio (signal divided by averaged background) with signal present; this averaging smooths out any non-physics features.

There's a script (https://github.com/lcls-psana/TimeTool/blob/master/data/timetool_setup.py) which shows the last steps of the calibration process that produces the matched filter weights from the autocorrelation function and the signal waveform.  That script has the auto-correlation function and averaged-signal hard-coded into it, but it shows the procedure.  It requires some manual intervention to get a sensible answer, since there are often undesirable features in the signal that the algorithm picks up and tries to optimize towards. The fundamental formula in that script is weights=scipy.linalg.inv(scipy.linalg.toeplitz(acf))*(averaged_signal).

The above procedure optimizes the filter to reject the background. Matt doesn't currently remember a physical picture of why the "toeplitz" formula optimizes the weights to reject background. If one wants to simplify by ignoring the background suppression optimization, the "average signal" (ignoring the background) can also be used as a set of weights for np.convolve.
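To make the final step concrete, here is a hedged numpy/scipy sketch of producing matched-filter weights from an acf and an averaged signal. Both inputs here are toy stand-ins (not the hard-coded values from timetool_setup.py), and numpy.linalg.solve is used instead of explicitly inverting the Toeplitz matrix:

Code Block
languagepy
titleMatched-filter weights from the acf and averaged signal (sketch)
import numpy as np
from scipy.linalg import toeplitz

n = 50  # example ROI width, in pixels

# toy stand-ins for the measured noise acf and the averaged edge signal
acf = np.exp(-np.arange(n) / 10.0)
averaged_signal = 1.0 / (1.0 + np.exp(-(np.arange(n) - n / 2.0) / 3.0))  # step-like edge

# weights solve R w = s, with R the Toeplitz noise covariance built from the acf
weights = np.linalg.solve(toeplitz(acf), averaged_signal)

# these weights would then be used as the filter, e.g. np.convolve(trace, weights, mode='same')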

The astute reader will notice that this trace has no etalon wiggles. That is because it has been cleaned up by subtracting an x-ray-off shot (BYKICK). Those events have the same etalon effect, but no edge – subtracting them removes the etalon. That's a good thing, because the etalon wiggles would give this method trouble if they were large in amplitude.

So, to summarize, here is what the DAQ analysis code does (a rough code sketch follows the list):

  1. Extract the TT trace using an ROI + projection
  2. Subtract the most recent x-ray off (BYKICK) trace
  3. Convolve the trace with a filter
  4. Report the position, amplitude, FWHM of the resulting peak
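
Putting those four steps together, here is a minimal numpy sketch of the analysis. The names and the crude FWHM estimate are hypothetical illustrations, not the actual DAQ implementation (which is C++):

Code Block
languagepy
titleSketch of the 4-step DAQ analysis
import numpy as np

def analyze_tt(image, xray_off_trace, filter_weights, roi):
    """Hypothetical numpy version of the four DAQ steps above."""
    r0, r1 = roi
    trace = image[r0:r1, :].sum(axis=0)                     # 1. ROI + projection
    trace = trace - xray_off_trace                          # 2. subtract BYKICK trace
    conv = np.convolve(trace, filter_weights, mode='same')  # 3. convolve with filter
    pos = int(np.argmax(conv))                              # 4. peak position (pixels)
    amp = float(conv[pos])
    above = np.where(conv > amp / 2.0)[0]                   #    crude FWHM estimate
    fwhm = float(above[-1] - above[0]) if above.size else 0.0
    return pos, amp, fwhm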

The reported position will be in PIXELS. That is not so useful! To convert to femtoseconds, we need to calibrate the TT.

Info
titleImportant Note on How to Compute Time Delays

The timetool only gives you a small correction to the laser–x-ray delay. The "nominal" delay is set by the laser, which is phase-locked to the x-rays. The timetool measures jitter around that nominal delay. So you should compute the final delay as:

delay = nominal_delay + timetool_correction

Since different people have different conventions about which sign corresponds to "pump early" vs. "pump late", you must exercise caution that you are doing the right thing here. Ensure that things are what you think they are. If possible, figure this out before your experiment begins, or early in running, and write it down. Force everyone to use the same conventions. Especially if you are on night shift :).

The "nominal delay" should be easily accessible as a PV. Unfortunately, it will vary hutch-to-hutch. Sorry. Ask your beamline scientist or PCDS PoC for help.

How to get the results

Results can be accessed using the following EPICS variable names:

TTSPEC:FLTPOS – the position of the edge, in pixels
TTSPEC:AMPL – amplitude of the biggest edge found by the filter
TTSPEC:FLTPOSFWHM – the FWHM of the edge, in pixels
TTSPEC:AMPLNXT – amplitude of the second-biggest edge found by the filter; good for rejecting bad fits
TTSPEC:REFAMPL – amplitude of the background "reference" which is subtracted before running the filter algorithm; good for rejecting bad fits
TTSPEC:FLTPOS_PS – the position of the edge, in picoseconds; requires that the correct calibration constants were put in the DAQ when the data was acquired. Few people do this, so be wary.

These are usually prepended by the hutch name, so e.g. at XPP they will be "XPP:TTSPEC:FLTPOS". The TT camera (an OPAL) should also be in the datastream.
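
If you just want to peek at these values live (rather than from recorded data), any EPICS client will do. Here is a minimal sketch using pyepics, assuming it is available in your environment and using the XPP prefix as an example:

Code Block
languagepy
titleReading TT PVs live with pyepics (sketch)
from epics import caget  # pyepics

# the hutch prefix is an example -- substitute your own
pos  = caget('XPP:TTSPEC:FLTPOS')
amp  = caget('XPP:TTSPEC:AMPL')
fwhm = caget('XPP:TTSPEC:FLTPOSFWHM')
print('edge at %s px, amplitude %s, FWHM %s px' % (pos, amp, fwhm))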

Here is an example of how to get these values from psana:

Code Block
languagepy
firstline1
titleAccessing TT Data in psana (python)
linenumberstrue
import psana

ds = psana.MPIDataSource('exp=cxij8816:run=221')

evr       = psana.Detector('NoDetector.0:Evr.0')
tt_camera = psana.Detector('Timetool')
tt_pos    = psana.Detector('CXI:TTSPEC:FLTPOS')
tt_amp    = psana.Detector('CXI:TTSPEC:AMPL')
tt_fwhm   = psana.Detector('CXI:TTSPEC:FLTPOSFWHM')

for i, evt in enumerate(ds.events()):
    tt_img = tt_camera.raw(evt)   # raw OPAL camera image for this shot
    pos    = tt_pos(evt)          # edge position (pixels)
    amp    = tt_amp(evt)          # edge amplitude
    fwhm   = tt_fwhm(evt)         # edge FWHM (pixels)
    # <...>
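
The evr detector defined above can be used inside the same loop to tag x-ray-off (BYKICK, dropped) shots by event code. A hedged sketch; the relevant code (e.g. 162, mentioned in the filter-weights note above) depends on your configuration:

codes = evr.eventCodes(evt)    # list of EVR event codes present on this shot
if codes is not None and 162 in codes:
    pass    # treat this shot as an x-ray-off reference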

How to know it did the right thing

Cool! OK, so how do you know if it worked? Here are a number of things to check:

  1. Plot some TT traces with the edge position on top of them, and make sure the edge found is reasonable.
  2. Look at a "jitter histogram", that is, the distribution of edge positions found at a fixed time delay (see the sketch after this list). The distribution should be approximately Gaussian – not perfectly, but it should not be multi-modal.
  3. Do pairwise correlation plots between TTSPEC:FLTPOS / TTSPEC:AMPL / TTSPEC:FLTPOSFWHM, and ensure that you don't see anything "weird". They should form nice blobs – maybe not perfectly Gaussian, but without big outliers, periodic behavior, or "streaks".
  4. Analyze a calibration run, where you change the delay in a known fashion, and make sure your analysis matches that known delay (see next section).
  5. The gold standard: do your full data analysis and look for weirdness. Physics can be quite sensitive to issues! But it can often be difficult to troubleshoot this way, as many things could have caused your experiment to go wrong.
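
For check 2, here is a minimal matplotlib sketch of a jitter histogram. The edge positions are synthetic stand-ins, not real data:

Code Block
languagepy
titleJitter histogram sketch (synthetic data)
import numpy as np
import matplotlib.pyplot as plt

# synthetic stand-in for TTSPEC:FLTPOS values at one fixed nominal delay
tt_pos_px = np.random.normal(loc=450.0, scale=30.0, size=5000)

plt.hist(tt_pos_px, bins=100)
plt.xlabel('edge position (pixels)')
plt.ylabel('shots')
plt.show()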

Outliers in the timetool analysis are common, and most people throw away shots that fall outside obvious "safe" regions. That is totally normal and typically will not skew your results.

Tip
titleFiltering TT Data

If you plot the measured timetool delay (e.g. the laser delay vs. the measured filter position, TTSPEC:FLTPOS, while keeping the TT stage stationary), it is quite noisy, with many outliers. This can be greatly cleaned up by filtering based on the TT peak.

I (TJ, <tjlane@slac.stanford.edu>) have found the following "vetos" on the data to work well, though they are quite conservative (they throw away a decent amount of data). That said, they should greatly clean up the TT response and have let me proceed in a "blind" fashion:

Require:

tt_amp [TTSPEC:AMPL] > 0.05 AND

50.0 < tt_fwhm [TTSPEC:FLTPOSFWHM] < 300.0

This was selected based on one experiment (cxii2415, run 65) and cross-validated against another (cxij8816, run 69).
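
Here is a minimal sketch of applying these vetos offline; the per-shot arrays are hypothetical stand-ins for values collected from the PVs above:

Code Block
languagepy
titleApplying the TT vetos (sketch)
import numpy as np

# hypothetical per-shot values from TTSPEC:AMPL, TTSPEC:FLTPOSFWHM, TTSPEC:FLTPOS
tt_amp  = np.array([0.01, 0.12, 0.08, 0.30])
tt_fwhm = np.array([40.0, 120.0, 90.0, 500.0])
tt_pos  = np.array([310.0, 415.0, 402.0, 880.0])

good = (tt_amp > 0.05) & (tt_fwhm > 50.0) & (tt_fwhm < 300.0)
print(tt_pos[good])   # only shots passing both vetos survive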

Always Calibrate! Why and How.

As previously mentioned, the timetool edge is found and reported in pixels along the OPAL camera (i.e. arbitrary spatial units), and must be converted into a time delay (in femtoseconds). Because the TT response is a function of geometry, and that geometry can change even during an experiment (thermal expansion, changing laser alignment, different TT targets, etc.), frequent calibration is recommended. A good baseline is to calibrate once per shift, and again whenever something affecting the TT changes.

To calibrate, we change the laser delay (which affects the white light) while keeping the TT delay stage constant. This causes the edge to traverse the camera from one end to the other, as the white-light delay changes with the changing propagation length. Because we know the speed of light, we can compute the change in time delay, and use that known value to calibrate how far the edge moves (in pixels) for a given change in time delay.

Two practicalities to remember:

  1. There is inherent jitter in the arrival time between the x-rays and the laser (remember, this is why we need the TT!). So to do this calibration, we have to average out that jitter across many shots. The jitter is typically roughly Gaussian, so this works.
  2. The delay-to-edge conversion is not, in general, perfectly linear. A phenomenological 2nd-order polynomial fit is in common use at LCLS and seems to work well.

Here's a typical calibration result from CXI, with vetos applied:

[calibration plot image]

FIT RESULTS
fs_result = a + b*x + c*x^2,  x is edge position
------------------------------------------------
a = -0.001196897053
b = 0.000003302866
c = -0.000000001349
------------------------------------------------
fit range (tt pixels): 200 <> 800
time range (fs):       -0.000590 <> 0.000582
------------------------------------------------
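
A fit of this form can be reproduced offline with numpy. A minimal sketch, where the scan arrays are hypothetical stand-ins (use the averaged edge positions and known delays from your own calibration run):

Code Block
languagepy
titleFitting the pixel-to-time polynomial (sketch)
import numpy as np

# hypothetical calibration scan: mean edge position (pixels) at each known laser delay
edge_pos_px = np.linspace(200.0, 800.0, 13)
known_delay = np.linspace(-0.6, 0.6, 13)   # in whatever units your delay PV reports

# 2nd-order polynomial: delay = a + b*x + c*x**2, x = edge position
# np.polyfit returns the highest-order coefficient first
c, b, a = np.polyfit(edge_pos_px, known_delay, deg=2)
print('a = %g   b = %g   c = %g' % (a, b, c))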

Unfortunately, right now CXI, XPP, and AMO have different methods for doing this calibration. Talk to your beamline scientist about how to do it and process the results.

Conversion from FLTPOS pixels into ps

Once you do this calibration, it should be possible to write a simple function that gives you the corrected delay between the x-rays and the optical laser. For example, the following code snippet was used for an experiment at CXI. NOTE THIS MAY CHANGE GIVEN YOUR HUTCH AND CONVENTIONS. But it should be a good starting point :)

Code Block
languagepy
titleConversion of FLTPOS into PS delay
def relative_time(edge_position, laser_delay):
    """
    Translate an edge position into fs (for cxij8816)

    from docs >> fs_result = a + b*x + c*x^2, x is edge position

    edge_position : TTSPEC:FLTPOS for the event (pixels)
    laser_delay   : the nominal laser delay for the event
    """
    a = -0.0013584927458976459
    b =  3.1264188429430901e-06
    c = -1.1172611228659911e-09
    tt_correction = a + b*edge_position + c*edge_position**2
    # the sign flip and factor of 1000 reflect this experiment's conventions
    return -1000*(laser_delay + tt_correction)
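
In use, this would be called once per event, e.g. relative_time(tt_pos(evt), las_stg(evt)), where tt_pos is the FLTPOS detector from the psana example above and las_stg is a hypothetical psana Detector for your hutch's laser delay PV.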


Rolling Your Own

Hopefully you now understand how the timetool works, how the DAQ analysis works, and how to access and validate those results. If the DAQ results look unacceptable for some reason, you can re-process the timetool signal yourself, and the sections above should give you a general idea of how to go about it. If you need further help, get in touch with the LCLS data analysis group. In general, we'd be curious to hear about situations where the DAQ processing does not work and needs improvement.

...

  1. It is possible to re-run the DAQ algorithm offline in psana, with e.g. different filter weights or other settings. This is documented extensively.
  2. There is some experimental python code for use in situations where the etalon signal is very strong and screws up the analysis. It also simply re-implements a version of the DAQ analysis in python, rather than C++, which may be easier to customize. This is under active development and should be considered use-at-your-own-risk. Get in touch with TJ Lane <tjlane@slac.stanford.edu> if you think this would be useful for you.

 


References

https://opg.optica.org/oe/fulltext.cfm?uri=oe-19-22-21855&id=223755

https://www.nature.com/articles/nphoton.2013.11

https://opg.optica.org/oe/fulltext.cfm?uri=oe-28-16-23545&id=433772