
Introduction

XTCAV is a detector used to determine the laser power vs. time of each LCLS shot. Alvaro Sanchez-Gonzalez has written psana-python code to do the rather complex analysis of images from the XTCAV camera to determine these quantities. Detailed documentation from Tim Maxwell on this device is available here.

Analysis Setup

Two things must be done before XTCAV analysis will function: a "dark" run must be analyzed to get the camera pedestal values, and a "no lasing" run must be analyzed to generate sets of no-lasing reference images (the latter is quite a complex process). An example of a dark-run analysis is:

#!/usr/bin/env python
from xtcav.GenerateDarkBackground import *
GDB=GenerateDarkBackground();
GDB.experiment='amoc8114'
GDB.runs='85'
GDB.maxshots=1000
GDB.SetValidityRange(85,109) # delete second run number argument to have the validity range be open-ended ("end")
GDB.Generate();

An example of a lasing-off (no lasing) run analysis is:

#!/usr/bin/env python
from xtcav.GenerateLasingOffReference import *
GLOC=GenerateLasingOffReference();
GLOC.experiment='amoc8114'
GLOC.runs='86'
GLOC.maxshots=1401
GLOC.nb=1
GLOC.islandSplitMethod = 'contourLabel'       # see documentation below for how to set this parameter
GLOC.groupsize=5             # see documentation below for how to set this parameter
GLOC.SetValidityRange(86,91) # delete second run number argument to have the validity range be open-ended ("end")
GLOC.Generate();

Once the above has been completed, the user can analyze the lasing-on events.

Example Analysis Script

This script assumes that the dark and lasing-off data have been analyzed (see above).

import psana
from xtcav.ShotToShotCharacterization import *
experiment='amoc8114'  #Experiment label
runs='87'              #Runs
#Load the dataset for the lasing-on run; this way of working should be compatible with both xtc and hdf5 files
dataSource=psana.DataSource("exp=%s:run=%s:idx" % (experiment,runs))
#XTCAV Retrieval (setting the data source is useful to get information such as experiment name)
XTCAVRetrieval=ShotToShotCharacterization();
XTCAVRetrieval.SetEnv(dataSource.env())
for r,run in enumerate(dataSource.runs()):
    times = run.times()
    for t in times:
        evt = run.event(t)
        if not XTCAVRetrieval.SetCurrentEvent(evt):
            continue
        time,power,ok=XTCAVRetrieval.XRayPower()  
        agreement,ok=XTCAVRetrieval.ReconstructionAgreement()
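        # --- usage sketch, not part of the original example ---
        # Skip events where the retrieval failed or where the agreement with
        # the lasing-off reference is poor (the 0.5 cut is an arbitrary
        # illustrative value, not a recommended setting).
        if not ok or agreement < 0.5:
            continue
        # 'time' and 'power' appear to be indexed by bunch (first index), so
        # the profile of the first bunch would be time[0], power[0].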

How Often to Take a Lasing Off Run

(courtesy of Tim Maxwell)

That's a very good question. For very stable accelerator conditions, you might not really need to take one more than every hour or two. But, for example, with that AMO experiment it drifted measurably over as little as twenty minutes, since the beam was a tricky setup and some feedbacks needed disabling.

There's not really a hard-and-fast rule here. "When necessary or when time allows" has been the practical answer so far.

Lasing-off Analysis Parameters

(courtesy of Alvaro Sanchez-Gonzalez)

-nb: Number of bunches. This one is obvious. Just as a note for the future: if at some point you want to do a two-bunch partial reconstruction (i.e. without a lasing-off reference), you will need to specify the number of bunches explicitly in an undocumented way (i.e. by changing an attribute that has no get/set method). Example:

XTCAVRetrieval=ShotToShotCharacterization();
XTCAVRetrieval.SetDataSource(dataSource)
XTCAVRetrieval._nb=2
If you do this but it turns out that a reference is available and its number of bunches is different, the number of bunches in the reference file will be used.

-n: number of images (maxshots in the scripts above). In principle the bigger the better, but around 1400 used to work fine. If you select a bigger number you will get more references, but it will also take longer to find the right reference shot to shot.

-groupsize: 5 is a good number. Increasing this number will give less noisy references, but it may also make them less accurate, since you would be averaging a larger number of profiles. If you decide to increase this number by some factor, I would also increase n by the same factor, so that in the end the total number of references is the same (see also the thoughts from Tim Maxwell below).

-islandSplitMethod: Several image-processing algorithms have been created to separate bunches that appear in the same image. Each algorithm is advantageous under certain conditions, so it may be worth trying all of them to see which achieves the best results. The default value is 'scipylabel'; if islandSplitMethod is not set, that is what will be used. To change from the default, use e.g. GLOC.islandSplitMethod = 'contourLabel'.

-roiexpand and roiwaistthres: To check these, call the method 'ProcessedXTCAVImage'. If the image does not look clipped, the parameters are fine.
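
A minimal sketch of that check (not from the xtcav documentation), meant to run inside the event loop of the example script above. It assumes ProcessedXTCAVImage() is called on the ShotToShotCharacterization object and returns the processed image stack plus an ok flag, like the other retrieval methods; check the xtcav source for the exact signature.

import matplotlib.pyplot as plt
# ...inside the event loop, after SetCurrentEvent(evt) has succeeded:
image, ok = XTCAVRetrieval.ProcessedXTCAVImage()   # assumed return signature
if ok:
    plt.imshow(image[0])   # first detected bunch; look for clipping at the ROI edges
    plt.show()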

-snrfilter: this one is really sensitive for two bunches, because it sets a threshold based on the noise that gets higher as the parameter goes up. This means it is the factor that can separate the two islands in the case of two bunches, so for two-bunch running it needs some tweaking until the two bunches are separated for most of the shots. Bottom line:

  • For a single bunch: as low as possible while still removing the noise (i.e. not getting noisy current profiles).
  • For a double bunch: as low as possible while still separating the two bunches. Call the 'ProcessedXTCAVImage' method again and look at the size of axis 0: this is the number of bunches detected. You can probably write a program to automate the determination of this parameter until you get, say, 95% two-bunch detection (a sketch of such a check follows this list).
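
A rough sketch of such a check (not part of the xtcav package). It reuses the setup from the example analysis script above and again assumes ProcessedXTCAVImage() returns the processed image stack plus an ok flag; the experiment and run number are just the ones used above.

import psana
from xtcav.ShotToShotCharacterization import *

ds = psana.DataSource("exp=amoc8114:run=87:idx")
XTCAVRetrieval = ShotToShotCharacterization()
XTCAVRetrieval.SetEnv(ds.env())
nevents = 0
ntwobunch = 0
for run in ds.runs():
    for t in run.times():
        evt = run.event(t)
        if not XTCAVRetrieval.SetCurrentEvent(evt):
            continue
        image, ok = XTCAVRetrieval.ProcessedXTCAVImage()   # assumed return signature
        if not ok:
            continue
        nevents += 1
        if image.shape[0] == 2:   # axis 0 indexes the detected bunches
            ntwobunch += 1
if nevents:
    # Tweak snrfilter in the lasing-off configuration until this is ~0.95
    print('two-bunch detection fraction: %.2f' % (float(ntwobunch) / nevents))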

Thoughts from Tim Maxwell on Number of Groups

Two cases:

  • normal SASE (not many beam-related fluctuations): take the square root of the number of shots and use that many groups (beat down background noise with lots of averaging). Tim suggests perhaps a maximum of 100 groups. See the sketch after this list.
  • slotted foil (many beam-related fluctuations): restrict it to 5-10 images per group and make as many groups as possible, typically using 30 seconds to a minute of data at 60 Hz.
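
For the normal-SASE case, that rule of thumb translates into something like the following when choosing the lasing-off parameters (an illustrative calculation, not code from the xtcav package; the 100-group cap is Tim's suggested maximum):

import math
maxshots = 1401                               # images in the lasing-off run
ngroups = min(int(math.sqrt(maxshots)), 100)  # sqrt rule, capped at ~100 groups
groupsize = max(maxshots // ngroups, 1)       # images averaged per group
print('ngroups=%d, groupsize=%d' % (ngroups, groupsize))  # 37 groups of 37 here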

120Hz Operation Issues

(Thoughts from Tim Maxwell on 2/2/2015, discouraging this mode of operation)

This was an option. However, it restricts the vertical energy ROI. For soft x-rays it would easily and frequently clip the image of the beam, as well as require very careful ROI management, and so may have compromised random images in your data.

Another largely untested option is a set of other running conditions recently found. This appears to delay the tagging of all images by one event (3 fiducials). However, we are absolutely not certain this behavior is consistent and therefore don't recommend it for critical data.

Detector Resolution

(From Tim Maxwell)

Time resolution is around 1.1 fs RMS for soft x-rays, or about 2.5 fs FWHM (in quadrature, of course). So the actual pulse length is probably 4.3 - 9.7 fs FWHM.

This also doesn't include the "slippage resolution." That is, if they're using the full undulator, then by the end the x-rays can have slipped out of the electron slice by ~3 fs for soft x-rays. Obviously not a small number if trying to make 5 fs pulses. They've been advised to not use the full undulator when shorter pulses are more important than number of photons.

Algorithm Details

(courtesy of Mihir Mongia)

We believe XTCAVRetrieval.SetCurrentEvent(evt) calls the following three "steps":

Routine ProcessShotStep1:

  • overall goal: subtract the dark image, denoise, find the signal ROI, and split the bunches (a rough sketch of these steps is given after this list)
  • subtract dark image
  • run a denoising algorithm (median filter)
  • median filter (for smoothing):
    • looks at a pixel and its neighbors (the number of neighbors can be specified in some manner not yet understood)
    • takes the median of that set and sets the center pixel value to the median
  • look at noise region
  • subtract mean of noise from whole image
  • keep anything > 10 standard deviations of the noise (the factor of 10 can be overridden); set anything < 10 standard deviations to zero
  • normalize image so sum=1
  • assumes dark image is larger than the shot image ROI (xmin,xmax,ymin,ymax probably coming from EPICS)
  • looks at max value in normalized image
  • takes all pixels > 0.2*max; those pixels "stay" in the ROI, and the software "draws a rectangle" around them to keep the ROI rectangular
  • expands the rectangle dimensions by 2.5 (from the center, user-settable) to bring all interesting pixels into the ROI with high likelihood
  • calls splitimage (says 'not done' in code?) to handle the bunches
    • this calls IslandSplitting, which calls scipy.ndimage.measurements.label on a boolean image, where the threshold for computing the boolean is zero (this is not really Otsu's method, we believe)
  • splitimage returns a 3D array where first dimension is the bunch (i.e. a set of rectangular ROIs)
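
A rough numpy/scipy sketch of the Step 1 processing as described above (an illustration only, not the actual xtcav code; the function name, the hard-coded constants, and the idea of passing in an explicit noise region are all assumptions):

import numpy as np
import scipy.ndimage as ndi

def process_step1_sketch(raw, dark, noise_slice, nsigma=10, roi_expand=2.5):
    """Illustrative dark-subtract / denoise / ROI / bunch-split steps
    (assumes a real signal is present in the image)."""
    img = raw.astype(float) - dark                 # subtract dark image
    img = ndi.median_filter(img, size=3)           # median-filter smoothing
    noise = img[noise_slice]                       # region assumed signal-free
    img -= noise.mean()                            # subtract mean of the noise
    img[img < nsigma * noise.std()] = 0            # keep only > nsigma*stddev
    img /= img.sum()                               # normalize so the sum is 1
    # Rectangular ROI around pixels above 20% of the maximum, expanded by
    # roi_expand about its center to catch all the interesting pixels.
    ys, xs = np.where(img > 0.2 * img.max())
    cy, cx = ys.mean(), xs.mean()
    hy = roi_expand * (ys.max() - ys.min()) / 2.0
    hx = roi_expand * (xs.max() - xs.min()) / 2.0
    y0, y1 = max(int(cy - hy), 0), min(int(cy + hy) + 1, img.shape[0])
    x0, x1 = max(int(cx - hx), 0), min(int(cx + hx) + 1, img.shape[1])
    roi = img[y0:y1, x0:x1]
    # Island splitting: label connected regions of the thresholded image
    # (the 'scipylabel'-style splitting; the first axis of the result
    # indexes the detected bunches, as noted above).
    labels, nbunches = ndi.label(roi > 0)
    return np.stack([roi * (labels == i + 1) for i in range(nbunches)])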

ProcessShotStep2

  • overall goal: convert x to time and y to energy (a calibration, probably using EPICS variables); see the sketch below
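
Presumably this is something like a linear pixel-to-physical-units mapping, as sketched below; the function and constant names are placeholders, and the actual calibration constants would come from EPICS.

def pixels_to_physical(x_pixels, y_pixels, x0, y0, fs_per_pixel, mev_per_pixel):
    """Hypothetical linear calibration from camera pixels to time/energy."""
    time_fs = (x_pixels - x0) * fs_per_pixel        # horizontal axis -> time
    energy_mev = (y_pixels - y0) * mev_per_pixel    # vertical axis -> energy
    return time_fs, energy_mev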

ProcessShotStep3

  • overall goal: calculate power profile (power, time arrays)
  • calculate the center-of-mass vector (1 number per time bin) and ERMS (energy RMS); this may be related to the current projection
  • take the lasing-on image, project it onto the time axis to get the current, and normalize
  • loop through the lasing-off images and do dot products to find the most similar one (see the sketch after this list)
  • subtract the lasing-on center-of-mass vector from the lasing-off vector to get the power, and similarly for the sigma method (although for some reason the sign of the subtraction is opposite)
  • things not understood: bunch delay
  • calculate the power profile (delta and sigma methods)
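
A schematic sketch of the reference matching and the delta-method power reconstruction described above (not the actual xtcav code; it assumes the current and center-of-mass profiles are already binned onto a common time axis, and the scale argument stands in for the calibration constants):

import numpy as np

def best_reference_index(current_on, ref_currents):
    """Pick the lasing-off reference whose normalized current profile has the
    largest dot product with the normalized lasing-on current."""
    current_on = current_on / np.linalg.norm(current_on)
    scores = [np.dot(current_on, r / np.linalg.norm(r)) for r in ref_currents]
    return int(np.argmax(scores))

def power_delta_sketch(com_on, com_off, scale=1.0):
    """Delta method: power per time bin proportional to the difference between
    the lasing-off and lasing-on center-of-mass (energy) vectors."""
    return scale * (com_off - com_on)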

Then Call XRayPower

  • averages the delta and sigma results (not weighted); roughly as in the sketch below
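
Illustrative only (the actual weighting and units are handled inside xtcav):

import numpy as np

def xray_power_sketch(power_delta, power_sigma):
    """Unweighted average of the delta- and sigma-method power profiles."""
    return (np.asarray(power_delta) + np.asarray(power_sigma)) / 2.0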