Introduction

 

XTCAV is a detector used to determine the laser power vs. time of each LCLS shot. Some detailed documentation from Tim Maxwell on this device is here. Alvaro Sanchez-Gonzalez authored the original psana-python code to do the rather complex analysis of images from the XTCAV camera to determine these quantities. The current version has been updated to run more quickly and to fix some analysis errors.

These scripts use some XTCAV data that was made public so they should be runnable by all users. The scripts can be found in /reg/g/psdm/tutorials/examplePython/xtcav/ in the files xtcavDark.py, xtcavLasingOff.py, xtcavLasingOn.py.

Analysis Setup

Two things must be done before XTCAV analysis will function: a "dark run" must be analyzed to get the pedestal values for the camera, and a "no lasing" run must be analyzed to generate sets of "no lasing" images (the latter is quite a complex process). Note that, for demonstration purposes, these first two scripts write constants to a local calibration directory called "calib". For a real experiment you won't need those lines, because you will have permission to write to your official experiment calibration-constants directory.

Sample of dark run analysis:

Code Block
languagepython
# the next two lines are for example purposes only: they allow the user to write
# calibration information to a local directory called "calib".
# They should be deleted for a real analysis.
import psana
psana.setOption('psana.calib-dir','calib')
from xtcav.DarkBackground import *

DarkBackground(experiment='xpptut15',
               run_number='300',          # run number within the experiment
               max_shots=1000,            # maximum number of shots to process
               validity_range=(300,302))  # range of runs for which this dark run should be used

An example of a non-lasing run is in /reg/g/psdm/tutorials/examplePython/xtcav/xtcavLasingOff.py:

Code Block
languagepython
# the next two lines are for example purposes only: they allow the user to write
# calibration information to a local directory called "calib".
# They should be deleted for a real analysis.
import psana
psana.setOption('psana.calib-dir','calib')
from xtcav.GenerateLasingOffReference import *
GLOC=GenerateLasingOffReference()
GLOC.experiment='xpptut15'
GLOC.runs='301'
GLOC.maxshots=1400
GLOC.nb=1
GLOC.islandsplitmethod='scipyLabel'  # see the parameter documentation below for how to set this
GLOC.groupsize=40                    # see the parameter documentation below for how to set this
GLOC.SetValidityRange(300,302)       # delete the second run number argument to make the validity range open-ended ("end")
GLOC.Generate()

This script can be quite slow.  It can easily be run in parallel by submitting an MPI job to the batch system, as described here.  Note that if you analyze multiple runs, you must call GLOC=GenerateLasingOffReference() once per run to get the correct constants.
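Since a fresh GenerateLasingOffReference instance is needed for each run, a per-run loop is the natural pattern. Below is a minimal sketch, assuming the same parameter choices as in the script above; the list of lasing-off runs and the per-run validity ranges are hypothetical and should be replaced with your own.

Code Block
languagepython
# minimal sketch of processing several lasing-off runs in one script;
# the run list below is hypothetical -- substitute your own runs
from xtcav.GenerateLasingOffReference import *

for run in ['301', '303']:                 # hypothetical lasing-off runs
    GLOC=GenerateLasingOffReference()      # fresh instance per run for correct constants
    GLOC.experiment='xpptut15'
    GLOC.runs=run
    GLOC.maxshots=1400
    GLOC.nb=1
    GLOC.islandsplitmethod='scipyLabel'
    GLOC.groupsize=40
    GLOC.SetValidityRange(int(run))        # open-ended validity starting at this run (assumption)
    GLOC.Generate()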

Once the dark/lasing-off analysis has been completed, users can analyze the lasing-on events using a standard psana-python script similar to the one below.

Example Analysis Script

This script assumes that the dark/lasing-off data has been analyzed (see above).  Unlike the previous two scripts, it reads the dark/lasing-off constants from the official calibration directory.  This script can be found in /reg/g/psdm/tutorials/examplePython/xtcav/xtcavLasingOn.py:

Code Block
languagepython
from psana import *
from xtcav.ShotToShotCharacterization import *
ds=DataSource('exp=xpptut15:run=302:smd')
XTCAVRetrieval=ShotToShotCharacterization()
XTCAVRetrieval.SetEnv(ds.env())
import matplotlib.pyplot as plt
ngood = 0
for evt in ds.events():
    if not XTCAVRetrieval.SetCurrentEvent(evt): continue
    time,power,ok=XTCAVRetrieval.XRayPower()  
    if not ok: continue
    agreement,ok=XTCAVRetrieval.ReconstructionAgreement()
    if not ok: continue
    ngood += 1
    # time and power are lists, with each entry corresponding to
    # a bunch number.  The number of bunches is set by the GLOC.nb
    # parameter in the lasing-off analysis.  In general, one should
    # also cut on the "agreement" value, which measures the agreement
    # between the first and second moment analysis (larger is better).
    plt.plot(time[0],power[0])
    plt.xlabel('Time (fs)')
    plt.ylabel('Lasing Power (GW)')
    plt.title('Agreement %4.2f'%agreement)
    plt.show()
    if ngood>1: break

This script runs on one core, but it can be MPI-parallelized in the standard psana-python manner described here.
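As an illustration of that standard pattern, here is a minimal sketch (not from the original page) using the common mpi4py round-robin event distribution; the per-event analysis is the same as in the serial script above, and how you accumulate or gather results across ranks is left to you.

Code Block
languagepython
# minimal sketch of MPI-parallelizing the lasing-on loop; run with e.g.
# "mpirun -n 16 python xtcavLasingOnMPI.py" (the script name is hypothetical)
from mpi4py import MPI
from psana import *
from xtcav.ShotToShotCharacterization import *

rank = MPI.COMM_WORLD.Get_rank()
size = MPI.COMM_WORLD.Get_size()

ds = DataSource('exp=xpptut15:run=302:smd')
XTCAVRetrieval = ShotToShotCharacterization()
XTCAVRetrieval.SetEnv(ds.env())

for nevent, evt in enumerate(ds.events()):
    if nevent % size != rank:          # each rank analyzes every size-th event
        continue
    if not XTCAVRetrieval.SetCurrentEvent(evt):
        continue
    time, power, ok = XTCAVRetrieval.XRayPower()
    if not ok:
        continue
    # ... accumulate per-rank results here, then gather/reduce at the end ...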

One caveat: this data shows "horns" at the beginning and the end of the bunch which confuse the algorithm (the accelerator often behaves in this manner).  Only the middle peak in the plotted spectrum is the relevant lasing peak.

Some tips for lasing-on analysis:

  • Look at the distribution of the "agreement" parameter that is returned by the ReconstructionAgreement() method.  This value represents the "dot product" of the power spectrum from the first-moment analysis of the XTCAV image with the power spectrum from the second-moment analysis of the XTCAV image.  Only believe the data where the agreement is good (in the past >0.5 has been useful for some analyses).
  • Ignore shots where the X-ray intensity is low, by cutting on the FEEGasDetector value to select stronger shots.  A sketch combining both cuts follows this list.
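
The following is a minimal sketch (not from the original page) of applying both cuts inside the lasing-on event loop. The agreement threshold of 0.5 is the value mentioned above; the gas-detector threshold is purely a placeholder, and 'FEEGasDetEnergy'/f_11_ENRC() are assumed to be the usual psana Bld detector name and reading, so check the detector names available in your own experiment.

Code Block
languagepython
# sketch: keep only strong, well-reconstructed shots
from psana import *
from xtcav.ShotToShotCharacterization import *

ds = DataSource('exp=xpptut15:run=302:smd')
XTCAVRetrieval = ShotToShotCharacterization()
XTCAVRetrieval.SetEnv(ds.env())
gdet = Detector('FEEGasDetEnergy')          # assumed Bld gas-detector name

for evt in ds.events():
    g = gdet.get(evt)
    if g is None or g.f_11_ENRC() < 0.5:    # hypothetical mJ threshold: skip weak shots
        continue
    if not XTCAVRetrieval.SetCurrentEvent(evt):
        continue
    agreement, ok = XTCAVRetrieval.ReconstructionAgreement()
    if not ok or agreement < 0.5:           # cut on first/second-moment agreement
        continue
    time, power, ok = XTCAVRetrieval.XRayPower()
    if not ok:
        continue
    # ... analyze time/power for the accepted shots here ...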

Examining The "Ingredients" of the XTCAV Analysis

This is a utility that can help users understand why XTCAV results look the way they do.   Run it like this:

Code Block
 xtcavDisp exp=xpptut15:run=302

It produces plots that look like this:

[Example xtcavDisp output plots]

These plots show the following quantities for both the lasing-on shot being analyzed and the lasing-off shot it is being compared to (which is selected as the one with the closest current profile):

  • Current
  • Energy computed using the first-moment ("Delta") method
  • Energy computed using the second-moment ("Sigma") method
  • Power

Close the existing plot window to show the plots for the next event.  If you want to modify the 105-line "xtcavDisp" python script (e.g. to change which events are displayed), you can find it in /reg/g/psdm/sw/releases/ana-current/xtcav/src/xtcavDisp.

 

How Often to Take a Lasing Off Run

(courtesy of Tim Maxwell)

For very stable accelerator conditions, you don't really need to update more than every hour or two. However, with some special configurations it can drift measurably over as little as twenty minutes. That's typically for cases where we have to leave feedbacks open (two-bucket and some twin-bunch configurations).

There's not really a hard and fast rule here. The practical answer so far: definitely when necessary due to known changes in beam conditions, and otherwise as often as time allows.

Lasing-off Analysis Parameters

(courtesy of Alvaro Sanchez-Gonzalez)

-nb: Number of bunches: typically 1, but sometimes 2 for some LCLS experiments.  Just as a note for the future, if at some point you want to do two-bunch partial reconstruction (i.e. without a lasing-off reference), you will need to specify the number of bunches explicitly in an undocumented way (i.e. by changing an attribute without a get/set method). Example:

XTCAVRetrieval=ShotToShotCharacterization()
XTCAVRetrieval.SetDataSource(dataSource)
XTCAVRetrieval._nb=2

If you do this but a lasing-off reference turns out to be available with a different number of bunches, the number of bunches in the reference file will be used.

-n: number of images (maxshots in the script above): in principle the bigger the better, but around 1400 used to work fine. If you select a bigger number, you will get more references, but it will also take longer to find the right reference shot to shot.

-groupsize: 5 is a good number. Increasing this number will make smoother references, but it may also make them less accurate, as you would be averaging a larger number of profiles. If you decide to increase this number by some factor, then I would also increase n by the same factor, so that in the end the total number of references is the same (see also the thoughts from Tim Maxwell below).

-islandsplitmethod: (default value: 'scipyLabel') Several image-processing algorithms have been created to separate bunches that appear on the same image.  See here for some algorithm details.

  • 'scipyLabel' calls the scipy label function to label contiguous regions.  This is fast, but requires a region of no signal (after de-noising) between the regions.
  • 'autothreshold' tries to separate the islands by automatically finding thresholds that separate them.  This is the recommended mode for multi-bunch operation.
  • 'contourLabel' tries to adjust a threshold until two large groups are found, grouping pixels together using an opencv contour method.  There are two settable parameters ('ratio1'/'ratio2') for the contour method.

-roiexpand and roiwaistthres: To check these, call the method 'ProcessedXTCAVImage' and look at the returned image. If the image does not look clipped, the parameters are fine.

-snrfilter: this one is really sensitive for two bunches, because it sets a threshold based on the noise, and the threshold gets higher as the parameter goes up. This means it is the factor that can separate the two islands in the case of two bunches, so for two bunches it needs some tweaking until the two bunches are separated for most of the shots. Bottom line:

  • For single bunch: as low as possible while still removing the noise (i.e. not getting noisy current profiles).
  • For double bunch: as low as possible while still separating the two bunches. Call the 'ProcessedXTCAVImage' method again and look at the size of axis 0: this is the number of bunches detected. You could write a program to automate the determination of this parameter until you get, say, 95% two-bunch detection; a sketch of such a check follows this list.
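
Below is a minimal, assumption-laden sketch of measuring the two-bunch detection fraction for a given snrfilter setting. It assumes that ProcessedXTCAVImage() follows the same (value, ok) return convention as the other ShotToShotCharacterization methods and that the bunch index runs along axis 0 of the returned array, as described above; check the actual signature in your release before relying on it.

Code Block
languagepython
# sketch: estimate what fraction of shots are detected as two bunches
from psana import *
from xtcav.ShotToShotCharacterization import *

ds = DataSource('exp=xpptut15:run=302:smd')
XTCAVRetrieval = ShotToShotCharacterization()
XTCAVRetrieval.SetEnv(ds.env())

nshots, ntwo = 0, 0
for evt in ds.events():
    if not XTCAVRetrieval.SetCurrentEvent(evt):
        continue
    image, ok = XTCAVRetrieval.ProcessedXTCAVImage()   # assumed (value, ok) convention
    if not ok:
        continue
    nshots += 1
    if image.shape[0] == 2:       # axis 0 is the number of bunches detected
        ntwo += 1
    if nshots >= 500:             # hypothetical sample size
        break

print('two-bunch detection fraction: %.2f' % (float(ntwo)/max(nshots,1)))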

Thoughts from Tim Maxwell on Number of Groups

For various beam conditions (note that Tim describes number-of-groups, but our settable parameter is "groupsize"=number-of-events/number-of-groups):

  • normal SASE (not many beam-related fluctuations): take the square root of the number of shots and use that many groups (beat down background noise with lots of averaging).  Tim suggests perhaps a maximum of 100 groups; a short worked example follows this list.
  • two-bunch slotted foil (many beam-related fluctuations): restrict it to 5-10 images per group and make as many groups as possible, typically using 30 seconds to a minute of data at 60 Hz.
  • split undulator, single bunch, head and tail lase in separate sections of the split undulator: Tim expects this to be somewhat more chaotic than SASE, but not as chaotic as slotted-foil, so some number of groups in between.
  • seeded beam: for any results, I would use the SASE settings. However, note that reconstruction for seeding is a little ambiguous. The beam first seeds, so it is partly spoiled before seeding itself in the second stage. Therefore, with one lasing-off reference, it isn't exactly clear which part lased more for the seeded part. It is similar to the case when one bunch is used to make two pulses with the split undulator. However, if the seeding amplification was strong and intense, this may be enough.
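
As a short worked example (not from the original page), here is how the SASE rule of thumb translates into the settable groupsize parameter, using the 1400 shots from the lasing-off script above.

Code Block
languagepython
# worked example: convert "number of groups" guidance into groupsize
import math

maxshots = 1400                                 # lasing-off shots analyzed (from the script above)
ngroups = min(int(math.sqrt(maxshots)), 100)    # SASE rule of thumb: sqrt(shots), capped at ~100 groups
groupsize = maxshots // ngroups                 # groupsize = number-of-events / number-of-groups
print(ngroups, groupsize)                       # -> 37 groups, groupsize of about 37

This lands close to the groupsize=40 used in the lasing-off example above.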

Detector Resolution

(From Tim Maxwell)

Time resolution is around 1.1 fs RMS for soft x-rays, i.e. about 2.5 fs FWHM (which adds in quadrature, of course). So the actual pulse length is probably 4.3-9.7 fs FWHM.
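
One plausible reading of those numbers (an assumption, not stated on the original page) is that measured FWHM pulse lengths of roughly 5 and 10 fs deconvolve, in quadrature, to about 4.3 and 9.7 fs once the ~2.5 fs FWHM resolution is removed:

Code Block
languagepython
# sketch: quadrature deconvolution of the detector resolution (measured values are assumed)
import math

resolution_fwhm = 2.5                     # fs, detector resolution (FWHM)
for measured in (5.0, 10.0):              # hypothetical measured FWHM pulse lengths, fs
    actual = math.sqrt(measured**2 - resolution_fwhm**2)
    print('%.1f fs measured -> %.1f fs actual' % (measured, actual))
# prints: 5.0 fs measured -> 4.3 fs actual
#         10.0 fs measured -> 9.7 fs actual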

This also doesn't include the "slippage resolution." That is, if they're using the full undulator, then by the end the x-rays can have slipped out of the electron slice by ~3 fs for soft x-rays. Obviously not a small number if trying to make 5 fs pulses. They've been advised not to use the full undulator when shorter pulses are more important than the number of photons.