
Introduction

XTCAV is a detector used to determine the laser power versus time of each LCLS shot.  Alvaro Sanchez-Gonzalez has written psana-python code to perform the rather complex analysis of XTCAV camera images needed to determine these quantities.  Detailed documentation from Tim Maxwell on this device is available here.

These scripts use XTCAV data that was made public, so they should be runnable by all users.  The scripts can be found in /reg/g/psdm/tutorials/examplePython/xtcav/ in the files xtcavDark.py, xtcavLasingOff.py, and xtcavLasingOn.py.  They analyze a minimal number of events so that they run fairly quickly.

Analysis Setup

Two things must be done before XTCAV analysis will function: a "dark run" must be analyzed to obtain the pedestal values for the camera, and a "no lasing" run must be analyzed to generate sets of "no lasing" images (the latter is quite a complex process).  An example of a dark-run analysis is in /reg/g/psdm/tutorials/examplePython/xtcav/xtcavDark.py:

#!/usr/bin/env python

# These two lines are for example purposes only: they let the user write
# calibration information to a local directory called "calib".
# They should be deleted for real analysis.
import psana
psana.setOption('psana.calib-dir','calib')

from xtcav.GenerateDarkBackground import *
GDB = GenerateDarkBackground()
GDB.experiment = 'xpptut15'
GDB.runs = '102'
GDB.maxshots = 10  # small number for this example; people often use ~1000 shots
GDB.SetValidityRange(101, 125)  # delete the second run-number argument to leave the validity range open-ended ("end")
GDB.Generate()
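
To confirm that the dark reference was written, you can list what appeared under the local "calib" directory.  This is only a minimal sketch: the exact subdirectory layout depends on the detector and calibration type, so it simply walks the whole tree.

import os
# List everything the dark-run analysis wrote under the local "calib"
# directory configured above.
for root, dirs, files in os.walk('calib'):
    for name in files:
        print(os.path.join(root, name))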

An example of a non-lasing-run analysis is in /reg/g/psdm/tutorials/examplePython/xtcav/xtcavLasingOff.py:

#!/usr/bin/env python

# These two lines are for example purposes only: they let the user write
# calibration information to a local directory called "calib".
# They should be deleted for real analysis.
import psana
psana.setOption('psana.calib-dir','calib')

from xtcav.GenerateLasingOffReference import *
GLOC = GenerateLasingOffReference()
GLOC.experiment = 'xpptut15'
GLOC.runs = '101'
GLOC.maxshots = 2  # small number for this example; people often use ~1400 shots
GLOC.nb = 2
GLOC.islandsplitmethod = 'scipyLabel'  # see "Lasing-off Analysis Parameters" below for how to set this
GLOC.groupsize = 5                     # see "Lasing-off Analysis Parameters" below for how to set this
GLOC.SetValidityRange(101, 125)  # delete the second run-number argument to leave the validity range open-ended ("end")
GLOC.Generate()

This script can be quite slow.  It can easily be run in parallel by submitting an MPI job to the batch system as described here; if you do, increase the "maxshots" parameter so that each core has at least one shot to process.  People often use ~1400 shots for this.

Once the dark/lasing-off analysis has been completed, users can analyze the lasing-on events using a standard psana-python script similar to the one below.

Example Analysis Script

This script assumes that the dark and lasing-off data have been analyzed (see above).   It can be found in /reg/g/psdm/tutorials/examplePython/xtcav/xtcavLasingOn.py:

import psana

# These lines are for example purposes only: they let the user read
# calibration information from a local directory called "calib".  They also
# set a flag ("allow-corrupt-epics") that allows the old example data to be
# analyzed.  They should be deleted for real analysis.
psana.setOptions({'psana.calib-dir':'calib',
                  'psana.allow-corrupt-epics':True})

from xtcav.ShotToShotCharacterization import *
experiment = 'xpptut15'  # experiment label
runs = '124'             # run number
# Load the dataset for the lasing-on run in indexed ("idx") mode; this way of
# working should be compatible with both xtc and hdf5 files.
dataSource = psana.DataSource("exp=%s:run=%s:idx" % (experiment,runs))
# XTCAV retrieval (setting the data source is useful to get information such as the experiment name)
XTCAVRetrieval = ShotToShotCharacterization()
XTCAVRetrieval.SetEnv(dataSource.env())
for r, run in enumerate(dataSource.runs()):
    times = run.times()
    for t in times:
        evt = run.event(t)
        if not XTCAVRetrieval.SetCurrentEvent(evt):
            continue
        time, power, ok = XTCAVRetrieval.XRayPower()
        agreement, ok = XTCAVRetrieval.ReconstructionAgreement()
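        # Illustrative use of the retrieved quantities (not part of the
        # original tutorial script): 'ok' flags whether retrieval succeeded.
        if ok:
            print('reconstruction agreement: %g' % agreement)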

This script runs on one core, but it can be MPI-parallelized in the standard psana-python manner described here.
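
As a minimal sketch of that approach (assuming mpi4py is available, as it is in the standard psana environment), each rank can process every size-th event round-robin, reusing dataSource and XTCAVRetrieval from the script above:

from mpi4py import MPI
rank = MPI.COMM_WORLD.Get_rank()
size = MPI.COMM_WORLD.Get_size()

for run in dataSource.runs():
    times = run.times()
    for nevent, t in enumerate(times):
        if nevent % size != rank:
            continue  # round-robin split of the indexed events across ranks
        evt = run.event(t)
        if not XTCAVRetrieval.SetCurrentEvent(evt):
            continue
        time, power, ok = XTCAVRetrieval.XRayPower()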

How Often to Take a Lasing Off Run

(courtesy of Tim Maxwell)

That's a very good question. For very stable accelerator conditions, you might not really need to take one more than every hour or two. But, for example, in that AMO experiment it drifted measurably over as little as twenty minutes, because the beam was a tricky setup and some feedbacks needed disabling.

There's not really a hard-and-fast rule here. "When necessary, or when time allows" has been the practical answer so far.

Lasing-off Analysis Parameters

(courtesy of Alvaro Sanchez-Gonzalez)

-nb: number of bunches. This one is obvious. Just as a note for the future: if at some point you want to do a two-bunch partial reconstruction (i.e. without a lasing-off reference), you will need to specify the number of bunches explicitly in an undocumented way (i.e. by changing an attribute that has no get/set method). Example:

XTCAVRetrieval = ShotToShotCharacterization()
XTCAVRetrieval.SetDataSource(dataSource)
XTCAVRetrieval._nb = 2

If you do this but it turns out that a reference is available and its number of bunches is different, the number of bunches in the reference file will take precedence.

-n: number of images. In principle the bigger the better, but around 1400 has worked fine. If you select a bigger number you will get more references, but it will also take longer to find the right reference shot-to-shot.

-groupsize: 5 is a good number. Increasing it will produce smoother references, but they may also be less accurate, since you would be averaging a larger number of profiles. If you decide to increase this number by some factor, I would also increase n by the same factor, so that the total number of references stays the same (see also the thoughts from Tim Maxwell below).

-islandsplitmethod: several image-processing algorithms are available to separate bunches that appear in the same image. Each algorithm is advantageous under certain conditions, so it may be worth trying all of them to see which achieves the best results.  The default is 'scipyLabel'; if you do not set islandsplitmethod, that is what is used. To change from the default, use, e.g., GLOC.islandsplitmethod = 'contourLabel'.  'scipyLabel' calls the scipy label function to label contiguous regions. 'contourLabel' tries to adjust a threshold until two large groups are found, grouping pixels together using an OpenCV contour method; two parameters ('ratio1'/'ratio2') are settable for the contour method.

-roiexpand and roiwaistthres: to check these, call the method 'ProcessedXTCAVImage' and inspect the result (see the sketch below). If the image does not look clipped, the parameters are fine.
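
A minimal sketch of that check, reusing XTCAVRetrieval from the lasing-on script above. The return signature of ProcessedXTCAVImage assumed here (an image stack with one slice per detected bunch, plus a status flag) should be verified against the xtcav source:

import matplotlib.pyplot as plt
image, ok = XTCAVRetrieval.ProcessedXTCAVImage()  # assumed signature
if ok:
    plt.imshow(image[0])  # first (or only) bunch; look for clipping at the edges
    plt.show()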

-snrfilter: this one is really sensitive for two bunches, because it sets a noise-based threshold that rises as the parameter goes up. This means it is the factor that can separate the two islands in the two-bunch case, so for two bunches it needs some tweaking until the bunches are separated for most of the shots. Bottom line:

  • For a single bunch: as low as possible while still removing the noise (i.e. not getting noisy current profiles).
  • For a double bunch: as low as possible while still separating the two bunches. Call the 'ProcessedXTCAVImage' method again and look at the size of axis 0: this is the number of bunches detected. You can probably write a program to automate the determination of this parameter (see the sketch below) until you get, say, 95% two-bunch detection.
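
The snrfilter tuning described above can be automated along these lines. This sketch reuses run, times, and XTCAVRetrieval from the lasing-on script, and again assumes the ProcessedXTCAVImage signature described above:

ngood = 0
ntwobunch = 0
for t in times:
    evt = run.event(t)
    if not XTCAVRetrieval.SetCurrentEvent(evt):
        continue
    image, ok = XTCAVRetrieval.ProcessedXTCAVImage()  # assumed signature
    if not ok:
        continue
    ngood += 1
    if image.shape[0] == 2:  # axis 0 is the number of detected bunches
        ntwobunch += 1
if ngood:
    print('two-bunch detection fraction: %.2f' % (float(ntwobunch) / ngood))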

Thoughts from Tim Maxwell on Number of Groups

Three cases:

  • Normal SASE (not many beam-related fluctuations): take the square root of the number of shots and use that many groups (beat down background noise with lots of averaging).   Tim suggests perhaps a maximum of 100 groups. A worked example follows this list.
  • Slotted foil (many beam-related fluctuations): restrict it to 5-10 images per group and make as many groups as possible, typically using 30 seconds to a minute of data at 60 Hz.
  • Seeded beam: use the SASE settings. Note, however, that reconstruction for seeding is a little ambiguous: the beam first seeds, so it is partly spoiled before seeding itself in the second stage. Therefore, with a single lasing-off reference, it isn't clear exactly which part lased more for the seeded portion. It is similar to the case where one bunch is used to make two pulses with the split undulator. However, if the seeding amplification is strong and intense, this may be enough.
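
As a worked example of the square-root rule, using the ~1400-shot figure quoted earlier:

import math
nshots = 1400                               # typical lasing-off shot count
ngroups = min(int(math.sqrt(nshots)), 100)  # sqrt rule, capped at ~100 groups
groupsize = nshots // ngroups
print(ngroups, groupsize)                   # -> 37 groups of ~37 shots each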

120Hz Operation Issues

(Thoughts from Tim Maxwell on 2/2/2015, discouraging this mode of operation)

This was an option; however, it restricts the vertical energy ROI. For soft x-rays it would easily and frequently clip the image of the beam, as well as require very careful ROI management, and so random images in your data may have been compromised.

Another, largely untested, option is a set of running conditions found recently. These appear to delay the tagging of all images by one event (3 fiducials). However, we are absolutely not certain this behavior is consistent and therefore don't recommend it for critical data.

Detector Resolution

(From Tim Maxwell)

Time resolution is around 1.1 fs RMS for soft x-rays, i.e. 2.5 fs FWHM (which adds in quadrature, of course). So the actual pulse length is probably 4.3 - 9.7 fs FWHM.
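
To illustrate the quadrature point, here is a small example; the 5 fs measured width below is a placeholder, not a number from the text:

import math
res_fwhm = 2.5        # fs, detector resolution quoted above
measured_fwhm = 5.0   # fs, hypothetical measured pulse width
true_fwhm = math.sqrt(measured_fwhm**2 - res_fwhm**2)
print('%.1f fs' % true_fwhm)  # -> 4.3 fs, the lower end of the range above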

This also doesn't include the "slippage resolution": if the full undulator is used, then by the end the x-rays can have slipped out of the electron slice by ~3 fs for soft x-rays. That is obviously not a small number if you are trying to make 5 fs pulses. Users have been advised not to use the full undulator when shorter pulses are more important than the number of photons.
