...

* Ti-Yen
  * halo close to the beam center makes hit finding difficult for SPI
  * converting ADU to photons is sufficient for SPI (see the sketch below)
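
A minimal sketch of such an ADU-to-photon conversion, assuming a single known gain value and a simple threshold-and-round rule; the function name, gain handling, and threshold are illustrative, not a specific detector calibration:

    import numpy as np

    def adu_to_photons(frame, adu_per_photon, zero_threshold=0.7):
        """Convert a pedestal- and common-mode-corrected frame from ADU to photon counts.

        frame          : 2D numpy array of ADU values
        adu_per_photon : single-photon gain in ADU (assumed known from calibration)
        zero_threshold : fraction of a photon below which a pixel is zeroed
        """
        photons = frame.astype(np.float64) / adu_per_photon
        photons[photons < zero_threshold] = 0.0
        # Rounding to integers is where conversion errors creep in:
        # noisy pixels near half-integer values get assigned the wrong count.
        return np.rint(photons).astype(np.int32)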

* Aaron Brewster
  * reprocessing because of an unknown crystal unit cell
  * the unit cell can drift during the experiment, depending on sample preparation

* Peter Zwart
  * hit rate can be quite high for imaging, up to 80%
  * clustering algorithm for hit finding
  * stable beam and sample delivery required
  * difficulties in converting ADU to photons for pnCCD, rounding errors
  * check photon conversion on simulated data to understand errors in the conversion (see the sketch below)
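
One way to check the photon conversion on simulated data, as suggested above: generate frames with known photon counts, add Gaussian readout noise, convert back, and count misassigned pixels. The gain, noise level, and frame size below are made-up numbers for illustration only:

    import numpy as np

    rng = np.random.default_rng(0)
    adu_per_photon = 130.0   # assumed single-photon gain in ADU
    readout_noise = 30.0     # assumed rms readout noise in ADU

    # Simulate a frame with known photon counts per pixel.
    true_photons = rng.poisson(lam=0.2, size=(512, 512))
    frame = true_photons * adu_per_photon + rng.normal(0.0, readout_noise, size=true_photons.shape)

    # Convert back with a simple threshold-and-round rule.
    recovered = np.rint(np.clip(frame / adu_per_photon, 0, None)).astype(np.int32)

    errors = np.count_nonzero(recovered != true_photons)
    print(f"misassigned pixels: {errors} / {true_photons.size} "
          f"({100.0 * errors / true_photons.size:.2f}%)")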

 

Extra description from Anton Barty:

 

The question was: when do you go back over all the data?
1) To fix an error or artefact in the detector for which there was no ready-made correction prior to beamtime (or we did not know that error existed). 
Examples may be: CSPAD common mode and kickback corrections, pnCCD timing distorting the geometry, gain/intensity nonlinearities, timing tool edge finding needing careful attention.
Corollary: For real-time analysis to work, detector output needs to be 100% reliable.
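
As an illustration of the kind of correction meant in (1), a deliberately simplified per-row common-mode subtraction; real CSPAD/pnCCD corrections are considerably more involved, and the signal threshold here is an arbitrary placeholder:

    import numpy as np

    def subtract_common_mode(frame, signal_threshold=100.0):
        """Per-row common-mode subtraction on a pedestal-corrected frame (ADU).

        For each row, estimate the common-mode offset as the median of pixels
        below signal_threshold (assumed to carry no photon signal) and
        subtract it from the whole row.
        """
        corrected = frame.astype(np.float64)
        for i, row in enumerate(corrected):
            dark_pixels = row[row < signal_threshold]
            if dark_pixels.size:              # skip rows that are entirely signal
                corrected[i] -= np.median(dark_pixels)
        return corrected
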
2) Where there are parameters to tweak in the analysis, no doubt they will want to be tweaked.  This is particularly the case when there is unexpected signal, or no signal at all.  No signal is hard because we have to convince ourselves that the analysis algorithms are not throwing out useful data. 
Nadia: Going back over the data to get an extra 10% can improve data enough to get a result, as opposed to no result. 
Corollary: Algorithms should not rely on adjustable parameters such as thresholds.  If it’s adjustable you will want to see the effect of adjusting it, which means going back over the data.  
Tom: An adjustable parameter you get one shot at setting is no longer an adjustable parameter. 
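
A sketch of why adjustable thresholds force going back over the data, and one way to soften that: if a cheap per-event score (here, a hypothetical lit-pixel count) is stored during the first pass, the effect of different hit thresholds can be re-evaluated later without touching the raw frames. All numbers below are fabricated for illustration:

    import numpy as np

    def lit_pixel_count(photon_frame, photon_threshold=1):
        """Per-event score: number of pixels with at least photon_threshold photons."""
        return int(np.count_nonzero(photon_frame >= photon_threshold))

    def hit_rate_vs_threshold(scores, thresholds):
        """Re-evaluate the hit rate for several candidate hit thresholds
        using only the saved per-event scores (no raw frames needed)."""
        scores = np.asarray(scores)
        return {t: float(np.mean(scores >= t)) for t in thresholds}

    # Fabricated scores: mostly background events plus a population of hits.
    rng = np.random.default_rng(1)
    scores = np.concatenate([rng.poisson(20, 9000), rng.poisson(500, 1000)])
    print(hit_rate_vs_threshold(scores, thresholds=[50, 100, 200, 400]))
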
3) Unexpected features in the data:  including unexpected regions of interest or regions of integration, bad regions, stray reflections, integration directions, calibrations.
For example: shadows on the detector,  stray light sneaking past or through apertures, unexpected parasitic scatter. 
Corollary: Instant feedback is essential so the user can perfect these regions in real time.  Expect to use some beamtime and sample to get this right. 
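
A sketch of the region handling implied by (3): a boolean mask that can be rebuilt during the beamtime as shadows, bad regions, or parasitic scatter are discovered, and applied before any scoring or integration. The mask geometry below is a placeholder:

    import numpy as np

    def build_mask(shape, bad_rectangles=(), bad_pixels=()):
        """Return a boolean mask (True = use pixel) for a detector of the given shape.

        bad_rectangles : iterable of (row_min, row_max, col_min, col_max) to exclude,
                         e.g. a shadow from an upstream aperture
        bad_pixels     : iterable of (row, col) single pixels to exclude
        """
        mask = np.ones(shape, dtype=bool)
        for r0, r1, c0, c1 in bad_rectangles:
            mask[r0:r1, c0:c1] = False
        for r, c in bad_pixels:
            mask[r, c] = False
        return mask

    # The mask can be rebuilt on the fly as unexpected features are discovered.
    mask = build_mask((512, 512), bad_rectangles=[(0, 40, 0, 512)], bad_pixels=[(100, 200)])
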
4) Experimental SNAFUs. For example, the primary sorting diagnostic not working and needing to fall back to a secondary diagnostic. 
Example: Event code not recorded, have to look at an Acqiris trace or a CCD camera to determine whether the pump laser was on or off. 
Corollary: Once again, instant feedback is essential so the user can perfect these regions in real time. Expect to use some beamtime and sample to get this right. Someone must be there to be able to re-program this in real time. Software setup is as important as sample delivery and beamline expertise. 
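
A sketch of the secondary-diagnostic fallback described in (4): deciding pump-laser on/off from a digitizer trace when the event code is missing, by integrating the trace in a window and thresholding. The window and threshold values are placeholders that would have to be set by inspecting real traces:

    import numpy as np

    def laser_on_from_trace(trace, window=(1000, 1200), threshold=50.0):
        """Decide pump-laser on/off from a diode trace recorded on a digitizer.

        trace     : 1D array of digitizer samples for one event
        window    : sample range expected to contain the laser diode signal
        threshold : integrated signal above which the event counts as 'laser on'
        """
        baseline = np.median(trace[: window[0]])            # pre-signal baseline
        integral = np.sum(trace[window[0]:window[1]] - baseline)
        return integral > threshold
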
One can make the following observations: 
- If there is an adjustable parameter, users will want to see the effect of adjusting it. Move towards reliable algorithms that do not have user-adjustable settings; then there is nothing to tweak. 
- Setting up the software (e.g.: thresholds, calibration) becomes as critical a step as aligning the beam, moving apertures and mirrors, perfecting sample delivery.  
- Actual beamtime and sample need to be budgeted for setup of the online analysis with real sample. Real-time analysis becomes a part of the instrument, not a step performed afterwards.  
- Fast feedback so that regions can be adjusted in real time is essential.  You can’t analyse blind. 
- Accept that some beamtime may be lost due to real-time analysis problems, just as some beamtime is lost due to sample delivery or vacuum issues. This is the analysis equivalent of ‘hutch door open’. 
- All analysis must be monitored and reprogrammed in real time. LCLS will have to understand a lot more about each experiment to be able to provide the necessary support in real time, at all hours. ‘Record it and figure it out later’ is no longer possible.

Feb. 22, 2017

Link to slides: 

...