
  • DAQ interpolation.  Ugly DAQ C++, may want to change later.  Works for AMI
    • "freezes" the interpolation to what was done at DAQ time, but doesn't preclude someone going back and doing a better analysis later
    • what if what we think is noise is actually real vibration of the mono grating?  In that case interpolation loses important information.  One would test this by doing the analysis with both the raw data and the "smoothed" data to see which yields better physics resolution
  • only send events with measurements to AMI?  Requires a ric-style python script to select AMI events
    • limits statistics in AMI; in principle could get 5kHz.
    • in future could avoid limited-statistics issue by going to 1MHz relative encoder
    • cpo worry: we already need such a script for RIX to send the andor events to AMI.  two issues:
      • that script becomes more complex trying to do two things
      • cpo thinks: if the (rare) andor events don't have an encoder value it will make the analysis (or AMI display) tricky; would need to have "trigger overlap"
  • (doesn't work for real-time analysis like ami) psana interpolation in-memory
    • on the SMD0 core (would dramatically complicate our most complicated psana event-builder code)
    • psana "broadcasts" all the encoder data to all the cores.  Quite messy, and would affect performance unless a SlowUpdate broadcast at 10Hz were feasible
  • (doesn't work for real-time analysis like ami) psana pre-processing interpolation written to a new xtc2 file
    • could also analyze the shape above and write out very little information (10 numbers?): regions 1 (flat), 2 (sloped), 3 (flat), 4 (sloped), 5 (flat)
  • (doesn't work because of chaos caused by batching, deadtime, arbitrary number of cores, load-balancing) DAQ repeats without interpolation.
    • Ideally xtc2 would be able to say "3 (repeats 5 times), 7 (repeats 5 times)", but xtc2 can't express this easily.
    • (preferred) another version would be 3(deltatime=0),3(+deltatime),3(+deltatime),3,3,7,7,7,7,7,...  Each core could do its own interpolation.  Provides flexibility and scalability, and the extra data volume isn't too bad.  Will make user code more complex (they will have to "buffer")
    • Chris Ford's thought on this "debinning":  Encoder Debinning
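The preferred "repeats with deltatime" option lets each core recover the interpolation on its own: a run of repeated encoder values, each tagged with the time since its real reading, can be linearly interpolated once the next real reading arrives. A minimal sketch of that per-core step, assuming a hypothetical event record (these are illustrative names, not the real xtc2 interfaces):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical record for the "3(deltatime=0), 3(+deltatime), ..." stream:
// every event carries the last real encoder value plus the time elapsed
// since that real reading (deltaT == 0 marks the reading itself).
struct EncEvent {
    double ts;       // event timestamp
    double encoder;  // last real encoder value, repeated on every event
    double deltaT;   // ts minus the timestamp of that real reading
};

// Per-core linear interpolation: each repeated value is replaced by a blend
// of the reading it repeats and the next real reading.  This is the user-side
// "buffering" the note warns about: repeats after the last real reading in
// the batch cannot be interpolated yet, and here simply keep their value.
std::vector<double> interpolate(const std::vector<EncEvent>& evts) {
    std::vector<double> out;
    out.reserve(evts.size());
    for (std::size_t i = 0; i < evts.size(); ++i) {
        const EncEvent& e = evts[i];
        std::size_t j = i + 1;                   // scan for the next real reading
        while (j < evts.size() && evts[j].deltaT != 0.0) ++j;
        if (e.deltaT == 0.0 || j == evts.size()) {
            out.push_back(e.encoder);            // real reading, or un-bracketed tail
        } else {
            double t0 = e.ts - e.deltaT;         // time of the repeated reading
            double frac = e.deltaT / (evts[j].ts - t0);
            out.push_back(e.encoder + frac * (evts[j].encoder - e.encoder));
        }
    }
    return out;
}
```

On the toy data of the next section (readings 5, 7, 9, 11 at ts = 1, 3, 5, 7) this fills in 6, 8, 10 at the even timestamps.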

Toy Example of DAQ Interpolation Option


need matrix inversions (linear regression for a polynomial fit), perhaps using the Eigen library?  Example timestamps/values:

Code Block
ts:   1  2  3  4  5  6  7
val:  5     7     9    11


multithreaded option

core 0 is handed event ts=2 (fit ts=[1,3], vals=[5,7] to get answer 6)
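For systems this small, the normal equations plus Gaussian elimination are enough to check the idea without pulling in Eigen (Eigen's QR solvers would be the robust production choice). A sketch of the fit step, with illustrative names; the fit of vals=[5,7] at ts=[1,3] evaluated at ts=2 gives 6:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Least-squares polynomial fit via the normal equations (A^T A x = A^T b),
// solved by Gaussian elimination with partial pivoting.  A stand-in for the
// Eigen call the note asks about; fine for order <= 3 and a handful of points.
std::vector<double> fitPoly(const std::vector<double>& ts,
                            const std::vector<double>& vals, int order) {
    int n = order + 1;
    // Augmented system [A^T A | A^T b], accumulated from powers of each ts.
    std::vector<std::vector<double>> M(n, std::vector<double>(n + 1, 0.0));
    for (std::size_t k = 0; k < ts.size(); ++k) {
        std::vector<double> pw(2 * n - 1, 1.0);
        for (int p = 1; p < 2 * n - 1; ++p) pw[p] = pw[p - 1] * ts[k];
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) M[i][j] += pw[i + j];
            M[i][n] += pw[i] * vals[k];
        }
    }
    // Gauss-Jordan elimination with partial pivoting.
    for (int col = 0; col < n; ++col) {
        int piv = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        std::swap(M[col], M[piv]);
        for (int r = 0; r < n; ++r) {
            if (r == col) continue;
            double f = M[r][col] / M[col][col];
            for (int c = col; c <= n; ++c) M[r][c] -= f * M[col][c];
        }
    }
    std::vector<double> coef(n);
    for (int i = 0; i < n; ++i) coef[i] = M[i][n] / M[i][i];
    return coef;  // coef[i] multiplies t^i
}
```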

worry about:
1) when can we free the memory at the beginning of the circular buffer?
2) core 0 needs to wait for ts=3,val=7 data to show up before doing the fit

two possible drp versions:

  • high-rate drp hands out different events to different cores (e.g. 60)
  • simple drp code that is more "single-threaded"

which version is the encoder code?  copied from the PVA detector, perhaps (75% confident)?

single threaded option

applies if the encoder drp is the "simple" version above.

why single thread should be OK:

  • fits only need to be done at the low "real encoder value" rate (100Hz-5kHz), so multi-threading isn't necessary.
  • polynomial (quadratic or 3rd-order?) calculations a+bx+cx**2+dx**3 (plus we have to compute the time "x") have to be done at 1MHz.  Hopefully multithreading can be avoided here as well, at least up to 100kHz
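The 1MHz evaluation is cheap if done with Horner's rule: one multiply and one add per polynomial order per event. A sketch; the generic form handles whatever order the #define (or command-line option) picks:

```cpp
#include <cstddef>
#include <vector>

// Evaluate a + b*x + c*x^2 + d*x^3 via Horner's rule: three multiplies and
// three adds per event.  Whether the full 1MHz loop (including computing the
// time "x") keeps up on one thread is what the standalone test should measure.
inline double evalCubic(double a, double b, double c, double d, double x) {
    return a + x * (b + x * (c + x * d));
}

// Generic order: coef[i] multiplies x^i, so the same code works for a
// quadratic or 3rd-order fit chosen at compile time or on the command line.
inline double evalPoly(const std::vector<double>& coef, double x) {
    double acc = 0.0;
    for (std::size_t i = coef.size(); i-- > 0; )
        acc = acc * x + coef[i];
    return acc;
}
```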

main thread algorithm:

  • wait for ts=3
  • launch a fit subthread (or maybe don't) with ts=[1,3] to calculate the result for ts=2; this addresses worry (2) above
  • main thread watches for completion of the fit subthread and knows when to "delete" ts=1 (addresses worry (1) above)
  • main thread hands out ts=[3,5] for the next fit
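The steps above can be sketched single-threaded, doing a linear fit inline rather than in a subthread; `Event` and the interpolation are illustrative stand-ins for the real drp types and the Eigen regression:

```cpp
#include <cassert>
#include <deque>
#include <optional>
#include <utility>
#include <vector>

// Illustrative event: every event has a timestamp; only some carry a real
// encoder reading (the 100Hz-5kHz readings among 1MHz events).
struct Event {
    double ts;
    std::optional<double> val;
};

// Single-threaded version of the main-thread algorithm: buffer events until
// the bracketing reading arrives (worry 2), interpolate, and free buffered
// state as soon as it has been consumed (worry 1).
class Interpolator {
    std::deque<Event> pending_;         // events waiting for a bracketing reading
    std::optional<Event> lastReading_;  // e.g. ts=1 while waiting for ts=3
public:
    // Feed one event; returns any (ts, value) results that became computable.
    std::vector<std::pair<double, double>> feed(const Event& e) {
        std::vector<std::pair<double, double>> out;
        if (!e.val) {
            pending_.push_back(e);      // can't interpolate yet: buffer it
            return out;
        }
        if (lastReading_) {
            const Event& lo = *lastReading_;
            while (!pending_.empty()) { // linear fit over [lo.ts, e.ts]
                const Event& p = pending_.front();
                double frac = (p.ts - lo.ts) / (e.ts - lo.ts);
                out.emplace_back(p.ts, *lo.val + frac * (*e.val - *lo.val));
                pending_.pop_front();   // worry (1): safe to free this event now
            }
        }
        out.emplace_back(e.ts, *e.val); // real readings pass through unchanged
        lastReading_ = e;               // next window starts at [e.ts, ...]
        return out;
    }
};
```

Feeding ts=1 (val 5), ts=2 (no val), ts=3 (val 7) emits the interpolated 6 for ts=2 as soon as ts=3 arrives, and drops the ts=2 buffer entry in the same step.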

starting plan:

  • standalone C++ code first, to test 5kHz fits with Eigen and 100kHz polynomial calculations
  • order of polynomial would be a "#define" or a command-line option
  • another #define would be the number of points to include in each linear regression