Jan. 22, 2021

with Alex R., Alex W., Zach L., Jana T., Chris F., Matt W., Silke N., Chris O.

Discussion

georgi: 100eV photon scan range over 100 seconds, i.e. 1eV/s; a few microeV change per shot at 1MHz. need precision of ~100 microeV? need to take out shot-to-shot intensity (normalization)

Alex R. says the encoder has 100 microeV resolution; scans even faster than 1eV/s may be required.

Corresponds to 10kHz measurements (1eV/s scan rate divided by 100 microeV resolution).

silke says that the old lcls1 readout can't be reused. the old usdusb one was a relative encoder (quadrature pulses); the new one is an absolute encoder.

alex says: new encoder can only read out at 10kHz, maybe 20kHz.

zach says: add a second encoder? relative quadrature encoders faster, maybe a MHz?

silke says: "usb4" relative encoder reader can go up to 5MHz, but maybe not enough space in the vacuum environment. alex found another renishaw that goes up to 15MHz.

zach asks: why not put it on the external shaft? georgi replies not straightforward to use that as the position of the grating. zach/Alex say it could be calibrated

cpo says: interpolate? georgi says the turnaround point would break interpolation. cpo says: as long as the mechanical motor turnaround time is either fast or slow on the timescale of the 10kHz sampling, it should work; in particular, the turnaround time feels likely to be much slower than the 10kHz sampling period.

alex w.: could add time-staggered 10kHz encoders to artificially increase rate

georgi says: need to understand the upper limits. interpolation may be too short-sighted. fast encoder is best.

alex wallace renishaw: https://www.renishaw.com/en/tonic-uhv-incremental-encoder-system-with-rslm20-linear-scale--11155

ankush encoder document: https://docs.google.com/document/d/1Ho1IpT4903wkUoJMV06z5hou9to5CfAgtJBDPdBl6I4/edit?ts=600b806d

      → (Silke) this lists the USB version of the encoder; we originally used the PCI version of it. We now have a box as Ankush laid out, except that it 'splits' the signal, so it has in & out for each encoder channel. This device is listed towards the end of Supported Control & DAQ Devices

Matt says we may have an existing board that could do the readout/timestamping

Summary of Options/Information

  • 10kHz with interpolation?
  • multiple encoders?
    • combine absolute/relative (may have complexity issues)
    • 2 staggered slower encoders
  • need fpga timestamping
  • 100 microeV resolution requirement

  • Option A: absolute encoder used for relevant stage
    • max rate: 10-20(?) kHz
    • work to be done: controls: rework PLC to send data (current solution likely ~1kHz); DAQ: likely FPGA timestamping
  • Option B: absolute encoder used for relevant stage, interleaved with a second absolute encoder
    • max rate: 20-40 kHz
    • work to be done: as above
    • drawbacks/other considerations: need to ensure both encoders read back equivalent data; intersperse jitter from both?; same latency
  • Option C: add a relative quadrature encoder on the shaft
    • max rate: 5 or more MHz
    • work to be done: DAQ: need board to read signal & timestamp
    • drawbacks/other considerations: complicated and possibly not very reproducible mapping of measured & relevant motion
  • Option D: add a relative quadrature encoder in vacuum
    • max rate: 5 or more MHz
    • work to be done: DAQ: need board to read signal & timestamp; engineering: figure out how to add the relative encoder


To do

  • fill out the table options and costs (alex and silke)
  • talk to mechanical engineers (Georgi)
  • meet in two weeks

Feb. 5, 2021

Alex R., Georgi D., Chris F., Jana T., Matt W., Zach L., Joe R., Chris O.

Delay-stage scan discussion:

  • It was decided that for now RIX will not do on-the-fly delay stage scans, instead using standard DAQ step scans of an electronic delay (which incur some dead time when the delay is changed)
  • Georgi reserves the right to revisit this decision in the future

MHz encoder discussion:

  • Feels like option D (see above) is doable according to conversation between Georgi and Danny Morton
    • they meet with Axilon (the manufacturer of the mono assembly) next week to verify
  • Use this as our current default solution, but can still change (final decision in weeks)
  • Matt W. and Ryan H. think the camlink converter box could be used for readout
    • quadrature encoder "frame format" is understood and relatively simple (absolute encoders are more difficult)
    • quadrature encoder readout can be triggered
    • might need either a new camlink converter board with new connectors or some "connector transition"
  • currently no use case for high rate absolute encoders

120Hz Encoder Plans

- plc is in the fee alcove and the encoder itself is in the fee

- (decision) we put a dedicated daq evr in the controls plc node or controls evr node or drp/ctl node?
- evr gets driven by xpm fiber so trigger stops on endrun
- ttl cable to plc
- both evr and plc use BNC connectors 
- drp executable joins udp packet to timestamp from kcu (reuses TDetSim firmware).  It is new UdpEncoder, which inherits from XpmDetector (Detector Class Diagram)
- (decision) question: do we have udp direct connection or go through the router?  router is easier, but could drop the first packet and be off-by-one (can’t detect off by one with dropped shots). direct? cpo feels we should avoid the router.
- “direct” may mean a switch where we control the traffic?  (ATCA (8 sfp, 8 rj45) or top-of-rack with a new vlan?) could use it for xtcav as well.
- ready "early march"
- clear readout means: kcu clears its fifos, drp clears out UDP packets on configure/endrun
- ric idea: IF we could send out 1 trigger on beginrun then we could use that as the starting point for future frame counters to detect a missing first packet.  not clear how to send that 1 trigger.

Mail message from Alex:

SXR servers are in R06. There’s likely space in there.
https://docs.google.com/spreadsheets/d/1woH6NHS4IrOvr2zDVZD4R8x73moTXAps1PBqADUauC4/edit#gid=1690903792
 
Our, or at least my understanding was that we can use a trigger output from any nearby host with an EVR. To be specific:
-          PLC: B940-009-R07
-          EVR: B940-009-R06 (whatever host we have running the exit slit camera)
 
I assumed the point/beauty of the EVRs is to distribute these trigger sources so it doesn’t have to originate from any specific node. We were going to include a counter in the payload so we could address any off-by-one issues. I guess it’s more complicated!
 
Sorry, Chris, I didn’t realize this thread had taken up residence elsewhere otherwise I would have responded sooner. I try to keep an eye on the pcds-it list but it’s kind of a blindspot for me.
 
The question I have at this point is, where does the Ethernet/<whatever medium we end up choosing> have to go to deliver the UDP packet?

MHz Conversation with Georgi and Zach (May 4, 2022)

A Jira ticket about this: LCLSPC-501

May 4, 2022

  • the absolute encoder will continue to be used
  • perhaps add a second high-rate (maybe even MHz?) relative encoder
  • in november either we have to increase the current absolute encoder rate, or do the "interpolation".  beam starts at a few kHz.
  • feels like a good chance we need interpolation mode since the MHz encoder does not yet exist
  • existing absolute encoder may trigger at 500Hz or 1kHz, but no more
  • existing absolute encoder has estimates for velocity, acceleration, jerk, maybe more? zach can include these in the data stream fairly easily so we can see how well it does
  • for psana we will need to have an encoder readout trigger the "start" of a batch (for parallelization), similar to, but different from, integrating detectors where data triggers the "end" of a batch.
    • is there a batching conflict between integrating detectors (e.g. andor) and an absolute encoder?  may need to trigger the encoder at a higher harmonic of the andor?  but maybe still a conflict because one marks the end of the batch while the other marks the beginning?

MHz Discussion (June 10, 2022)

June 10, 2022 dan morton, larry ruckman, matt weaver, nick waters, zach lentz, cpo, chris ford, mona

  • dan has been interfacing with axilon (cologne, germany)
  • stick with renishaw
  • axilon designs brackets and cable routing and provides the renishaw
  • digitized signal comes out
  • lead times long
  • encoder is in the FEE (just east of entrance on massive granite)
  • need Larry/Matt/TID to do the custom timestamping: pcie with FMC and SFP for timing system
  • what are the cable-length constraints?  maybe do a wave8-style box or a dev board (only one of these at the moment?)?  prefer dev board at the moment. comes out as PGP to KCU1500 in the DRP.
  • do we need a second controls interface to this box?
    • requires an "IOC" (more work for controls group)
    • may make it easier for the scientists to use
  • need to be able to reset the counter to zero (to get absolute-encoder style behavior)
  • dan says MHz is overkill, but psana needs MHz.
  • may need errors sent back to DAQ
  • need to decide on a table-top model for TID prototype. looking at page 14 of the manual (and page 7, which shows max speed vs. clock and resolution)
    • Interpolation factor: RIX would like 1nm (20KD).  may decrease range?
    • Alarm format and conditions: A (line driven E output; all alarms)
    • Clocked output option: 50MHz
    • Options: A (This is what axilon suggested, Larry thinks he can adapt)
  • cable going into Larry's box will be as on page 5 of the manual
  • cable lengths on page 9 look good.
  • diagram on page 9 of the manual shows the analog signal getting converted to digital by TONIC interface.  Larry needs to supply 5V.
  • Dan will purchase encoder and second one for Larry prototyping.
  • schedule (need to ask rix scientists). mechanical install in January 2023.
  • account numbers: ask rix scientists
  • ask rix scientists: if we are going to do this, do we still need to "extrapolate" with the existing absolute encoder?

From axilon:

“…the encoder intended to be used is:
-             Read head: T2601-15M or T1611-15M (this depends on availability, the arc can be resolved with a linear and a rotary encoder head, it is actually more a matter of how the scale is mounted and that will be done on an arc)
-             Interface: Ti20KDA06A, that is 20,000x interpolation to get the highest resolution. Note, on the interface one has to check with controls what options they may want to see, for that see below link. What is of importance are the last four letters (i.e. currently the …A06A, this is our usual default):
o            First A: this is for alarm outputs. I have to admit, I have no real clue on that, this needs to come from controls
o            06 is the max output frequency. 6 MHz in this case, often controllers have a limit of what they can take as maximum. The 6MHz is for vibrations check more than sufficient
o            Second A: some options regarding what kind of signals are output, also here this is a controls related question. https://www.renishaw.com/media/pdf/en/257d836362cd4e8189908521185ff4bd.pdf

Second High-Rate Encoder Meeting (Nov. 28, 2022)

(with Ruckman, Herbst, Lentz, Dakovski, Wallace, cpo)

  • Axilon will install in FEE in December
  • Georgi wants it in DAQ by end of 2023
  • Quadrature output, oversample counter at 186MHz, sample that at 1MHz
  • front-end board (not a computer) KCU105 receiving (like wave8)
  • need to order now (16-week lead time)
  • should ask hard x-ray hutches if they want any of these
  • does TID need a test encoder?  they think it's ok to use an oscilloscope to capture a few ticks of the encoder (May Ling wants to do this anyhow to test connections)
  • this encoder will be in the FEE, so not always accessible
  • will absolute encoder plus interpolation be as good as the high rate relative encoder?  answer: we will have to look at the photon "energy resolution".
  • Can the absolute encoder go to higher rates?  Zach writes:

    "With an alternate hardware configuration, the absolute encoder can run faster than we currently run it, but it cannot ever get close to 1MHz.

    So, if the goal is to measure the position at full beam rate, we definitely need the alternate encoder hardware as planned.

    The BiSS-C protocol allows a maximum clock rate of 10MHz, and this is theoretically the maximum data rate in bits. The full position is sent with every BiSS-C cycle. Therefore, we'd need to allocate less than 10 bits per position to have a chance at hitting 1MHz. In practice there are >40 bits per position including status bits, error bits, and checksums. There is also some additional mandatory dead time. There are some documented tests on the internet of running these sorts of encoders up to 33kHz or so, but nothing near the 1MHz mark."
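
A quick back-of-the-envelope check of Zach's numbers (a sketch only: the per-frame dead time below is an assumed value, not taken from the data sheet):

    // Rough bound on the BiSS-C position rate: even at the 10 MHz clock ceiling,
    // >40 bits per position plus mandatory dead time keeps the rate far below 1 MHz.
    #include <cstdio>

    int main() {
        const double clockHz      = 10e6;   // BiSS-C maximum clock rate
        const int    bitsPerFrame = 40;     // position + status + error + CRC bits (">40")
        const int    deadTimeBits = 20;     // assumed clock cycles of mandatory dead time
        const double maxRateHz    = clockHz / (bitsPerFrame + deadTimeBits);
        std::printf("max position rate ~ %.0f kHz\n", maxRateHz / 1e3);   // ~167 kHz
        return 0;
    }

Even with optimistic overhead assumptions the ceiling is a few hundred kHz, consistent with the documented ~33 kHz practical results.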

Meeting on Dec. 12, 2022

An email message I sent after the meeting:

- Ken Lauer wrote that AD has some ATCA firmware for reading out relative encoders, which may help.  Ken: can you tell us a name we could contact about that?
- Alex (or someone in his group) is going to measure voltage waveforms from the encoder this Friday and will send to you
- I think we’re leaning towards putting the TID box in the FEE, close to where the encoder is.  Alex recommended Mike Estrada as a person who could allocate that rack space for you.  Can you tell us a guess for how many rack spaces you need so Mike can allocate?

Also, we need to patch two fiber pairs (timing, data) to the TID device in the FEE, ideally to room 208 or the FEE alcove.  Jana (or anyone else): do you know if there are two fiber pairs available to us for this, or if we should get something strung?

Ken Lauer writes:

"The Fast Wire Scanner system, which uses ATCA crates for beam-rate monitoring of positions from quadrature encoder signals, is currently led by@Balakrishnan, Namrata (AD).

The original author of the FPGA firmware was @Sapozhnikov, Leonid (TID).

The creator of the RTM board that splits the encoder signal was @Olsen, Jeff J. (TID)."

FEE Fiber Status

From Jing Yin (via email)

We have SM fiber trunks to support your needs. Please see the FEE networking diagram below. The yellow-colored 72-fiber trunks are SM.  36 of the 72 trunks are reserved for Jana for the cookie box.  We should have spares.  You may need Omar's help to patch the two SM fibers from the FEE tunnel back to the 208 server room.

Nov 8, 2023: Found rack 3 on the HXR of the FEE.  3 single-mode cassettes on BACK of rack (hard to access):  6 fiber pairs to B940-009-R06-FOD3-U1, 6 pairs to B940-009-R06-FOD3-U2, 6 pairs to B940-009-R06-FOD3-U3.

High-Rate Encoder Touch-Base (Aug 22, 2023)

Questions:

  • will send one number per shot
    • keep it simple for now (Larry/Namit have all encoder signals hooked up for future fancy behavior)
  • 32-bit or 64-bit number? (how many total ticks)
    • Zach thinks less than 4 billion ticks (maybe even less than 1 million)
    • Larry says that the firmware bandwidth usage doesn't change in either case
    • start with 32 bits for now to save daq disk space and network space
  • can start daq-integration now?
  • timestamping: done in kcu105 using the usual weaver/ruckman module
  • use pgp4 6Gbit/s for 
    • increased trigger jitter ok?  Larry says only 20-30ps RMS, so that's fine
  • handling limit/set-up/alarm signals
    • data sheet: https://jira.slac.stanford.edu/secure/attachment/30151/L-9517-9426-03-D_Data_sheet_TONiC_UHV_EN.pdf 
    • Nick says set-up is least important (it indicates the encoder read-head is aligned, i.e. at the right distance from the strip with the ticks).  this should be a one-time setup; it requires breaking vacuum to fix.  we think this information comes from some of the X,E,P,Q,Z signals.
    • P,Q are limits (are these the ends of the readout strip?).  X is set-up, E is alarm.  Z is reference-mark (like a "home" position?)
      • Georgi feels reference-mark is important (although we should probably use the companion absolute encoder for this and for limit-protection)
      • Alarm feels important too (corrupts data, either because going too fast or signal level too low, so should never happen)
        • don't use an IOC: the DAQ will crash if the alarm goes off and absolute encoder takes care of limits/reference-mark
        • Larry will include all the bits (and a latched error bit (counter?)) in the 64-bit DAQ data word, but DAQ will nominally only use the alarm bit
      • Larry counts alarms and reference-mark
      • Not clear if we have a reference mark or not: ideally would check.  Nick will email axilon to try to find out.  Could also check empirically.
      • the companion absolute encoder can be used instead of the reference-mark.
      • Georgi thinks absolute encoder should protect the "limits"
    • we guess that it's the same for limit and alarm
    • might need to latch some of these (and make them clearable)
  • when does rix need interpolating absolute encoder?
    • 3 months (november?)
  • when does rix need high-rate relative encoder?
    • spring 2024
  • metastability:
    • Larry reports that the way the encoder works this should not be an issue
  • maximum speed (before skip-encoder-mark errors):

MHz Encoder Discussion (Oct 17, 2023)

cpo and Larry

Signals:

  • P,Q are limits
  • E is error (encoding error, typically from going too fast, or could be misalignment?)
  • A,B are for calculating position (see the sketch after this list)
  • Z is home
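
For reference, a minimal standalone sketch of how the A/B signals become a signed position count (the real decoding happens in the Renishaw interface and the TID firmware; this toy only illustrates the standard 4x quadrature-decoding idea):

    // Minimal 4x quadrature (A/B) decoder sketch: illustrative only.
    #include <cstdint>
    #include <cstdio>

    class QuadDecoder {
    public:
        // Call on every sample of the A and B lines; returns the running count.
        int64_t update(bool a, bool b) {
            const uint8_t curr = static_cast<uint8_t>((a << 1) | b);
            // Transition table: +1 for a forward Gray-code step, -1 for reverse,
            // 0 for no change or an invalid (double-bit) transition.
            static const int8_t lut[16] = {  0, +1, -1,  0,
                                            -1,  0,  0, +1,
                                            +1,  0,  0, -1,
                                             0, -1, +1,  0 };
            count_ += lut[(prev_ << 2) | curr];
            prev_ = curr;
            return count_;
        }
    private:
        uint8_t prev_ = 0;
        int64_t count_ = 0;
    };

    int main() {
        QuadDecoder dec;
        // One forward Gray-code cycle: A/B = 00 -> 01 -> 11 -> 10 -> 00 gives +4 counts.
        const bool seq[][2] = {{0,0},{0,1},{1,1},{1,0},{0,0}};
        for (const auto& s : seq)
            std::printf("count=%lld\n", static_cast<long long>(dec.update(s[0], s[1])));
        return 0;
    }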

some get latched, there is a latch-clear

position is a signed integer. counter range is ~100 million.

errors have signal "E", counter and latch

GitHub: slaclab/high-rate-encoder-dev (devGui is there)

should we autohome (i.e. have firmware do it when it sees Z) or have scientists home it with software e.g. when absolute encoder is in the right place?  maybe doesn't matter: they might use absolute encoder?

event builder has blowoff, as usual

Installation To Do List

  • run 3 SM fiber pairs (2 plus a spare) from granite to R03 (Jyoti)
  • figure out kcu105 mounting structure on the granite (Jyoti)
  • figure out the power for kcu105 (Jyoti)
  • verify that Larry agrees connector and length are OK (Nick recommends less than 3 meters) (chris)
  • patch/test (mona)

FEE Installation Pictures

The connector to the FMC:

The single-mode patch panel:

Encoder Simulator

Chris Ford has an encoder simulator in lab3, with documentation here:  UDP Encoder Interface (rix Mono Encoder).  He also has committed lab3-caf-encoder.cnf to lcls2 GitHub in psdaq/psdaq/cnf/

High Rate Mono Encoder Interpolation Options

Goal: keep the parallelization happy (both ami and mpi-psana) so that every shot has an encoder value (either real or interpolated)

Note that the encoder can be used to generate XAS spectra with APD's, which are MHz devices.  That's why we need high rates.

  • ~5kHz of measured data is roughly the maximum readout rate
  • interpolation will make the data better (if the "noise" on the plot below is really noise, Georgi feels it is too high frequency to be real, but he's not certain)

Example motor motion:

Options:

  • DAQ interpolation.  Ugly DAQ C++, may want to change later.  Works for AMI
    • "freezes" the interpolation to what is done at DAQ time: but doesn't preclude someone going back and doing a better analysis later
    • what if what we think is noise is actually real vibration of the mono grating.  in that case interpolation loses important information.  one would test this by doing the analysis both with the raw data and with "smoothed" data to see which yields better physics resolution
  • only send events with measurements to AMI? requires a ric-style python script to select AMI events
    • limits statistics in AMI.  in principle could get 5kHz.
    • in future could avoid limited-statistics issue by going to 1MHz relative encoder
    • cpo worry: we already need such a script for RIX to send the andor events to AMI.  two issues:
      • that script becomes more complex trying to do two things
      • cpo thinks: if the (rare) andor events don't have an encoder value it will make the analysis (or ami display) tricky.  would need to have "trigger overlap"
  • (doesn't work for real-time analysis like ami) psana interpolation in-memory
    • on SMD0 core (would dramatically complicate our most complicated psana event-builder code)
    • psana "broadcasts" all the encoder data to all the cores.  quite messy and would affect performance unless SlowUpdate broadcast at 10Hz was feasible
  • (doesn't work for real-time analysis like ami) psana pre-processing interpolation written to a new xtc2 file
    • could also analyze the shape above and write out very little information (10 numbers?): regions 1 (flat),2(sloped),3(flat),4(sloped),5(flat). 
  • (doesn't work because of chaos caused by batching, deadtime, arbitrary number of cores, load-balancing) DAQ repeats without interpolation.
    • Ideally xtc2 would be able to say 3 (repeats 5 times) 7 (repeats 5 times).  xtc2 can't say this easily.
    • (preferred) or another version would be 3(deltatime=0),3(+deltatime),3(+deltatime),3,3,7,7,7,7,7,.... each core could do its own interpolation.  provides flexibility and scalability.  extra data volume isn't too bad.  will make user code more complex (they will have to "buffer")
    • Chris Ford's thought on this "debinning":  Encoder Debinning
  • (too complex) treat the encoder as an integrating detector
    • won't work with ami which doesn't use Mona's batching
    • creates a lot of constraints with other integrating detectors:  their trigger rates have to be multiples of each other with no phase difference.

Toy Example of DAQ Interpolation Option


need matrix inversions (linear regression for a polynomial fit) using the Eigen library? Example timestamps/values:

ts  1 2 3 4 5 6  7 (1MHz)
val 5   7   9   11 (100Hz)


multithreaded option

core 0 is handed event ts=2 (fit vals=[5,7] to get answer 6)

worry about:
1) when can we free the memory at the beginning of the circular buffer?
2) core 0 needs to wait for ts=3,val=7 data to show up before doing the fit

two possible drp versions:

  • high-rate drp hands out different events to different cores (e.g. 60)
  • simple drp code that is more "single-threaded"

what version is the encoder code? copied PVA detector perhaps (75% confident)?

single threaded option

if encoder is "simple" version above.

why single thread should be OK:

  • fits only need to be done at a low "real encoder value" rate (100Hz->5kHz) multi-threading isn't necessary.
  • polynomial (quadratic or 3rd-order?) calculations a+bx+cx**2+dx**3 (plus have to compute the time "x") have to be done at 1MHz.  hopefully can avoid multithreading as well, at least up to 100kHz

main thread algorithm:

  • wait for ts=3
  • launch fit subthread (or maybe don't) with ts=[1,3] to calculate the result for ts=2; this addresses worry (2) above
  • main thread watches for completion of the fit-subthread and knows when to "delete" ts=1  (addresses worry (1) above)
  • main thread hands out ts=[3,5] for the next fit

starting plan:

  • standalone C++ code first to test 5kHz fits with eigen and 100kHz polynomial calculations
  • order of polynomial would be "#define" or command-line-option
  • another #define would be number of points to include in each linear-regression
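
A minimal standalone sketch of what that first test program could look like, using an Eigen least-squares fit at the low rate and polynomial evaluation at the high rate (the fit order, window, and the toy timestamps/values from the example above are placeholders; this is not the eventual DAQ code):

    // Sketch: fit a low-order polynomial to the sparse encoder readings with
    // Eigen least-squares, then evaluate it at every high-rate timestamp.
    #include <Eigen/Dense>
    #include <cstdio>
    #include <vector>

    // Fit value(t) ~ c0 + c1*t + ... to the (ts, val) pairs via least squares.
    static Eigen::VectorXd fitPoly(const std::vector<double>& ts,
                                   const std::vector<double>& val,
                                   int order)
    {
        Eigen::MatrixXd A(ts.size(), order + 1);
        Eigen::VectorXd b(ts.size());
        for (size_t i = 0; i < ts.size(); ++i) {
            double p = 1.0;
            for (int j = 0; j <= order; ++j) { A(i, j) = p; p *= ts[i]; }
            b(i) = val[i];
        }
        // Householder QR is a reasonable default for small least-squares problems.
        return A.colPivHouseholderQr().solve(b);
    }

    // Evaluate the polynomial at one high-rate timestamp (this is the part that
    // would run at up to ~1MHz).
    static double evalPoly(const Eigen::VectorXd& c, double t)
    {
        double v = 0.0, p = 1.0;
        for (int j = 0; j < c.size(); ++j) { v += c(j) * p; p *= t; }
        return v;
    }

    int main()
    {
        // Encoder readings from the toy example above: values at ts = 1,3,5,7.
        std::vector<double> ts  = {1, 3, 5, 7};
        std::vector<double> val = {5, 7, 9, 11};
        Eigen::VectorXd c = fitPoly(ts, val, /*order=*/1);
        for (int t = 1; t <= 7; ++t)            // interpolate every high-rate shot
            std::printf("ts=%d  val=%.2f\n", t, evalPoly(c, t));
        return 0;
    }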

Event Builder

Information we have for event-build:

  • asynchronous (not from XPM) timestamp generated by PLC
  • frame counter

Toy model:

  • one thread watches the KCU high-rate output
  • another low-rate thread watches for the UDP packet and puts it in a FIFO (single-ended queue).
    • Ric says maybe a single-entry queue?  cpo thinks that having depth is good, because it may take a while to readout values from the queue
  • the high-rate-thread pulls from the udp-queue
  • chris ford event builds using the encoder frame counter and matches it (event-builds) to the low-rate readout-group counter
  • this works even if a UDP packet from the encoder is dropped (we can mark damage)
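
A toy sketch of that two-thread hand-off (not the real drp event builder: the UDP receive is simulated, and the FIFO is a simple mutex/condition-variable queue rather than a lock-free SPSC queue):

    // Toy model: a low-rate "UDP" thread pushes packets into a FIFO, and the
    // high-rate thread pulls them and checks that the frame counter advances.
    #include <condition_variable>
    #include <cstdint>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct UdpPacket { uint32_t frameCounter; int32_t position; };

    class PacketFifo {                       // stand-in for the single-producer FIFO
    public:
        void push(const UdpPacket& p) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(p); }
            cv_.notify_one();
        }
        UdpPacket pop() {                    // blocks until a packet is available
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            UdpPacket p = q_.front(); q_.pop();
            return p;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<UdpPacket> q_;
    };

    int main() {
        PacketFifo fifo;

        // Low-rate thread: stands in for the UDP receiver filling the FIFO.
        std::thread udpThread([&] {
            for (uint32_t frame = 0; frame < 5; ++frame)
                fifo.push({frame, int32_t(1000 + 10 * frame)});
        });

        // High-rate side (here just the main thread): pulls packets and checks
        // that the frame counter increments by one; a gap would be marked as damage.
        uint32_t expected = 0;
        for (int i = 0; i < 5; ++i) {
            UdpPacket p = fifo.pop();
            if (p.frameCounter != expected)
                std::printf("frame skip: mark damage\n");
            std::printf("frame %u  position %d\n", p.frameCounter, p.position);
            expected = p.frameCounter + 1;
        }
        udpThread.join();
        return 0;
    }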

Model details:

  • drp starts as a "high rate" drp
  • has a command-line argument for the "low rate" readout group.  Can't get it from the connect-json since tprtrig doesn't participate in collection (rate is set with -p or -r or -e option, we think)
  • whenever it sees a low-rate readout group bit in an event (using "env" from dgram.readoutGroups()) it blocks on the UDP fifo looking for the corresponding event-counter
    • could we see the low-rate readout group number in the connect_json?  It's there, but can we figure out which one it is?
  • either matches, or times-out (but not with a timer, instead using later events or "disable" as described below)

Complexities:

  • deadtime can in principle turn the high-rate into a lower rate (but not lower than the low-rate UDP packets)
  • (lower priority, since it's a user error) what if someone inverts the high/low rate readout groups?  it's a user error, but what do we do?  maybe crashing is OK in this case?  note: hanging would make the problem difficult to debug
  • (low probability of happening) (an existing problem) making sure that we implement "clear readout" at the beginning of the run, and that we guarantee the first UDP packet isn't dropped (or have a way to reset the event counter to 0 at beginrun time)
  • XPM could drive the TPR in periodic "rhythmic patterns" of UDP packets so you could "learn" where you are in a pattern and detect early drops that way (unless you dropped a whole pattern, but that's unlikely)
  • (probably doesn't matter if we mess up the last UDP packet, so lean towards avoiding code complexity) handling endrun: what if we drop the last UDP packet?  how long do we wait?  could use disable, but maybe no guarantee that disable happens after the last UDP packet?  probably endrun is late enough?
  • (cpo would lean towards the two-thread model) question: could use one thread and non-blocking reads to the network socket (which is a FIFO)?  might make code simpler.  Chris Ford points out that current code is doing a "peek" at the UDP packet which is a similar idea.  Ric feels this might add complexity: "writing our own scheduler" (have to keep track of states of what's completed and errors).  Two-thread model might be easier to understand (two separate "boxes").
  • be aware: if, for example we run at 1MHz and 100Hz, then we have to hold on to 10,000 high-rate events before we can "time out" one dropped udp packet (longer if we drop multiple udp packets).  We should not use timers which I think add complexity: we should time-out with the next UDP packet or "disable/endrun".
  • (this is a bit of an academic point, just wanted think about it) how does deadtime work?  we want to make sure that the "low-rate udp buffering time" (setsockopt SO_RECVBUF and the depth of the FIFO) is longer than the "high-rate buffering time" so that the existing deadtime mechanism in the high rate stream "protects" the low-rate stream from buffer overflow.  But if we do overflow the udp buffers, we can handle the UDP drops correctly.

Starting point software: let's tweak encoderV1.  BLD might have code that deals with disparate rates (would need to talk with Matt).

Issues

  • how do we enable the low-rate readout group since the tpr process doesn't participate in the collection mechanism?  rough proposal: set in cnf file (so shared by both tpr and daq), timing system would program, and control would somehow display
  • currently caf needs the relative rates of the two readout groups.  would be nice if it could be "learned" somehow using the dgram env bits with readout group info, but deadtime can eliminate some of those events.

Interpolation Algorithm Iteration

Discussion on July 10, 2023

current algorithm:
- a udp packet comes in
- interpolate(), which runs at a low rate, is called and computes 349 points ("slowratio" param 350)
- puts high-rate results in SPSC queue ("interpolate" queue)
- process() wakes up when an entry is on the interpolate queue (polling) and pulls it (runs at a high rate)
- pushes result to the pvQueue (not yet a datagram)
- _matchUp() running in the daq l1accept thread
  o pulls it from pvQueue and puts the data in the dgram (first time it really
    has a timestamp)

proposed revised algorithm:
- a udp packet comes in
- puts entry in a queue (either SPSC or pvQueue)
- check the env for low/high rate event, in L1Accept thread:
  - if low-rate: wait/poll for pvQueue
    o check that the frame-counter increments appropriately, maybe time it out if it's the last
      in the run.  set damage if time-out or frame-counter skips
  - if high-rate: interpolate using timestamp (have to cache the high-rate event until we get
    a "bounding" low-rate event: a memory buffer)

for comparison, pvadetector algorithm:
- process() runs in epics thread (not l1accept thread)
- gets a buffer from a free-list, fills it with PV data, and puts it on the pv-queue (already timestamped)
- require same number of free-list buffers as pebble-buffers (allows deadtime
  to protect the free-list buffers: need to get this idea into encoder)

protecting buffering:
- ideally size the pvQueue so it is "bigger" than the dmaQueue.  then deadtime protects
  the pvQueue from overflow.
  o pvQueue is a low-rate queue, while dmaQueue is high-rate
- 2044 dma buffers +10% (204) for extra firmware buffers
- then we need (2044+204)/350 ≈ 7 pvQueue buffers

Resetting Interpolating Encoder Data

Discussion on Oct. 4, 2023 with caf/claus/weaver/cpo

currently: configure launches worker, and the worker "remembers" by default
but Ric currently resets the worker memory on enable (which is good!)

should we "forget" old data under these conditions:
- new run?
- new step?
- enable?
- integrating detector pause? (hard)

propose:
- if it's not too hard, "forget" on every enable (or disable) (fallback: beginstep/endstep)
  o doesn't handle the integ-det-pause case, but that's hard and
    we will make that clear to the scientists (eventual solution may be the 1MHz encoder)
- turns out ric already does this

