...

From Jing Yin (via email)

We have SM fiber trunks to support your needs. Please see the FEE networking diagram below. The yellow-colored 72-fiber trunks are SM.  36 of the 72 trunks are reserved for Jana for Cookie box.  We should have spares.  You may need Omar's help to patch the two SM fibers from the FEE tunnel back to the 208 server room.

[FEE networking diagram]

Encoder Simulator

Chris Ford has an encoder simulator in lab3, with documentation here:  UDP Encoder Interface (rix Mono Encoder).  He has also committed lab3-caf-encoder.cnf to the lcls2 GitHub repository in psdaq/psdaq/cnf/.

...

Code Block
ts   1   2   3   4   5   6   7   (1 MHz)
val  5       7       9       11  (100 Hz)


Multithreaded option

core 0 is handed event ts=2 (fit vals=[5,7] to get answer 6)
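
As a concrete illustration of that step (a minimal sketch with invented names, not the DAQ code), each worker would fit the bracketing low-rate samples and evaluate the fit at its high-rate timestamp:

Code Block
#include <cstdio>

// hypothetical helper: linear fit through two (ts, val) points, evaluated at ts
double interpolate(double ts0, double v0, double ts1, double v1, double ts)
{
    double slope = (v1 - v0) / (ts1 - ts0);
    return v0 + slope * (ts - ts0);
}

int main()
{
    // core 0 handles the event at ts=2 using low-rate samples (1,5) and (3,7)
    std::printf("%g\n", interpolate(1, 5, 3, 7, 2));  // prints 6
    return 0;
}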

...

  • standalone C++ code first, to test 5 kHz fits with Eigen and 100 kHz polynomial calculations (see the sketch below)
  • the polynomial order would be a "#define" or a command-line option
  • another #define would be the number of points to include in each linear regression
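
A minimal standalone sketch of that test, assuming Eigen's dense least-squares solver and using #defines for the fit order and point count (names and values here are placeholders, not the final code):

Code Block
#include <Eigen/Dense>
#include <cstdio>

#define FIT_ORDER  1   // polynomial order (could also be a command-line option)
#define FIT_POINTS 4   // number of low-rate points per linear regression

int main()
{
    // toy low-rate samples: (ts, val) pairs from the example above
    double ts[FIT_POINTS]  = {1, 3, 5, 7};
    double val[FIT_POINTS] = {5, 7, 9, 11};

    // Vandermonde matrix: one row per sample, one column per polynomial term
    Eigen::MatrixXd A(FIT_POINTS, FIT_ORDER + 1);
    Eigen::VectorXd b(FIT_POINTS);
    for (int i = 0; i < FIT_POINTS; ++i) {
        b(i) = val[i];
        double x = 1.0;
        for (int j = 0; j <= FIT_ORDER; ++j) { A(i, j) = x; x *= ts[i]; }
    }

    // least-squares solve for the polynomial coefficients
    Eigen::VectorXd coef = A.colPivHouseholderQr().solve(b);

    // evaluate the fitted polynomial at a high-rate timestamp, e.g. ts=2
    double x = 2.0, y = 0.0, p = 1.0;
    for (int j = 0; j <= FIT_ORDER; ++j) { y += coef(j) * p; p *= x; }
    std::printf("fit at ts=2: %g\n", y);   // ~6 for the linear fit
    return 0;
}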

Event Builder

Information we have for event-build:

  • asynchronous (not from the XPM) timestamp generated by the PLC
  • frame counter

Toy model:

  • one thread watches the KCU high-rate output
  • another low-rate thread watches for the UDP packet and puts it in a FIFO (single-ended queue).
    • Ric says maybe a single-entry queue?  cpo thinks that having depth is good, because it may take a while to read out values from the queue
  • the high-rate thread pulls from the UDP queue
  • Chris Ford event-builds using the encoder frame counter, matching it to the low-rate readout-group counter
  • this works even if a UDP packet from the encoder is dropped (we can mark damage)
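
A hedged sketch of this toy model (invented names, not psdaq code): the low-rate thread pushes each received UDP encoder packet into a FIFO, and the high-rate thread pops from it when an event carries the low-rate readout-group bit.

Code Block
#include <cstdint>
#include <mutex>
#include <condition_variable>
#include <deque>

struct EncoderPacket {          // hypothetical fields, taken from the notes above
    uint64_t plcTimestamp;      // asynchronous timestamp generated by the PLC
    uint32_t frameCounter;      // matched against the low-rate readout-group counter
    int32_t  position;
};

class UdpFifo {                 // FIFO between the low-rate and high-rate threads
public:
    void push(const EncoderPacket& p) {             // called by the UDP (low-rate) thread
        std::lock_guard<std::mutex> lk(_m);
        _q.push_back(p);
        _cv.notify_one();
    }
    EncoderPacket pop() {                           // blocking pop by the high-rate thread
        std::unique_lock<std::mutex> lk(_m);
        _cv.wait(lk, [this]{ return !_q.empty(); });
        EncoderPacket p = _q.front();
        _q.pop_front();
        return p;
    }
private:
    std::deque<EncoderPacket> _q;
    std::mutex _m;
    std::condition_variable _cv;
};

Whether the FIFO should be single-entry or deeper is the question raised above; a std::deque keeps the depth easy to change.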

Model details:

  • drp starts as a "high rate" drp
  • has a command-line argument for the "low rate" readout group.  Can't get it from the connect-json since tprtrig doesn't participate in collection (the rate is set with the -p, -r, or -e option, we think)
  • whenever it sees a low-rate readout-group bit in an event, it blocks on the UDP FIFO looking for the corresponding event counter
  • it either matches or times out (but not with a timer; instead using later events or "disable" as described below)
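
Building on the EncoderPacket/UdpFifo sketch above, the match-or-timeout step could look roughly like this (again a sketch under assumptions, not the real drp code); note there is no wall-clock timer: a drop is only declared when a later packet shows the expected one can never arrive.

Code Block
#include <cstdint>
#include <optional>

enum class Match { Paired, Damaged };   // hypothetical outcome of pairing one low-rate event

Match eventBuild(uint32_t expectedCounter,
                 std::optional<EncoderPacket>& held,   // packet popped but not yet consumed
                 UdpFifo& fifo)
{
    while (true) {
        if (!held) held = fifo.pop();                  // block until a UDP packet arrives
        if (held->frameCounter == expectedCounter) {   // normal case: counters agree
            held.reset();                              // packet consumed by this event
            return Match::Paired;
        }
        if (held->frameCounter > expectedCounter)      // the packet for this event was dropped:
            return Match::Damaged;                     // mark damage; keep 'held' for a later event
        held.reset();                                  // stale packet (counter behind): discard
    }
}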

Complexities:

  • deadtime can in principle turn the high-rate into a lower rate (but not lower than the low-rate UDP packets)
  • (lower priority, since it's a user error) what if someone inverts the high/low-rate readout groups?  what do we do?  maybe crashing is OK in this case?  note: hanging would make the problem difficult to debug
  • (low probability of happening) (an existing problem) making sure that we implement "clear readout" at the beginning of the run, and guaranteeing that the first UDP packet isn't dropped (or having a way to reset the event counter to 0 at beginrun time)
  • XPM could drive the TPR in periodic "rhythmic patterns" of UDP packets so you could "learn" where you are in a pattern and detect early drops that way (unless you dropped a whole pattern, but that's unlikely)
  • (probably doesn't matter if we mess up the last UDP packet, so lean towards avoiding code complexity) handling endrun: what if we drop the last UDP packet?  how long do we wait?  could use disable, but maybe no guarantee that disable happens after the last UDP packet?  probably endrun is late enough?
  • (cpo would lean towards the two-thread model) question: could use one thread and non-blocking reads to the network socket (which is a FIFO)?  might make code simpler.  Chris Ford points out that current code is doing a "peek" at the UDP packet which is a similar idea.  Ric feels this might add complexity: "writing our own scheduler" (have to keep track of states of what's completed and errors).  Two-thread model might be easier to understand (two separate "boxes").
  • be aware: if, for example, we run at 1 MHz and 100 Hz, then we have to hold on to 10,000 high-rate events before we can "time out" one dropped UDP packet (longer if we drop multiple UDP packets).  We should not use timers, which I think add complexity: we should time out with the next UDP packet or "disable/endrun".
  • (this is a bit of an academic point, just wanted to think about it) how does deadtime work?  we want to make sure that the "low-rate UDP buffering time" (setsockopt SO_RCVBUF and the depth of the FIFO) is longer than the "high-rate buffering time", so that the existing deadtime mechanism in the high-rate stream "protects" the low-rate stream from buffer overflow.  But if we do overflow the UDP buffers, we can handle the UDP drops correctly.  (A minimal sketch of the buffer sizing follows this list.)
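
A minimal sketch of enlarging the kernel receive buffer with setsockopt/SO_RCVBUF (the size here is an arbitrary example, not a recommendation; the right value depends on the UDP packet size, the low rate, and the high-rate buffering time):

Code Block
#include <sys/socket.h>
#include <cstdio>

// enlarge the kernel UDP receive buffer on an already-created socket fd
void sizeUdpBuffer(int udpFd)
{
    int bytes = 4 * 1024 * 1024;   // example only: choose so that
                                   // (buffered packets / low rate) > high-rate buffering time
    // note: the kernel may cap this (e.g. net.core.rmem_max on Linux)
    if (setsockopt(udpFd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0)
        std::perror("setsockopt(SO_RCVBUF)");
}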

Starting point software: let's tweak encoderV1.  BLD might have code that deals with disparate rates (would need to talk with Matt).