...

  • Stefano is looking into the cuSZ performance issues.  With LC-GPU he gets 60 GB/s with 4 streams and 1 segment, and 6 GB/s with a single stream.  Two questions:
    • why does performance scale better than linearly with the number of streams?
    • why is 1-stream, 1-segment cuSZ so much slower (0.6 GB/s) than LC-GPU (6 GB/s)?
    • some possible reasons that were suggested: compiler options in spack/conda? an incorrect timing calculation for LC (a minimal event-timing sketch follows this list)? an error in splitting the data into single segments?
    • we could look at the performance in the profiler, although this will underestimate the eventual performance because of profiler overhead.
  • Next priorities for Stefano: see whether angular-integration performance can be improved to 50 GB/s without batching events (possible because the outputs are "separable" into events, though it adds complexity).  Note that SZ compression over batches of events is NOT "separable".  Another project is peak-finding performance with peakfinder8 in pyFAI.
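Since an incorrect timing calculation was raised as one possible explanation, here is a minimal sketch of timing a GPU pass with CUDA events and converting to GB/s; the kernel, buffer size, and iteration count are placeholders, not the actual LC or cuSZ code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel standing in for a compression pass.
__global__ void compressStub(const float* in, float* out, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];   // stand-in for the real work
}

int main() {
    const size_t n = 352 * 384;              // one segment, single precision (assumed layout)
    const size_t bytes = n * sizeof(float);
    float *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int iters = 1000;
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        compressStub<<<(n + 255) / 256, 256>>>(dIn, dOut, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);              // events time the GPU work, not host overhead

    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    // Quote the rate as input bytes processed per second (one common convention).
    printf("%.1f GB/s\n", (double)bytes * iters / (ms * 1e-3) / 1e9);
    return 0;
}
```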

Jan. 13, 2025

  • Valerio is going to move psana2 on s3df to spack in the next few weeks.
  • Ric has the "graphs" approach to kernel launching working (a small stream-capture sketch follows this list).  Tracking down a tricky segfault after 300 events.
  • Stefano is working on streams.  Having trouble reproducing the previous compilation: LC is broken with spack (unhappy with flags).  Getting advice from Gabriel and Valerio.  It looks like old compiler versions are being picked up (gcc 4); Valerio and Gabriel provided guidance on how to fix that.
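Since the "graphs" approach to kernel launching is now working, here is a minimal stream-capture sketch of that pattern, assuming the per-event work is a fixed sequence of kernels; stageA/stageB and the launch geometry are placeholders, not the DAQ code.

```cuda
#include <cuda_runtime.h>

__global__ void stageA(float* buf) { /* placeholder per-event kernel 1 */ }
__global__ void stageB(float* buf) { /* placeholder per-event kernel 2 */ }

// Capture the fixed per-event launch sequence once, then replay it with a
// single cudaGraphLaunch per event instead of several individual launches.
void buildAndReplay(float* dBuf, cudaStream_t stream, int nEvents) {
    cudaGraph_t graph;
    cudaGraphExec_t exec;

    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    stageA<<<64, 256, 0, stream>>>(dBuf);
    stageB<<<64, 256, 0, stream>>>(dBuf);
    cudaStreamEndCapture(stream, &graph);

    // cudaGraphInstantiate's signature changed between CUDA 11 and 12;
    // cudaGraphInstantiateWithFlags (CUDA 11.4+) is identical in both.
    cudaGraphInstantiateWithFlags(&exec, graph, 0);

    for (int e = 0; e < nEvents; ++e)
        cudaGraphLaunch(exec, stream);       // one API call per event
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
}
```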

Jan. 21, 2025

  • Ric has the DAQ running robustly, albeit only at 10 kHz at the moment.  Fixed the out-of-order event issue.  Ric worries about CPU→GPU communication, or the 3 kernel launches (with graphs).  Currently 4 streams.  Will try the profiler.
    • Ric found a way to run CUDA graphs without sudo, and perhaps that is impacting performance?  Ric found that the "handshake word" can be cleared by using a CUDA kernel instead of a CUDA API call (see the first sketch after this list).  There is also a second "write enable" register that needs to be written on every event.  Clearing of the handshake is done at the end of one of the kernels (which communicates with the CPU), and the driver API can then also be called at that point from the CPU.
  • Jeremy talked about having 1 CPU-thread per GPU-stream (that's how the cuda-graph test program was written).  This was changed so that all GPU-streams are handled by 1 thread (to solve the event-ordering problem); see the second sketch after this list.  Could this affect performance?  Will look at the profiler output for this.  Ric has an idea for how to go back to multiple CPU threads, but it complicates the code (and adds more task switches?).  This issue crops up "per KCU" and so will get worse with multiple KCUs (e.g. 4 KCUs each with 4 GPU-streams would give us 16 threads).  It also scales poorly with more DMA buffers.
  • Stefano identified a bug in the rate calculation.  Now back to getting 60 GB/s with LC for 1 segment of 352x384 (single precision) with 4 streams, which is great news.
  • cpo points out that we could perhaps batch over events and reuse the "integrating detector" idea (roughly) in psana, if necessary.
  • Ric may be ready to have Gabriel launch his calibration kernel in the gpu-branch of the code.
  • Ric worries that merging the gpu-branch could disrupt the main branch (MemPool in the cpu drp's is now broken into two pieces, MemPoolCpu and MemPoolGpu, which inherit from MemPool).  Ric thinks it could be OK, but we need to try.  We should run rix or tmo out of Ric's area.
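A minimal sketch of the handshake-clearing idea above, assuming the handshake word and "write enable" register live in GPU-visible memory; the pointer names are illustrative, not the actual KCU/DRP register names.

```cuda
#include <cuda_runtime.h>

// Illustrative only: 'handshake' and 'writeEnable' stand in for whatever
// GPU-visible locations the firmware actually exposes.
__global__ void clearHandshake(volatile unsigned* handshake,
                               volatile unsigned* writeEnable) {
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        *handshake = 0u;            // clear the handshake word from the GPU side
        __threadfence_system();     // make the write visible outside the GPU
        *writeEnable = 1u;          // re-arm the per-event "write enable" register
    }
}

// Enqueued at the end of the per-event work so that no host-side API call is
// needed between events:
//   clearHandshake<<<1, 1, 0, stream>>>(dHandshake, dWriteEnable);
```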
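And a minimal sketch of the single-dispatch-thread pattern: one host thread hands successive events to the GPU streams round-robin, which keeps the event order deterministic; the kernel, launch geometry, and buffer layout are placeholders.

```cuda
#include <cuda_runtime.h>
#include <vector>

__global__ void processEvent(float* buf, size_t n) { /* placeholder per-event work */ }

// One host thread feeds all streams; events are assigned round-robin so the
// per-stream completion order is known and DMA buffers can be released in order.
void dispatchLoop(float* dBuf, size_t eventFloats, int nStreams, int nEvents) {
    std::vector<cudaStream_t> streams(nStreams);
    for (auto& s : streams) cudaStreamCreate(&s);

    for (int e = 0; e < nEvents; ++e) {
        int slot = e % nStreams;                              // round-robin stream choice
        processEvent<<<128, 256, 0, streams[slot]>>>(dBuf + slot * eventFloats, eventFloats);
        // per-stream cudaEvent/callback bookkeeping for buffer release omitted
    }
    for (auto& s : streams) { cudaStreamSynchronize(s); cudaStreamDestroy(s); }
}
```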

Jan. 28, 2025

  • Ric tried the GPU dma-buffer release and that is working.  Switched from volatile to atomic operations (a small sketch follows this list).  The rate has been boosted from 10 kHz to 15 kHz with all the latest improvements.  Ric has a "multiple buffer" idea: it allows cpu teb-packet sending to proceed in parallel with gpu processing (and an early buffer release), and it eliminated the loop over pcie transactions.  The profiler is still on the list for understanding the 15 kHz limit.  We still have a "sudo" issue that needs to be addressed.  Could we use "capabilities" to have processes run with privilege?  That depends on the filesystem where the executables reside.  We could use some advice on the sudo issue, but it is not the highest priority at the moment.
  • Stefano played around with threads per block, but always gets about 60 GB/s for 1 segment.  Still waiting for cuSZ to catch up to LC.  Stefano feels blocked waiting for Robert/Jiannan to make progress on streams and Huffman-tree changes (reusing the Huffman tree across events).  Texture memory is read-only global memory and so a little faster ... use it for calibration constants? (see the second sketch after this list).  Peakfinder8 from pyFAI, and perhaps look at angular-integration performance again with the profiler?  Why is LC faster than angular integration?
  • Stefano will post updated LC results (including threads/block study and profiler output) here: Benchmarks
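A minimal sketch of the volatile→atomic change, assuming the "buffer done" flag lives in pinned, device-mapped host memory; the names are illustrative, not the actual DRP code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The kernel signals completion with a system-scope atomic (sm_60+) instead of
// a plain volatile store.
__global__ void markDone(unsigned* flag) {
    if (blockIdx.x == 0 && threadIdx.x == 0)
        atomicExch_system(flag, 1u);              // visible to the CPU
}

int main() {
    unsigned *hFlag, *dFlag;
    cudaHostAlloc(&hFlag, sizeof(unsigned), cudaHostAllocMapped);
    cudaHostGetDevicePointer(&dFlag, hFlag, 0);   // device view of the same word
    *hFlag = 0u;

    markDone<<<1, 1>>>(dFlag);
    // Host side: an acquire load (GCC/Clang builtin) rather than a volatile read.
    while (__atomic_load_n(hFlag, __ATOMIC_ACQUIRE) == 0u) { /* spin */ }
    printf("GPU signalled buffer release\n");
    cudaFreeHost(hFlag);
    return 0;
}
```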
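On the texture-memory question: calibration constants are read-only per event, so a simpler option on current GPUs is to route their loads through the read-only data cache with const __restrict__ pointers (or an explicit __ldg), which captures most of what texture memory used to offer.  A minimal sketch; the kernel and the pedestal/gain names are illustrative, not Gabriel's calibration kernel.

```cuda
#include <cuda_runtime.h>

// Read-only inputs go through the read-only data cache via const __restrict__
// (the compiler can emit LDG) or an explicit __ldg().
__global__ void calibrate(const float* __restrict__ raw,
                          const float* __restrict__ pedestal,
                          const float* __restrict__ gain,
                          float* __restrict__ out, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (raw[i] - __ldg(&pedestal[i])) * __ldg(&gain[i]);
}
```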