...

  • Stefano has successfully installed the latest cuSZ, but the results are puzzling and not reproducible.  This is all through Robert's libpressio (a Python interface to different compression algorithms).  SZ3 on the CPU runs at 0.14 GB/s.  cuSZ runs at 0.45 GB/s if the data starts on the CPU and 0.58 GB/s if it is already on the GPU.  That seems significantly slower than LC (tested previously without libpressio).  All of this uses a large 4 Mpx image.  LC on the CPU with libpressio is 0.6 GB/s.
  • Jiannan and Robert say time can be saved by reusing the Huffman-encoding tree for every event (partially computed on the CPU?).  This assumes the images are roughly the same (some risk, e.g. with ice formation in crystallography).
  • We can do cuSZ measurements with or without libpressio, but it is easier with it.
  • Stefano should put all his results (and instructions for reproducing them) on the Benchmarks Confluence page: pyFAI, custom angular integration, LC, cuSZ (and SZ3 on CPU), with and without CUDA streams.
  • Difficulties installing cuSZ with spack: the major problem was RHEL7, where gcc and git are very old.  Robert figured out how to call a more modern pre-compiled gcc inside spack.
    • Valerio had to patch many packages to get spack working on RHEL7.
    • conda is also having problems with RHEL7.
    • Gabriel did some fancy stuff: he compiled his own glibc with a more modern compiler.
    • "the end is coming"
    • spack works naturally on S3DF (RHEL8), so psana is fine; just the DAQ is a problem.
    • Going forward: try to use spack on Rocky 9 only (unless RHEL7 works trivially using Valerio's existing package-patching work).
    • It feels like we should update gpu003 to Rocky 9 (it has a KCU but no IB) and leave gpu004 on RHEL7 so Stefano can complete his measurements.
    • Make the new H100 node Rocky 9.
    • Valerio did fancy stuff with libnl for rdma-core (he rebuilt it with conda because we used more modern compilers with a different ABI).  Maybe we don't need to do this with Rocky 9/spack?  Hopefully we can just reuse the system libnl and rdma-core (spack supports reuse of system libraries better than conda).
  • Waiting to get in touch with Weka about Gabriel's cuFile result.
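As a rough illustration of how the throughput numbers above are obtained, here is a minimal Python timing sketch.  It uses zlib as a stand-in compressor (the real measurements go through libpressio's Python bindings, whose API is not reproduced here); the helper name and the synthetic data are ours, not from the notes.

```python
import time
import zlib

def throughput_gbps(data: bytes, compress, repeats: int = 3) -> float:
    """Return compression throughput in GB/s (input bytes / best elapsed time)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        compress(data)
        best = min(best, time.perf_counter() - t0)
    return len(data) / best / 1e9

# Stand-in for image data: a few MB of mildly compressible bytes.
data = bytes(i % 251 for i in range(4 * 1024 * 1024))
rate = throughput_gbps(data, lambda d: zlib.compress(d, level=1))
print(f"{rate:.2f} GB/s")
```

Timing the best of several repeats reduces noise from first-call overheads; for GPU compressors the same harness would also need to account for host-to-device transfer time, which is why the CPU-resident and GPU-resident cuSZ numbers above differ.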

Jan. 6, 2025

  • Stefano is looking into the cuSZ performance issues.  With LC-GPU he gets 60 GB/s with 4 streams (1 segment) and 6 GB/s with 1 stream.  Two questions:
    • Why does performance scale better than the number of streams?
    • Why is 1-stream 1-segment cuSZ so much worse (0.6 GB/s) than LC-GPU (6 GB/s)?
    • Some possible reasons were suggested: compiler options in spack/conda?  An incorrect timing calculation for LC?  An error in splitting the data into single segments?
    • We could look at the performance in the profiler, although this will underestimate the eventual performance because of profiler overhead.
  • Next priorities for Stefano: see if we can improve angular-integration performance to 50 GB/s without batching events (which we can do because the outputs are "separable" into events, but it adds complexity).  Note that SZ compression with batches of events is NOT "separable".  Another project is peak-finding performance with peakfinder8 in pyFAI.
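A quick worked check of the scaling question above, using the numbers from the notes: linear scaling from 6 GB/s would predict 24 GB/s with 4 streams, but 60 GB/s was observed, i.e. a per-stream efficiency well above 1.0, which is why the result looks suspicious.

```python
# Numbers from the notes (LC-GPU, 1 segment).
single_stream = 6.0    # GB/s with 1 stream
four_stream = 60.0     # GB/s with 4 streams
streams = 4

ideal = single_stream * streams        # linear scaling would give 24 GB/s
speedup = four_stream / single_stream  # observed speedup: 10x
efficiency = speedup / streams         # > 1.0 means super-linear scaling

print(ideal, speedup, efficiency)      # 24.0 10.0 2.5
```

Super-linear scaling like this usually points at a measurement artifact (e.g. warm-up or timing error in the single-stream baseline) rather than a genuine hardware effect, consistent with the possible reasons listed above.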

Jan. 13, 2025

  • Valerio is going to move psana2 on S3DF to spack in the next few weeks.
  • Ric has the "graphs" approach to kernel launching working.  He is tracking down a tricky segfault after 300 events.
  • Stefano is working on streams.  He is having trouble reproducing a previous compilation: LC is broken with spack (unhappy with flags).  He is getting advice from Gabriel and Valerio.  It looks like old compiler versions are being picked up (gcc 4); Valerio and Gabriel provided guidance for how to fix that.

Jan. 21, 2025

  • Ric has the DAQ running robustly, albeit only at 10 kHz at the moment, and fixed the out-of-order event issue.  Ric worries about CPU→GPU communication, or the 3 kernel launches (with graphs).  Currently using 4 streams.  Will try the profiler.
    • Ric found a way to run CUDA graphs without sudo; perhaps that is impacting performance?  Ric found that the "handshake word" can be cleared by a CUDA kernel instead of a CUDA API call.  There is also a second "write enable" register that needs to be written on every event.  Clearing of the handshake is done at the end of one of the kernels (which is communicating with the CPU), and the driver API can also be called at that point from the CPU.
  • Jeremy talked about having 1 CPU thread per GPU stream (that is how the cuda-graph test program was written).  This was changed to have all GPU streams handled by 1 thread (to solve the event-ordering problem).  Could this affect performance?  Will look at the profiler output for this.  Ric has an idea for how to go back to multiple CPU threads, but it complicates the code (and adds task switches?).  This issue crops up "per KCU" and so will get worse with multiple KCUs (e.g. 4 KCUs, each with 4 GPU streams, would give us 16 threads).  It scales poorly with more DMA buffers.
  • Stefano identified a bug in the rate calculation.  He is now back to getting 60 GB/s with LC for 1 segment of 352×384 (single precision) with 4 streams, which is great news.
  • cpo points out that we could perhaps batch over events and reuse the "integrating detector" idea (roughly) in psana, if necessary.
  • Ric may be ready to have Gabriel launch his calibration kernel in the GPU branch of the code.
  • Ric worries that merging the GPU branch could disrupt the main branch (MemPool in the CPU DRPs is now broken into two pieces, MemPoolCpu and MemPoolGpu, which inherit from MemPool).  Ric thinks it could be OK, but we need to try.  We should run rix or tmo out of Ric's area.
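For context on the 60 GB/s LC figure above, a back-of-the-envelope conversion from bandwidth to event rate for one 352×384 single-precision segment (our arithmetic, not from the notes):

```python
# One detector segment, single precision (4 bytes/pixel).
rows, cols = 352, 384
bytes_per_pixel = 4
segment_bytes = rows * cols * bytes_per_pixel  # 540,672 bytes per event

throughput = 60e9                              # B/s: LC with 4 streams
events_per_sec = throughput / segment_bytes    # ~111k events/s

print(segment_bytes, round(events_per_sec))
```

At that compression bandwidth a single segment is nowhere near the bottleneck; the DAQ's current 10 kHz limit (above) would have to come from elsewhere in the pipeline.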

Jan. 28, 2025

  • Ric tried the GPU DMA-buffer release and it is working.  Switched from volatile to atomic operations.  The rate has been boosted from 10 kHz to 15 kHz with all the latest improvements.  Ric has a "multiple buffer" idea: it allows CPU TEB-packet sending to proceed in parallel with GPU processing (and an early buffer release), and it eliminated a loop over PCIe transactions.  The profiler is still on the list for understanding the 15 kHz limit.  There is still a "sudo" issue to be addressed.  Could we use "capabilities" to have processes run with privilege?  That depends on the filesystem where the executables reside.  We could use some advice on the sudo issue, but it is not the highest priority at the moment.
  • Stefano played around with threads per block but always gets about 60 GB/s for 1 segment.  Still waiting for cuSZ to catch up to LC.  Stefano feels blocked waiting for Robert/Jiannan to make progress on the streams and Huffman-tree changes (reusing the Huffman tree across events).  Texture memory is read-only global memory and so a little faster; use it for calibration constants?  Also: peakfinder8 from pyFAI, and perhaps look at angular-integration performance again with the profiler.  Why is LC faster than angular integration?
  • Stefano will post updated LC results (including the threads-per-block study and profiler output) here: Benchmarks
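The "multiple buffer" idea above (CPU TEB-packet sending overlapping GPU processing, with early buffer release) can be sketched as two buffers cycling between a producer stage and a consumer stage.  This is a hypothetical Python analogue of the pattern, not the DRP code; all names are ours.

```python
import queue
import threading

# Two buffers cycle between a "GPU" stage and a "CPU send" stage, so the
# send of event N can overlap the processing of event N+1.
free = queue.Queue()
done = queue.Queue()
for buf_id in (0, 1):          # double buffering; more buffers deepen the pipeline
    free.put(buf_id)

processed = []

def gpu_stage(n_events: int):
    for ev in range(n_events):
        buf = free.get()       # blocks only if the CPU hasn't released a buffer
        # ... GPU kernels would fill buffer `buf` for event `ev` here ...
        done.put((ev, buf))
    done.put(None)             # sentinel: no more events

def cpu_send_stage():
    while (item := done.get()) is not None:
        ev, buf = item
        processed.append(ev)   # ... send the TEB packet for `ev` here ...
        free.put(buf)          # early release: the GPU can reuse the buffer now

t = threading.Thread(target=cpu_send_stage)
t.start()
gpu_stage(8)
t.join()
print(processed)               # events come out in order: [0, 1, ..., 7]
```

Because both queues are FIFO and a single thread drains `done`, event ordering is preserved without per-stream threads, which mirrors the single-thread design discussed on Jan. 21.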