...

2024-01-04 email from Chris
2024-01-04 1:22 pm
To: Li, Yee Ting; Kroeger, Wilko
Hi Mikhail, (cc: Yee, Wilko since they may have an interest in this)

When you have a chance, could you look at the scaling behavior of this script?
It tries to "prefetch" the data from disk (for 100 events per core) with det.raw,
and then times the behavior of det.calib:

~cpo/problems/cxix1000021_scaling/cxix1000021_scaling.py
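The script itself is not reproduced here; as a rough illustration of the prefetch-then-time pattern it follows, the sketch below uses invented `prefetch()` and pedestal/gain stand-ins in place of the real det.raw reads and det.calib call:

```python
# Hypothetical stand-in for the prefetch-then-time pattern described above.
# The real cxix1000021_scaling.py is not reproduced here; prefetch() and the
# pedestal/gain correction below are invented placeholders for det.raw reads
# and det.calib.
import time
import numpy as np

def prefetch(n_events, shape=(512, 512)):
    """Stand-in for reading raw detector frames (det.raw) from disk."""
    rng = np.random.default_rng(0)
    return [rng.integers(0, 2**14, size=shape).astype(np.float32)
            for _ in range(n_events)]

def per_event_calib_time(events):
    """Time a calib-like numpy correction (pedestal + gain) per event."""
    pedestal = np.full(events[0].shape, 100.0, dtype=np.float32)
    gain = np.full(events[0].shape, 1.5, dtype=np.float32)
    t0 = time.perf_counter()
    for raw in events:
        _ = (raw - pedestal) * gain   # stand-in for det.calib
    return (time.perf_counter() - t0) / len(events)

if __name__ == "__main__":
    events = prefetch(100)            # "100 events per core"
    print(f"{per_event_calib_time(events):.6f} s/event")
```

Run under `mpirun -n 1` vs `mpirun -n 80`, a loop like this is what exposes the per-event slowdown discussed below.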

To do this, get exclusive access to a batch node with this command:

srun --partition milano --account lcls:prjdat21 -n 1 --time=01:00:00 --exclusive --pty /bin/bash

Then in another window ssh to the node that you have been allocated
(the reason this is necessary is obscure: it has to do with an incompatible
version of pmix used by srun that our mpi is unhappy with).
In that new window run these two commands:

mpirun -n 1 python cxix1000021_scaling.py
mpirun -n 80 python cxix1000021_scaling.py

For me the first one shows about 0.11 s per event while the second shows about 0.25 s per event,
which is the poor scaling behavior we would like to understand.
You can see logfiles in ~cpo/problems/cxix1000021_scaling in interactive_1core.log and interactive_80core.log.
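For orientation, the quoted numbers work out as follows (plain arithmetic on the figures above, no new measurement):

```python
# Per-event times quoted above for the 1-core and 80-core runs.
t1, t80 = 0.11, 0.25          # seconds per event

slowdown = t80 / t1           # each core runs ~2.3x slower
agg_1 = 1 / t1                # ~9 events/s total on 1 core
agg_80 = 80 / t80             # 320 events/s total on 80 cores
efficiency = agg_80 / (80 * agg_1)   # ~0.44 parallel efficiency

print(slowdown, agg_80, efficiency)
```

So the 80-core run still gains aggregate throughput, but at less than half the parallel efficiency one would hope for.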

I think the fundamental question that I hope you could answer is:
do all numpy operations in det.calib slow down on 80 cores,
or is there some other cause for the poor scaling?
From discussions we've had with Yee we suspect the poor scaling may
be due to behavior of the hardware, but we don't know for sure.

Happy to discuss this with you on zoom at any time, of course.
Thanks for looking at this!

chris

p.s. following a suggestion by someone (Yee? Wilko?)
I watched the cpu clock speeds in /proc/cpuinfo while the 80 core job was running.
To my eye they seemed to vary between ~2.4 GHz and ~3.0 GHz, which didn't feel like enough variation to explain the slowdown.
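Rather than eyeballing /proc/cpuinfo, the clock spread can be pulled out numerically. A small helper (hypothetical, not part of the original script; note /proc/cpuinfo reports values in MHz, so ~2400-3000 corresponds to the ~2.4-3.0 GHz range):

```python
# Parse the per-core "cpu MHz" lines out of /proc/cpuinfo text so the
# spread of clock speeds can be watched numerically instead of by eye.
def cpu_mhz(cpuinfo_text):
    """Return the list of 'cpu MHz' values found in /proc/cpuinfo text."""
    return [float(line.split(":")[1])
            for line in cpuinfo_text.splitlines()
            if line.startswith("cpu MHz")]

# Usage on a live node while the 80-core job is running:
#   with open("/proc/cpuinfo") as f:
#       speeds = cpu_mhz(f.read())
#   print(min(speeds), max(speeds))
```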

Test environment and commands

List of commands for this test

> s3dflogin
> psana
srun --partition milano --account lcls:prjdat21 -n 1 --time=01:00:00 --exclusive --pty /bin/bash
Example:
ana-4.0.58-py3 [dubrovin@sdfiana001:~/LCLS/con-py3]$ srun --partition milano --account lcls:prjdat21 -n 1 --time=01:00:00 --exclusive --pty /bin/bash
srun: job 37602834 queued and waiting for resources
srun: job 37602834 has been allocated resources
In .bashrc on host sdfiana001
In .bash_login
ana-4.0.58-py3 [dubrovin@sdfmilan031:~/LCLS/con-py3]$

Grab the example from Chris:
~cpo/problems/cxix1000021_scaling/cxix1000021_scaling.py
and modify it as explained below.

In another window:
> s3dflogin
ssh -Y sdfmilan031
cd LCLS/con-py3
source /sdf/group/lcls/ds/ana/sw/conda1/manage/bin/psconda.sh
source /sdf/group/lcls/ds/ana/sw/conda1/manage/bin/setup_testrel

mpirun -n 1 python test-cxix1000021_scaling.py
mpirun -n 80 python test-cxix1000021_scaling.py

...