The setup for BLD testing is as follows:
The crate contains three application boards, in slots 3, 4, and 6. All of them are configured with the AmcCarrierTpr firmware. The linuxRT server contains six processors and runs three instances of the AmcCarrierTprTst IOC (bsssBld branch), one per application board. This IOC has BSSS/BLD/BSAS/BSA integrated and ready to go.
...
These results were obtained with camonitor; the true number of occurrences is equal to or larger than the reported count.
BLD EDEF configurations were as follows:
caput TST:SYS2:3:EDEF4:MULT_PORT 50000
caput TST:SYS2:3:EDEF4:MULT_ADDR 239.255.4.3
caput TST:SYS2:3:EDEF3:MULT_ADDR 239.255.4.3
caput TST:SYS2:3:EDEF3:MULT_PORT 50000
caput TST:SYS2:3:SCSBR:MULT_ADDR 239.255.4.3
caput TST:SYS2:3:SCSBR:MULT_PORT 50000
caput TST:SYS2:3:SCHBR:MULT_PORT 50000
caput TST:SYS2:3:SCHBR:MULT_ADDR 134.79.216.240
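To verify that BLD packets actually arrive on the configured multicast group, a minimal listener can join the group and port set above. This is a sketch, assuming a Linux host with a multicast-capable interface on the default route; the 9000-byte receive size is an assumption sized for jumbo frames, not a documented BLD packet size.

```python
import socket
import struct

GROUP, PORT = "239.255.4.3", 50000  # values from the caput settings above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on the default interface; this may fail on a
# host with no multicast-capable interface configured.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # no suitable interface; packets will not be received

# data, _ = sock.recvfrom(9000)  # uncomment to block waiting for one BLD packet
```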
When running all four EDEFs at the maximum possible rate (71.5 kHz), the BLD thread's CPU usage is as low as 4%. The IOC's total CPU usage is 40% when BLD is activated and 20% when BLD is disabled, so 20% can be attributed to BLD. While we are not sure, Kukhee and I presume that the remaining 16% (20% minus the thread's 4%) is probably spent in the kernel transmitting the data over UDP.
Of course, testing at a 1 MHz rate generates a lot of overruns, and we catch the BLD thread sleeping using ps (wchan is set to sock_alloc_send_pskb). We suspect that while the link is the bottleneck, the kernel spends a lot of time sending the UDP packets.
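The wchan observation above can be reproduced with a one-liner. This is a sketch for a procps-style ps on linuxRT; the thread name matched by the grep is an assumption and should be adjusted to the actual BLD thread name.

```shell
# List all threads with the kernel function they are sleeping in.
# A wchan of sock_alloc_send_pskb means the thread is blocked waiting
# for socket send-buffer space, i.e. the link cannot drain fast enough.
ps -eLo pid,tid,comm,wchan:32 | grep -i bld
```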
In the following table, all four EDEFs were enabled at 1 MHz:
# of channels | Estimated BLD required upstream bandwidth | Pass/Fail
---|---|---
3 | 736 Mbps | Pass
4 | 858 Mbps | Pass
5 | 981 Mbps | Fail
There is sufficient reason to believe that the network interface, not the CPU, is the bottleneck, since the failure occurs as the estimated bandwidth reaches 1 Gbps.
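The table's numbers can be sanity-checked with a linear extrapolation from the two passing rows. This is a sketch: the roughly 370 Mbps fixed overhead and roughly 122 Mbps per-channel cost are inferred from the table, not measured directly, and the 95% usable-capacity threshold is an assumption accounting for Ethernet/IP/UDP framing overhead.

```python
# Measured estimates from the table: channels -> required upstream Mbps
measured = {3: 736, 4: 858, 5: 981}

# Infer a linear model from the two passing rows
per_channel = measured[4] - measured[3]   # ~122 Mbps per additional channel
base = measured[3] - 3 * per_channel      # ~370 Mbps fixed overhead

def estimated_mbps(n_channels):
    return base + n_channels * per_channel

LINK_MBPS = 1000  # 1 GbE line rate; usable payload rate is a few percent lower
for n in sorted(measured):
    bw = estimated_mbps(n)
    status = "near/over link capacity" if bw > 0.95 * LINK_MBPS else "fits"
    print(f"{n} channels: ~{bw} Mbps ({status})")
```

Under this model the five-channel case lands right at the link's usable capacity, consistent with the observed failure.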