The setup for BLD testing is as follows:


The crate contains three application boards, in slots 3, 4, and 6, all configured with the AmcCarrierTpr firmware. The linuxRT server has 6 processors and runs three instances of the AmcCarrierTprTst IOC (bsssBld branch), one for each application board. This IOC has BSSS/BLD/BSAS/BSA integrated and ready to go.

...

These results were obtained with camonitor. Because camonitor can coalesce or drop updates, the true number of occurrences is equal to or larger than the reported count.
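
For reference, counts of this kind can be gathered by piping camonitor output through a line counter; this is only a sketch, since the exact invocation was not recorded here, and the PV name below is a placeholder rather than one of the actual test PVs.

Code Block
# Count monitor updates on a counter PV over a 60-second window.
# TST:SYS2:3:EXAMPLE:CNT is a placeholder name, not a real test PV.
timeout 60 camonitor TST:SYS2:3:EXAMPLE:CNT | wc -l
# camonitor can coalesce or drop updates under load, so the printed
# count is a lower bound on the true number of occurrences.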

BLD test with different numbers of channels

BLD EDEF configurations were as follows:

Code Block
# Set the destination address and port for each BLD service.
# 239.255.4.3 is a multicast group; SCHBR is pointed at a unicast host address.
caput TST:SYS2:3:EDEF3:MULT_ADDR 239.255.4.3
caput TST:SYS2:3:EDEF3:MULT_PORT 50000
caput TST:SYS2:3:EDEF4:MULT_ADDR 239.255.4.3
caput TST:SYS2:3:EDEF4:MULT_PORT 50000
caput TST:SYS2:3:SCSBR:MULT_ADDR 239.255.4.3
caput TST:SYS2:3:SCSBR:MULT_PORT 50000
caput TST:SYS2:3:SCHBR:MULT_ADDR 134.79.216.240
caput TST:SYS2:3:SCHBR:MULT_PORT 50000
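
To verify that the settings took effect, the same PVs can be read back; a quick sketch that simply mirrors the caput list above:

Code Block
# Read back the BLD destination settings for all four services.
for svc in EDEF3 EDEF4 SCSBR SCHBR; do
    caget TST:SYS2:3:${svc}:MULT_ADDR TST:SYS2:3:${svc}:MULT_PORT
done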

When running all four EDEFs at the maximum possible rate (71.5 kHz), the BLD thread's CPU usage is as low as 4%, while the IOC's total CPU usage is 40% with BLD activated and 20% with BLD disabled, so about 20% can be attributed to BLD. While we are not sure, Kukhee and I presume that the remaining 16% (20 - 4) is probably spent in the kernel transmitting the data over UDP.
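
The per-thread figures can be reproduced with standard Linux tools on the server; a sketch, where <ioc-pid> stands for the PID of the IOC under test:

Code Block
# One snapshot of per-thread CPU usage for the IOC (one line per thread),
# sorted so the busiest threads (e.g. the BLD thread) appear first.
ps -L -o lwp,pcpu,comm -p <ioc-pid> --sort=-pcpu

# Live view of the same information.
top -H -p <ioc-pid>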

Of course, testing at a 1 MHz rate generates a lot of overruns, and ps catches the BLD thread sleeping (wchan is set to sock_alloc_send_pskb). We suspect that the link is the bottleneck and that the kernel spends much of its time sending the UDP packets.
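
The wchan observation itself can be made as follows; again a sketch with a placeholder PID:

Code Block
# Show the kernel wait channel (wchan) for every thread of the IOC.
# A thread blocked in sock_alloc_send_pskb is waiting for socket
# send-buffer space, i.e. the UDP transmit path cannot keep up.
ps -L -o lwp,comm,wchan:32 -p <ioc-pid>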

In the following table, all four EDEFs were enabled at 1 MHz:

# of channels | Estimated BLD upstream bandwidth required | Pass/Fail
3             | 736 Mbps                                  | Pass
4             | 858 Mbps                                  | Pass
5             | 981 Mbps                                  | Fail

There is good reason to believe that the network interface, and not the CPU, is the bottleneck, since the failure occurs as the estimated bandwidth approaches the 1 Gbps link capacity.
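
One way to confirm this is to watch the server's transmit rate while the test runs; a minimal sketch reading the kernel's byte counters, where eth0 is an assumed interface name:

Code Block
# Print the transmit rate of the uplink once per second, in Mbps.
# "eth0" is an assumption; substitute the interface that carries BLD.
IFACE=eth0
prev=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
while sleep 1; do
    cur=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
    echo "$(( (cur - prev) * 8 / 1000000 )) Mbps"
    prev=$cur
done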