A snapshot of the FEE alcove DAQ cabling in late August 2022, taken to prepare for re-cabling to the SRCF.
FODU1 is the top FODU in the alcove (18 ports per cassette, except for the right-most cassette, which has 12). FODU2 is the lower one (18 ports per cassette). Cassette port numbers count 1-12 (or 1-18) from left to right across each row of 6, then down to the next row, like this:
1,2,3,4,5,6
7,8,9,10,11,12
13,14,15,16,17,18
Multi-fiber breakout-cable fiber numbers count from 1.
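As a quick cross-check of the cassette numbering convention above, here is a small sketch (Python; the function names are ours, not part of any DAQ tool) that converts between a 1-based port number and its row/column position on a cassette:

```python
def port_to_row_col(port, ports_per_cassette=18):
    """Map a 1-based cassette port number to 1-based (row, column).

    Ports count left to right across rows of 6, then down to the next row,
    so port 10 sits at row 2, column 4.
    """
    if not 1 <= port <= ports_per_cassette:
        raise ValueError(f"port must be in 1..{ports_per_cassette}")
    return (port - 1) // 6 + 1, (port - 1) % 6 + 1


def row_col_to_port(row, col):
    """Inverse mapping: 1-based (row, column) back to the port number."""
    return (row - 1) * 6 + col


assert port_to_row_col(10) == (2, 4)
assert row_col_to_port(3, 1) == 13   # first port of the third row on an 18-port cassette
```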
Other non-fiber connections to drp and ctl nodes:
- cmp010: xtcav copper network cable from atca switch
- cmp015: encoder copper network cable from atca switch
- ctl002: copper network cable from atca switch
- ctl002: evr card with encoder trigger, and timing fiber from 1,3,12
- network connection from encoder to atca switch (copper?)
Key questions:
- handling timing system connections
- two xpm fanouts in srcf, one for rix and one for tmo (currently there are two in the fee: one is the xtpg, the other is ). Eventually one for txi as well. These are needed for camlink-converter nodes and timing-system nodes, which will become hutch-specific.
- should we set up an xpm in room 208?
- maybe use the FEE alcove as a new test stand (we don't have physical proximity). Maybe eventually use a KCU as a small-xpm in lab3.
- summary:
- per-hutch fanouts in srcf (matt)
- hutch-master xpms in each hutch
- master xpm in 208 (matt)
- a teststand xpm in fee alcove (the currently unused xpm1)
- (lower priority) KCU small-xpm in lab3.
- tmo uses xpm 0,2,4. rix uses 3. xpm0 is the xtpg.
- mono encoder network
- I think it comes in on copper, so unless we have another optical network interface we would have to convert to fiber, run the fiber over to srcf, and convert back to copper there.
- xtcav sharing
- currently the timing for this comes from TMO xpm2 in the TMO hutch, which means it can't be used by RIX. We really need the BOS to switch the timing cable so xtcav can be used by both RIX and TMO.
- cable in SRCF for 120 Hz or 1 MHz
- my inclination is to cable for 1 MHz so we don't have to recable again for higher rates, although that uses up more resources and requires bigger cnf changes.
- a KCU has a theoretical limit of 8 GB/s for current PGP. Aim for 6 GB/s per node? (See the sizing sketch after this list.)
- ctl machines
- ctl002 is unusual: mono encoder tpr, matt's special network to the xpm switch
- tpr stuff needs to stay in FEE alcove because of encoder TTL trigger cable
- ctl002 hosts the hsd pva processes
- move the hsd pva processes to a mon/eb node in srcf or to TMO nodes? (Unlike the cmp nodes, these don't have KCUs.) A TMO node (e.g. daq-tmo-hsd-01) feels better. Maybe Ric can try when he is able? There is an existence proof with the peppex hsds.
- ric asks about control of atca crate in srcf:
- matt says the crate needs to be on the atca private network; he'll plug the srcf atca switch into that network.
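As a rough sizing aid for the 120 Hz vs 1 MHz question above, a back-of-the-envelope sketch (Python). The 8 GB/s PGP limit and 6 GB/s per-node target are the numbers quoted above; everything else is illustrative:

```python
# Numbers quoted above: ~8 GB/s theoretical per-KCU PGP limit, ~6 GB/s per-node target.
PGP_LIMIT_BYTES_PER_S = 8e9
NODE_TARGET_BYTES_PER_S = 6e9

def per_event_budget(trigger_rate_hz, node_bw=NODE_TARGET_BYTES_PER_S):
    """Bytes per event that one node can absorb at the given trigger rate."""
    return node_bw / trigger_rate_hz

# At 1 MHz each node can absorb ~6 kB/event; at 120 Hz, ~50 MB/event.
print(f"1 MHz : {per_event_budget(1e6) / 1e3:.0f} kB/event per node")
print(f"120 Hz: {per_event_budget(120) / 1e6:.1f} MB/event per node")
print(f"headroom from 6 GB/s target to 8 GB/s PGP limit: "
      f"{PGP_LIMIT_BYTES_PER_S / NODE_TARGET_BYTES_PER_S:.2f}x")
```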
drp-neh-cmpNNN | cnf Function | Left KCU Fibers (near USB port) (FODU#, Cassette#, Port#) | Right KCU fibers (away from USB port) |
---|---|---|---|
001 | tmo timing (timing, bld, epics, im2k0, im5k4) | None | 1,2,7 (datadev1) |
002 | rix timing (exs manta) | None | 1,2,8 (datadev1) |
003 | tmo ami | None | xpm0 |
004 | (in lab3?) | | |
005 | tmo opal (atm) | xpm0 | fiber 1: 1,2,1 |
006 | (in lab3?) | | |
007 | tmo/rix fims and user meb's for tmo/rix | fiber 4: 1,3,2 fiber 2: 1,3,1 fiber 1: 1,3,3 | fiber 2: 1,2,6 fiber 3: 1,2,5 |
008 | (in lab3?) | | |
009 | rix hsd_2, hsd_3 | 1,3,18 (datadev0) | 1,3,15 (datadev1) |
010 | timing (xtcav) | None | 1,3,14 (datadev1) |
011 | tmo opal (opal1) | xpm0 | 1,1,13 |
012 | tmo opal (opal2, fzp) | xpm0 | fiber 1 (data): 1,2,3 fiber 3 (data): 1,2,12 |
013 | rix opal (atm) | fiber 2 (timing): 1,3,6 | fiber 1 (data): 1,3,5 |
014 | tmo epix and rix teb | don't care | don't care |
015 | rix timing (timing, bld, andor_dir, andor_vls, epics, encoder mr2k1) | None | 1,2,10 (datadev1) |
016 | tmo teb | None | None |
017 | tmo hsd_3 | 1,1,11 (datadev0) | 1,1,9 (datadev1) |
018 | tmo hsd_5, hsd_7 and rix ami manager | 1,1,1 (datadev0) | 1,1,5 (datadev1) |
019 | tmo hsd_4 | 1,1,4 (datadev0) | 1,1,6 (datadev1) |
020 | tmo hsd_1, hsd_2 | 1,1,12 (datadev0) | 1,1,8 (datadev1) |
021 | tmo hsd_6 | 1,1,3 (datadev0) | 1,1,2 (datadev1) |
022 | tmo hsd_8, hsd_9 | 1,1,7 (datadev0) | 1,1,10 (datadev1) |
023 | rix hsd_0, hsd_1 | 1,3,8 (datadev0) | 1,3,9 (datadev1) |
024 | hsd_peppex_0 hsd_peppex_1 | 2,4,3 (datadev0) | 2,4,4 (datadev1) |
024 | tmo hsd_10, hsd_11 | 1,1,18 (datadev0?) | 1,1,16 (datadev1?) |
In addition, fee-xpm0 amc0 port 0 connects to xpm3 in rix via 1,2,4 (FODU, cassette, port).
FEE teststand
On March 1, 2024, cpo powered off these nodes to reduce heat load, since part of the Rittal rack cooling is broken:
/reg/common/tools/bin/psipmi drp-neh-cmp007 power off
/reg/common/tools/bin/psipmi drp-neh-cmp009 power off
/reg/common/tools/bin/psipmi drp-neh-cmp011 power off
/reg/common/tools/bin/psipmi drp-neh-cmp012 power off
/reg/common/tools/bin/psipmi drp-neh-cmp013 power off
/reg/common/tools/bin/psipmi drp-neh-cmp014 power off
/reg/common/tools/bin/psipmi drp-neh-cmp018 power off
/reg/common/tools/bin/psipmi drp-neh-cmp019 power off
/reg/common/tools/bin/psipmi drp-neh-cmp020 power off
/reg/common/tools/bin/psipmi drp-neh-cmp021 power off
/reg/common/tools/bin/psipmi drp-neh-cmp022 power off
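The same sequence can be scripted for the next time nodes have to be powered down (or back up); a minimal sketch in Python, assuming the psipmi path above works from the invoking host:

```python
import subprocess

PSIPMI = "/reg/common/tools/bin/psipmi"
# The nodes powered off on March 1 2024 (from the list above).
NODES = ["drp-neh-cmp007", "drp-neh-cmp009", "drp-neh-cmp011",
         "drp-neh-cmp012", "drp-neh-cmp013", "drp-neh-cmp014",
         "drp-neh-cmp018", "drp-neh-cmp019", "drp-neh-cmp020",
         "drp-neh-cmp021", "drp-neh-cmp022"]

def set_power(nodes, state):
    """Run 'psipmi <node> power <state>' for each node and report failures."""
    for node in nodes:
        result = subprocess.run([PSIPMI, node, "power", state])
        if result.returncode != 0:
            print(f"{node}: psipmi exited with {result.returncode}")

# Reproduce the power-off above; "on" should restore them, assuming psipmi
# accepts the usual IPMI power verbs.
set_power(NODES, "off")
```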
- document the function of each node
Result from cat /proc/datadev_0 on each node (a collection sketch follows the table):
node | Build String | Firmware | connected to | Comment |
---|---|---|---|---|
cmp001 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 0 | |
cmp002 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 1 | |
cmp003 | ClinkKcu1500Pgp2b: Vivado v2020.1, rdsrv302 (x86_64), Built Fri 24 Jul 2020 11:12:11 AM PDT by ruckman | 0x4090000 | epixM | |
cmp005 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | Camlink node | |
cmp007 | XilinxKcu1500Pgp4_6Gbps: Vivado v2023.1, rdsrv405 (Ubuntu 20.04.6 LTS), Built Thu 15 Feb 2024 07:21:18 PM PST by ruckman | 0x2050000 | Wave8 (Controls) | |
cmp009 | XilinxKcu1500Pgp4_6Gbps: Vivado v2023.1, rdsrv405 (Ubuntu 20.04.6 LTS), Built Thu 15 Feb 2024 07:21:18 PM PST by ruckman | 0x2050000 | Wave8 | |
cmp010 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 2 | |
cmp011 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | ||
cmp012 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | ||
cmp013 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | ||
cmp014 | Lcls2EpixHrXilinxKcu1500Pgp4_6Gbps: Vivado v2021.1, rdsrv303 (Ubuntu 20.04.3 LTS), Built Wed 08 Sep 2021 02:37:45 PM PDT by ruckman | 0x1020000 | ||
cmp015 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 3 | |
cmp017 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | ||
cmp018 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | ||
cmp019 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | ||
cmp020 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | ||
cmp021 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | ||
cmp022 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | ||
cmp023 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | HSD Readout | |
cmp024 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | Container HSD |
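A sketch of how the firmware inventory above could be re-collected (Python; assumes passwordless ssh from the DAQ network to each node, with the node list taken from the table):

```python
import subprocess

# Node numbers taken from the table above.
NODES = [f"drp-neh-cmp{n:03d}" for n in
         (1, 2, 3, 5, 7, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 24)]

def read_datadev(node, path="/proc/datadev_0"):
    """Return the contents of /proc/datadev_0 on 'node', or None if unreachable."""
    try:
        result = subprocess.run(["ssh", node, "cat", path],
                                capture_output=True, text=True, timeout=30)
    except subprocess.TimeoutExpired:
        return None
    return result.stdout if result.returncode == 0 else None

for node in NODES:
    text = read_datadev(node)
    # Print just the first line as a quick summary; the full text holds the
    # build string and firmware details shown in the table.
    summary = text.splitlines()[0] if text and text.strip() else "no response"
    print(f"{node}: {summary}")
```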
- resurrect XPM (Chris and Matt)
- connect atca network
- assign XPM name
- start a pyxpm process in /cds/group/pcds/dist/pds/tmo/scripts/neh-base.cnf
- ask Matt about XPM IP
- run a fiber from the FEE ATCA switch to room 208 ATCA switch (verify with Matt before doing it)
- Connect multi-mode timing fibers between the XPM and the appropriate nodes (drp-neh-cmpNNN, 001-024)
- Create and run a .cnf file with one timing system node (see the sketch after this list)
- Measure optical powers on fibers
- Remove all fibers
- coming from the FEE cmp nodes,
- from FEE XPM to patch panel
- make sure the weka file system is writable (ask Omar in case it is not)
- run multi-mode fiber from the XPM to each timing system node
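For the "create and run a .cnf file with one timing system node" step, a minimal sketch of what such a cnf might look like, assuming the usual procmgr-style cnf layout used elsewhere (e.g. neh-base.cnf). The host names, platform number, and all command-line arguments below are placeholders to be filled in (the XPM IP, for example, still has to come from Matt):

```python
# Hypothetical minimal teststand cnf -- one pyxpm entry and one timing-system
# entry.  cnf files are executed by procmgr as Python, with the entry keys
# (host, id, flags, cmd, ...) provided by procmgr.  Nothing below is a
# verified value; every argument is a placeholder.
if not platform: platform = '9'   # placeholder platform number

procmgr_config = [
    # pyxpm for the resurrected FEE teststand XPM (IP and arguments TBD with Matt)
    { host: 'drp-neh-ctl002', id: 'pyxpm-fee', flags: 's',
      cmd: 'pyxpm <XPM IP and other arguments -- TBD>' },

    # one timing-system node to exercise the new fiber (exact command TBD)
    { host: 'drp-neh-cmp001', id: 'timing_0', flags: 's',
      cmd: '<timing-system drp command and arguments -- TBD>' },
]
```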