
A snapshot of the FEE alcove DAQ cabling in late August 2022, taken to prepare for re-cabling to SRCF.

Fodu1 is the top fodu in the alcove (18 ports per cassette, except for the right-most cassette, which has 12).  Fodu2 is the lower one (18 ports per cassette).  Cassette port numbers count 1-12 (or 1-18) from left to right across a row of 6, then continue on the next row, like this:

1,2,3,4,5,6

7,8,9,10,11,12

13,14,15,16,17,18
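
The cassette port numbering can be written as a formula: port = (row - 1) * 6 + col, counting rows top-to-bottom and columns left-to-right.  A minimal shell sketch (the row/col values are just an example):

 row=3; col=2
 echo $(( (row - 1) * 6 + col ))   # prints 14, matching the grid above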

Multi-fiber breakout-cable fiber numbers count from 1.

Other non-fiber connections to drp and ctl nodes:

  • cmp010: xtcav copper network cable from atca switch
  • cmp015: encoder copper network cable from atca switch
  • ctl002: copper network cable from atca switch
  • ctl002: evr card with encoder trigger, and timing fiber from 1,3,12
  • network connection from encoder to atca switch (copper?)

Key questions:

  • handling timing system connections
    • two xpm fanouts in srcf, one for rix and one for tmo (currently we have two in the fee: one is the xtpg, one is ).  Eventually one for txi as well.  Needed for camlink-converter nodes and timing-system nodes; those nodes will become hutch-specific.
    • should we set up an xpm in room 208?
    • maybe use the FEE alcove as a new test stand (we don't have physical proximity).  Maybe eventually use a KCU as a small-xpm in lab3.
    • summary:
      1. per-hutch fanouts in srcf (matt)
      2. hutch-master xpms in each hutch
      3. master xpm in 208 (matt)
      4. a teststand xpm in fee alcove (the currently unused xpm1)
      5. (lower priority) KCU small-xpm in lab3.
    • tmo uses xpm 0,2,4.  rix uses 3.  xpm0 is the xtpg. 
  • mono encoder network
    • I think it comes in on copper, so unless we have another optical network interface we would have to convert it to fiber, run the fiber over to srcf, and convert back to copper there.
  • xtcav sharing
    • currently the timing for this comes from TMO xpm2 in the TMO hutch, which means it can't be used by RIX.  We really need the BOS to switch the timing cable so xtcav can be used by both RIX and TMO.
  • cable in SRCF for 120Hz or 1MHz
    • my inclination is to cable for 1MHz so we don't have to recable again for higher rates, although that uses up more resources and requires bigger cnf changes.
    • a KCU has a theoretical limit of 8GB/s for current PGP.  Aim for 6GB/s per node?  (See the bandwidth sketch after this list.)
  • ctl machines
    • ctl002 is unusual: mono encoder tpr, matt's special network to the xpm switch
      • tpr stuff needs to stay in FEE alcove because of encoder TTL trigger cable
    • ctl002 hosts the hsd pva processes
      • move the hsd pva processes to a mon/eb node in srcf or to TMO nodes (because these don't have KCUs, unlike cmp nodes)?  A TMO node (e.g. daq-tmo-hsd-01) feels better.  Maybe Ric can try when he is able?  The peppex hsds are an existence proof.
  • ric asks about control of atca crate in srcf:
    • matt says the crate needs to be on the atca private network; he'll plug the srcf atca switch into that network.
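
A rough per-event budget for the 1MHz case, using the numbers quoted above (8GB/s theoretical KCU limit, 6GB/s per-node target).  This is just back-of-the-envelope shell arithmetic; the variable names are illustrative:

 # Per-node event-size budget at 1MHz, taking 1 GB/s = 1e9 bytes/s.
 rate_hz=1000000          # 1MHz trigger rate
 node_gbps=6              # per-node target, below the ~8GB/s KCU/PGP limit
 echo "$(( node_gbps * 1000000000 / rate_hz )) bytes/event per node"   # -> 6000 bytes/event
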
drp-neh-cmpNNN | cnf function | Left KCU fibers (near USB port) (FODU#, Cassette#, Port#) | Right KCU fibers (away from USB port) (FODU#, Cassette#, Port#)
001 | tmo timing (timing, bld, epics, im2k0, im5k4) | None | 1,2,7 (datadev1)
002 | rix timing (exs manta) | None | 1,2,8 (datadev1)
003 | tmo ami | None | xpm0
004 | (in lab3?) | |
005 | tmo opal (atm) | xpm0 | fiber 1: 1,2,1
006 | (in lab3?) | |
007 | tmo/rix fims and user meb's for tmo/rix | fiber 4: 1,3,2; fiber 2: 1,3,1; fiber 1: 1,3,3 | fiber 2: 1,2,6; fiber 3: 1,2,5
008 | (in lab3?) | |
009 | rix hsd_2, hsd_3 | 1,3,18 (datadev0) | 1,3,15 (datadev1)
010 | timing (xtcav) | None | 1,3,14 (datadev1)
011 | tmo opal (opal1) | xpm0 | 1,1,13
012 | tmo opal (opal2, fzp) | xpm0 | fiber 1 (data): 1,2,3; fiber 3 (data): 1,2,12
013 | rix opal (atm) | fiber 2 (timing): 1,3,6 | fiber 1 (data): 1,3,5
014 | tmo epix and rix teb | don't care | don't care
015 | rix timing (timing, bld, andor_dir, andor_vls, epics, encoder mr2k1) | None | 1,2,10 (datadev1)
016 | tmo teb | None | None
017 | tmo hsd_3 | 1,1,11 (datadev0) | 1,1,9 (datadev1)
018 | tmo hsd_5, hsd_7 and rix ami manager | 1,1,1 (datadev0) | 1,1,5 (datadev1)
019 | tmo hsd_4 | 1,1,4 (datadev0) | 1,1,6 (datadev1)
020 | tmo hsd_1, hsd_2 | 1,1,12 (datadev0) | 1,1,8 (datadev1)
021 | tmo hsd_6 | 1,1,3 (datadev0) | 1,1,2 (datadev1)
022 | tmo hsd_8, hsd_9 | 1,1,7 (datadev0) | 1,1,10 (datadev1)
023 | rix hsd_0, hsd_1 | 1,3,8 (datadev0) | 1,3,9 (datadev1)
024 | hsd_peppex_0 hsd_peppex_1 | 2,4,3 (datadev0) | 2,4,4 (datadev1)
024 | tmo hsd_10, hsd_11 | 1,1,18 (datadev0?) | 1,1,16 (datadev1?)
fee-xpm0 amc0 port 0 | xpm3 in rix | 1,2,4


FEE teststand

June 5th, 2024 - drp-neh-cmp007  and drp-neh-cmp009 turned back on.

  • As a reminder - refer to Debugging DAQ#IPMI for power cycling with IPMI. The command-line tools leave the nodes off during a cycle.

On March 1, 2024, cpo powered off these nodes to reduce heat load, since part of the rittal rack cooling is broken:

 /reg/common/tools/bin/psipmi drp-neh-cmp007 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp009 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp011 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp012 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp013 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp014 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp018 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp019 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp020 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp021 power off
 /reg/common/tools/bin/psipmi drp-neh-cmp022 power off
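
To bring nodes back up afterwards (as was done for cmp007 and cmp009 above), the reverse operation with the same tool should work; a minimal sketch, assuming psipmi accepts "power on" and "power status" subcommands (check Debugging DAQ#IPMI first):

 for node in drp-neh-cmp007 drp-neh-cmp009; do
     /reg/common/tools/bin/psipmi $node power on       # assumed subcommand; verify
     /reg/common/tools/bin/psipmi $node power status   # confirm the node came back
 done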


  • document the function of each node
    result from cat /proc/datadev_0 (see the table below; a collection sketch follows it)
node | Build String | Firmware | connected to | Comment
cmp001 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 0 |
cmp002 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 1 |
cmp003 | ClinkKcu1500Pgp2b: Vivado v2020.1, rdsrv302 (x86_64), Built Fri 24 Jul 2020 11:12:11 AM PDT by ruckman | 0x4090000 | | epixM
cmp005 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | | Camlink node
cmp007 | XilinxKcu1500Pgp4_6Gbps: Vivado v2023.1, rdsrv405 (Ubuntu 20.04.6 LTS), Built Thu 15 Feb 2024 07:21:18 PM PST by ruckman | 0x2050000 | | Wave8 (Controls)
cmp009 | XilinxKcu1500Pgp4_6Gbps: Vivado v2023.1, rdsrv405 (Ubuntu 20.04.6 LTS), Built Thu 15 Feb 2024 07:21:18 PM PST by ruckman | 0x2050000 | | Wave8
cmp010 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 2 |
cmp011 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | |
cmp012 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | |
cmp013 | ClinkKcu1500Pgp2b: Vivado v2021.2, rdsrv317 (Ubuntu 20.04.3 LTS), Built Thu 02 Dec 2021 01:31:54 PM PST by ruckman | 0x7110000 | |
cmp014 | Lcls2EpixHrXilinxKcu1500Pgp4_6Gbps: Vivado v2021.1, rdsrv303 (Ubuntu 20.04.3 LTS), Built Wed 08 Sep 2021 02:37:45 PM PDT by ruckman | 0x1020000 | |
cmp015 | DrpTDet: Vivado v2019.1, rdsrv301 (x86_64), Built Fri 12 Jun 2020 10:00:33 AM PDT by weaver | 0x9 | AMC1 port 3 |
cmp017 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | |
cmp018 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | |
cmp019 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | |
cmp020 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | |
cmp021 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | |
cmp022 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | |
cmp023 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | | HSD Readout
cmp024 | DrpPgpIlv: Vivado v2019.1, rdsrv302 (x86_64), Built Sun 14 Jun 2020 05:50:49 PM PDT by weaver | 0x6 | | Container HSD
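
One way to (re)collect the build strings above is to loop over the nodes and read /proc/datadev_0; a sketch, assuming passwordless ssh to the drp nodes:

 # Dump the datadev build string from each node (node list from the table above).
 for n in 001 002 003 005 007 009 010 011 012 013 014 015 017 018 019 020 021 022 023 024; do
     echo "=== drp-neh-cmp$n ==="
     ssh drp-neh-cmp$n cat /proc/datadev_0
 done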


  • resurrect XPM (Chris and Matt)
    • connect atca network
    • assign XPM name
    • start a pyxpm process in /cds/group/pcds/dist/pds/tmo/scripts/neh-base.cnf
    • ask Matt about XPM IP 
    • run a fiber from the FEE ATCA switch to room 208 ATCA switch (verify with Matt before doing it)
  • Connect multi-mode timing fibers between the XPM and the appropriate nodes (drp-neh-cmpNNN, 001-024)
  • Create and run a .cnf file with one timing system node
  • Measure optical powers on fibers
  • Remove all fibers
    • coming from the FEE cmp nodes
    • from the FEE XPM to the patch panel
  • make sure the weka file system is writable (Omar, in case it is not; see the sketch after this list)
  • run multi-mode fiber from the XPM to each timing system node
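
A quick check that the weka file system is writable (sketch; WEKA_DIR is a placeholder, substitute the actual mount point):

 WEKA_DIR=/path/to/weka/mount            # placeholder path
 testfile=$WEKA_DIR/.write_test_$$
 if touch "$testfile" 2>/dev/null; then
     echo "weka is writable"; rm -f "$testfile"
 else
     echo "weka is NOT writable -- contact Omar"
 fi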