To do list:

  • Add the ZYNQ based option

 

 

Item# | Name        | Description                     | Power (W)/card | Cost ($)/card | Note
------|-------------|---------------------------------|----------------|---------------|-----
1     | FPGA        | XCKU040-2FFVA1156E              | 15             | 885           | Based on the LCLS-II production discount from Avnet.com
2     | DDR4        | 16GB SODIMM, MTA16ATF2G64HZ-2G3 | 2.1            | 191           | Based on www.memory4less.com; power estimated using the Micron DDR4 power calculator
3     | SSD         | SSD 960 EVO NVMe M.2 250GB      | 5              | 130           | Based on www.samsung.com
4     | Optics      | WIB QSFP and Timing SFP         | 2.4            | 100           | Based on fs.com QSFP-SR4-40G transceivers @ 10GbE
5     | Misc.       | Misc. components                | 3              | 250           | Estimate of support components
6     | Fabrication | Fabrication of PCB              | N/A            | 500           | Estimate based on experience for large quantities
7     | Assembly    | Assembly of parts               | N/A            | 250           | Estimate based on experience for large quantities
      |             | Total                           | 27.5 W/card    | $2.306k/card  |
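
As a quick cross-check of the per-card totals, here is a minimal Python sketch that just re-sums the table entries (the names and figures are taken directly from the table; nothing else is assumed):

```python
# Back-of-envelope cross-check of the per-card totals in the parts table above.
# (name, power_W, cost_usd); fabrication and assembly contribute cost only.
bom = [
    ("FPGA XCKU040-2FFVA1156E", 15.0, 885),
    ("DDR4 16GB SODIMM",         2.1, 191),
    ("SSD 960 EVO NVMe M.2",     5.0, 130),
    ("Optics (QSFP + SFP)",      2.4, 100),
    ("Misc. components",         3.0, 250),
    ("PCB fabrication",          0.0, 500),
    ("Assembly",                 0.0, 250),
]

total_power_w = sum(p for _, p, _ in bom)        # 27.5 W/card
total_cost_k  = sum(c for _, _, c in bom) / 1e3  # $2.306k/card
print(f"Total power: {total_power_w} W/card, total cost: ${total_cost_k:.3f}k/card")
```
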
Type                              | kLUTs/RCE | kFFs/RCE | DDR/RCE | BRAM/RCE | URAM/RCE | DSP48/RCE | SSD Bandwidth      | Note
----------------------------------|-----------|----------|---------|----------|----------|-----------|--------------------|-----
XC7Z045-2FFG900E (proto-DUNE)     | 218       | 437.2    | 0GB     | 19.2Mb   | 0Mb      | 900       | 0GB/s              | 2x RCEs per DPM
XCZU15EG-1FFVB1156E (Oxford DPM)  | 341       | 683      | 32GB    | 26.2Mb   | 31.5Mb   | 3,528     | 1.4GB/s (PCIe 2x4) | 1x RCE per DPM
XCZU6EG-1FFVC900E (ATLAS DPM US+) | 215       | 429      | 8GB     | 25.1Mb   | 0Mb      | 1,973     | 1.4GB/s (PCIe 2x4) | 2x RCEs per DPM
XCKU040-2FFVA1156E (DUNE PCIe)    | 242       | 485      | 16GB    | 21.1Mb   | 0Mb      | 1,920     | 1.9GB/s (PCIe 3x4) | 1x FPGA per PCIe card

 

 

  • Assume "IF" we can do compression, then factor of 3 or better compression factor
  • Use the same FPGA that's used in LCLS-II to get a big discount
  • Assumes channel aggregation at the WIB
    • 320 channels per link @ 10 Gbps (64/66 encoding)
    • 2 links per PCIe (no compression)
    • 4 links per PCIe (with compression)
  • If "passive WIB" option (1Gbps/link), then replace the QSFPs with SNAP12 from reflexphotonics
    • Use the SELECTIOs instead of GTs in this slow rate condition
  • This FPGA has 20 GT channels (lane budget cross-checked in the sketch after this list)
    • 1x lane for FPGA-based 10 GbE
    • 3x lanes for the SLI interface
    • 4x lanes for the QSFP
    • 8x lanes for the PC's PCIe interface
    • 4x lanes for the M.2 PCIe interface
  • Support up to 3x PCIe interfaces (only 2 required)
  • Assuming timing network only requires LVDS I/O (same as proto-DUNE)
  • Number of PCIe cards per APA
    • 4x PCIe cards per APA (no compression)
    • 2x PCIe cards per APA (with compression)
  • Use an SLI interface to create a full-mesh point-to-point network for 1 APA
  • Can support up to 12x PCIe slots per server
  • Reliability and Hot-Swappability
    • The power supplies and fans are the first things that we expect to fail
    • 8 Hot-Swap 92mm cooling fans
      • 4 x 2 configuration
    • 2000W Redundant (2+2) Power Supplies; Platinum Level (94%+)
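
The link counts and GT lane budget above can be cross-checked with a minimal Python sketch. The 8-links-per-APA figure is not stated explicitly; it is inferred from the card counts (4 cards x 2 links = 2 cards x 4 links), and the dictionary keys are just labels for the bullets above:

```python
# Consistency sketch for the WIB link aggregation and GT lane budget described above.
LINK_RATE_GBPS = 10   # per WIB link, 64b/66b encoded
LINKS_PER_APA  = 8    # inferred: 4 cards x 2 links = 2 cards x 4 links

cards_per_apa = {
    "no compression":   LINKS_PER_APA // 2,   # 2 links per card -> 4 cards
    "with compression": LINKS_PER_APA // 4,   # 4 links per card -> 2 cards
}

# GT transceiver allocation on the XCKU040 (20 GT channels total).
gt_lanes = {
    "FPGA-based 10GbE":   1,
    "SLI interface":      3,
    "QSFP (WIB links)":   4,
    "PC PCIe interface":  8,
    "M.2 PCIe interface": 4,
}
assert sum(gt_lanes.values()) == 20, "all 20 GT channels should be accounted for"

print(cards_per_apa)   # {'no compression': 4, 'with compression': 2}
```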

  • Backend communication through two 10GbE ports (4028GR-TRT version)
  • Standard 1GbE IPMI interface
  • No fans on the FPGA for deployment servers
    • Cooling through the server's redundant fans
    • Up to 200 LFM airflow
    • Use ATS-P1-152-C1-R0 on the FPGA, which will have no air blockage from the QSFP/SFP
      • Heat sink: 35 mm x 35 mm x 35 mm

      • PCIe gap is 40.64 mm between slots (designed for double-wide GPUs).
      • Picture of heatsink

    • Thermal resistance @ forced air flow: 3.29°C/W @ 100 LFM (see the thermal sketch after this list)
      • FPGA junction temperature would increase by ~50°C (15W x 3.29°C/W)
      • FPGA operational temperature: 0°C ~ 100°C (TJ)
      • This means the max. operational ambient temperature would be 50°C
        • Expecting 25°C nominal ambient temperature
        • 25°C of design margin @ 100 LFM (up to 200 LFM on the fans)
  • For development test stands that use a standard PC, a fan will be required
    • Because standard PCs typically do not force air across the PCIe slots
    • This PCIe card design will need to support both configurations as loading options
  • Server's operating temperature: 10°C to 35°C (50°F to 95°F)
    • Recommend rack cooling if the underground rack-room temperature is not controlled
    • The ATCA design would likewise require rack cooling if the underground rack-room temperature is not controlled
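
A minimal Python sketch of the thermal estimate above; like the bullets, it uses only the heatsink's 3.29°C/W figure and ignores the junction-to-case and case-to-sink contributions:

```python
# Thermal margin sketch for the ATS-P1-152-C1-R0 heatsink choice (figures from the bullets above).
fpga_power_w      = 15.0   # FPGA dissipation (W), from the parts table
theta_c_per_w     = 3.29   # heatsink thermal resistance @ 100 LFM (degC/W)
tj_max_c          = 100.0  # FPGA max junction temperature (degC)
ambient_nominal_c = 25.0   # expected nominal ambient (degC)

temp_rise_c   = fpga_power_w * theta_c_per_w       # ~49.4 degC, quoted as ~50 degC above
ambient_max_c = tj_max_c - temp_rise_c             # ~50.7 degC max allowed ambient
margin_c      = ambient_max_c - ambient_nominal_c  # ~25.7 degC of design margin @ 100 LFM

print(f"rise ~{temp_rise_c:.1f} C, max ambient ~{ambient_max_c:.1f} C, margin ~{margin_c:.1f} C")
```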

Without Compression

  • Power
    • Power per server: 630W = (300W/server) + (12 PCIeCards/server) x (27.5W/PCIeCard)
    • Power per system: 31.5kW = (50 servers/system) x (630W/server)
  • Rack Space
    • Space per server: 4u = (7" height) x (1u/1.75")
    • Space per system: 200u = (50 servers/system) x (4u/server)
  • Cost (worked out in the sizing sketch after this list)
    • Cost per server: $32.922k = ($5.25k/server) + (12 PCIeCards/server) x ($2.306k/PCIeCard)
    • Cost per system: $1.65M = (50 servers/system) x ($32.922k/server)
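
A minimal Python sizing sketch using the per-card figures from the parts table and the $5.25k server cost used above. The with-compression case is assumed to simply halve the server count to 25, consistent with the summary table below:

```python
# System sizing sketch; all inputs are taken from the tables/bullets above.
SERVER_BASE_POWER_W = 300.0   # server power without PCIe cards
CARD_POWER_W        = 27.5    # per PCIe card, from the parts table
SERVER_COST_K       = 5.25    # $k per server, value used in the calculation above
CARD_COST_K         = 2.306   # $k per PCIe card, from the parts table
CARDS_PER_SERVER    = 12
SERVER_HEIGHT_U     = 4       # 7" tall / 1.75" per U

def system_totals(servers):
    """Return per-system power (kW), rack space (u) and cost ($M)."""
    power_per_server_w = SERVER_BASE_POWER_W + CARDS_PER_SERVER * CARD_POWER_W  # 630 W
    cost_per_server_k  = SERVER_COST_K + CARDS_PER_SERVER * CARD_COST_K         # $32.922k
    return {
        "power_kW": servers * power_per_server_w / 1e3,
        "space_u":  servers * SERVER_HEIGHT_U,
        "cost_M":   servers * cost_per_server_k / 1e3,
    }

print("no compression:  ", system_totals(50))   # ~31.5 kW, 200u, ~$1.65M
print("with compression:", system_totals(25))   # assumed half the servers: ~15.75 kW, 100u, ~$0.82M
```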

Summary:

Item             | Without compression | With compression
-----------------|---------------------|-----------------
Total Power      | 31.5kW              | 15.75kW
Total Rack Space | 200u                | 100u
Total Cost       | $1.65M              | $0.825M
