Current status

Currently SuperCDMS is taking data in Soudan, Minnesota. Commissioning will be done soon, after which data taking will continue for a couple of years.

Goals:

  • Match the XENON100 DM limit
  • Prove that the iZIP design works and can be used for SNOLAB

SLAC CDMS activities:

  • MC production using the SRS pipeline:
    • 100 cores allocated to CDMS:
      • Routinely get a lot more and depend on that (added 02.17.2012: we depend on spikes of >>100 cores; the DC allocation of 100 is sufficient)
      • Revived capability to run at SMU (1200-1500 cores)
    • The SLAC CDMS person running the pipeline is moving to DOE in April:
      • Working on streamlining the production process (Detector MC code).
      • The goal is to have (some) external collaborators, along with (some) SLAC group members, able to run it.
  • MC data stored at SLAC:
    • Trying to introduce the Data Catalog
    • Final MC data are copied over to local institutions
    • Large intermediate outputs stay at SLAC, to save CPU in case of reprocessing (a sketch of this placement policy follows this list)
  • May store Soudan data at SLAC:
    • Size determined by calibration samples. TBD.
  • To ROOT or to Matlab:
    • Note that while the data are processed and stored in ROOT format, they are mostly analyzed using Matlab on the Stanford campus CDMS analysis cluster.
    • Matlab is currently too expensive for SLAC, as we don't get the edu discount ($5k vs $100 for a single general licence). We currently have a few individual licences.
    • There is no reason it has to be this way (Fermilab gets the edu discount). General Matlab licences at SLAC would be great. Not heard back from Teri about this yet.
  • SCA support:
    • A tiny fraction of Tony's time for the occasional pipeline debugging (like getting the pipeline running at SMU)
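
A minimal sketch of the intended MC data placement is below. This is purely illustrative: the paths, the "final_" naming convention, and the register_in_catalog() helper are hypothetical stand-ins (the real Data Catalog interface is not shown), but it captures the policy above: final outputs are shipped to institutions while large intermediates stay on SLAC disk.

    import shutil
    import sqlite3
    from pathlib import Path

    SLAC_MC_AREA = Path("/nfs/slac/cdms/mc")         # hypothetical SLAC disk area
    EXPORT_AREA = Path("/nfs/slac/cdms/mc-export")   # staging area for institutions

    def register_in_catalog(db, path, kind):
        """Record a dataset in a minimal stand-in for the Data Catalog."""
        db.execute("INSERT INTO datasets (path, kind) VALUES (?, ?)",
                   (str(path), kind))

    def place_outputs(run_dir, db):
        for f in run_dir.glob("*.root"):
            if f.name.startswith("final_"):          # naming convention assumed
                dest = EXPORT_AREA / f.name
                shutil.copy2(f, dest)                # final data: copy for institutions
                register_in_catalog(db, dest, "final")
            else:                                    # intermediates: stay at SLAC
                register_in_catalog(db, f, "intermediate")

    if __name__ == "__main__":
        db = sqlite3.connect("catalog.db")
        db.execute("CREATE TABLE IF NOT EXISTS datasets (path TEXT, kind TEXT)")
        for run in sorted(SLAC_MC_AREA.glob("run_*")):
            place_outputs(run, db)
        db.commit()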

SNOLAB

There is a SNOLAB Software R&D Working Group with me as coordinator.

  • Will start up when Soudan commissioning is done, i.e. soon.
  • Main goals are to look at scalability and automation (many things are currently done by hand, which may not scale well with many detectors)

A lot of the work will be "internal", i.e. evaluating what we have, since there is a lot of legacy code and legacy habits.

In addition, since CDMS is a small experiment with few software professionals and modest computing and software needs, I think it makes sense to avoid developing new things as much as possible and instead see what is available (from SCA) and adapt anything useful/needed. For manpower, this means continuous internal work plus more limited expert help from SCA (e.g. installing a database or helping to adapt a build tool).

SCA related things to consider:

  • Software releases & build tools (but very modest needs)
  • Data storage (Data Catalog)
  • Data skimmer (a sketch follows this list)
  • Data processing (test facilities + SNOLAB) (pipeline)
  • Analysis cluster: use SLAC? Or a hybrid solution with SLAC (data storage + skimmer) plus the Stanford campus cluster?
  • Databases: Conditions, calibrations?
  • Collaboration tools
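
For the data skimmer, a minimal sketch of the sort of tool we would need is below, assuming ROOT-format event data read with PyROOT. The tree name ("events"), branch name ("recoilEnergy"), file names, and the 10 keV cut are all hypothetical; a real skimmer would take these from configuration.

    import ROOT

    fin = ROOT.TFile.Open("cdms_run.root", "READ")
    events = fin.Get("events")            # input event tree (name assumed)

    fout = ROOT.TFile("cdms_run_skim.root", "RECREATE")
    skim = events.CloneTree(0)            # empty tree with identical structure

    for event in events:                  # loop over all input events
        if event.recoilEnergy > 10.0:     # keep only events passing the cut
            skim.Fill()

    fout.Write()
    fout.Close()
    fin.Close()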

In short:

  • We will be happy if we can adapt many of the existing SCA tools. The main 'worry' is therefore long-term SCA support of these tools (and internal acceptance of them).