This meeting is triggered by two things: the expected operation of Fermi beyond its initial 10-year goal, and the sharp reduction in DOE support at SLAC when that happens. The FSSC is already engaged in Mission Planning. The goals of the meeting are:

  1. identify shortfalls in effort from the DOE ramp down at SLAC
  2. bring the LAT, FSSC and HEASARC efforts closer together and eliminate duplications if possible
  3. plan for the long haul - shore up support and determine what modernization, if any, is appropriate for long-term support
  4. smooth out any existing wrinkles in the FSSC's takeover of Mission Planning

Proposed date: week of Feb 6

Possible Session Blocks (see Scope below):

  • Overview of SAS-related activities
    • scope of support
    • already known areas of shortfall
  • Mission Planning tagup (Date/Time: Monday Feb 6, 10AM - 12noon. Location: TBD).
    • optimization of ongoing transition
    • walkthrough/demo of weekly planning
    • review of documentation
  • ISOC software
  • Code development infrastructure
    • python distributions
    • repository
    • release management 
    • code distribution
  • Science Tools
    • external packages
    • fermiPy support
  • L1 processing & halfpipe 
    • operational support
    • system tests, new version validation and approval
    • reprocessing 
  • VMs, containers
    • RHEL6 is the end of the road for GR
    • automatic creation of VMs (do we need containers too?)
  • Data Issues
    • change datasets delivered to FSSC?
    • are data servers at SLAC still needed?
    • change of storage model at SLAC?
  • GlastRelease futures
    • GR code development
    • any issues with long-term support of ROOT files, etc.?

Scope:

  • ISOC/Ops/Planning software
    • Robin, Jerry and Elizabeth will be at SLAC and can hopefully resolve some of the connectivity issues we’re having
    • Face-to-face training for planning process
    • Decide on a timeline to perform shadow operations
    • Discuss leap second process since no shadowing occurred this time
  • SAS development
    • Incorporating more python
    • Removing external packages
    • Repositories, Issue Tracking, Release Managers, etc.
  • Data retention
    • Discuss sending critical data sets (like electron data) to GSFC for long-term retention
    • Future reprocessing
    • Potential change of storage model at SLAC (more tape reliant)
  • L1 Pipeline
    • Monitoring & issue resolution
    • Test process as updates are required
  • Misc items
    • All systems - update for new OSs? Or maintain current on VMs?
    • Data server - only at GSFC?
    • GLEAM (aka reconstruction chain)
    • System Tests
    • Investigating conda and conda-forge as a distribution channel for the STs (see the sketch after this list)
    • Using docker for software distribution
    • Universal linux binaries for Science Tools (i.e. distribution agnostic)
    • Cloud-based CI services (Travis, CircleCI) for compiling and/or testing
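
A rough sketch of what the conda item above might look like from a user's point of view (the channel and package names here are hypothetical; no such release exists yet):

    # hypothetical: create an environment with the Science Tools and fermiPy
    # pulled from a conda channel instead of the release tarballs
    conda create -n fermi -c conda-forge fermitools fermipy
    source activate fermi

Part of the appeal is that conda would resolve the python and external-package dependencies we currently build and carry ourselves, which ties in with the SAS development items above.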

Meeting process:

- This is a working meeting, not a series of presentations: allow lots of time for breakouts and spread topics across multiple days.
- Use a Slack channel for discussion. Make a new channel. Name?
- List the people being lost and their tasks, and the people still available. Map tasks to names as appropriate.
- Coordinate the meeting through the existing DRSC list, which seems to hold most of the important members (add people as needed), or through the software mailing list.

Attendees Pool:


Richard Dubois, Elizabeth Ferrara, Robin Corbet, Jerry Bonnell, Julie McEnery, Jeremy Perkins, HEASARC representative, Matt Wood, Eric Charles, Nicola Omedei, Giacomo Vianello, Troy Porter, Jim Chiang, Steve Tether, Heather Kelly, Tom Glanzman, Warren Focke, Tony Johnson, Brian Van Klaveren, Max Turri, Charlotte Hee, Maria-Elena Monzani, Seth Digel, Regina Caputo, Alex Reustle, Joe Asercion, Don Horner, Mattia DiMauro, Simone Maldera, Elisabetta Cavazzuti, Michael Kuss, Samuel Viscapi, Joanne Bogart, Gregg Thayer, Rob Cameron, Luca Baldini, new FSSC software hire (?), Leon Rochester, Tracy Usher


6 Comments

  1. Do we currently have a time agenda for topics/meetings?  

    HEASARC folks can call in Tuesday or Wednesday (ideally Tuesday, if possible); they just need to know when to be on the phone.

  2. Also, where do FSSC people need to be on Monday?  Are we meeting at a specific place/time on the SLAC campus or is this TBD?

    1. I've updated the planning page with room reservations. I suggest we meet in SUSB Tulare at 9 on Monday to kick off.

    2. Found this map on the site.

      https://vue.slac.stanford.edu/slac-visitor-map

      SUSB is B53, right by the entrance. Richard says Tulare is on the 4th floor.

  3. Unknown User (sviscapi)

    Hi all,

    I did some experiments with Docker and the Science Tools months ago.

    I'm attaching a Dockerfile to this post, which should hopefully build a valid CentOS 7 container with the latest ST and Heasoft installed.

    Dockerfile

    I hope I'll be able to attend (remotely) the talks next week.

    Cheers, Sam

    1. I've also been experimenting recently with running the STs in a docker container.  I've created a set of docker images based on the SLAC RHEL6 builds using a CentOS 6 container.  The Dockerfiles are maintained here:

      https://github.com/fermiPy/fermipy/tree/master/docker

      And instructions for installing/running the docker images are here:

      http://fermipy.readthedocs.io/en/latest/install.html#installing-with-docker

      There is a fermipy group on docker hub that has an automated image build with both the STs and an up-to-date installation of anaconda python:

      https://hub.docker.com/r/fermipy/fermipy/

      I think docker could work well as a fallback for people who are unable to use the standard release tarballs.  These can also be used to run the STs on OSes that we don't explicitly support, like Windows 10; Toby says he was able to get the fermipy image to work on his own Windows machine with a few tweaks.  

      The only real downsides I've found in using docker are 1) slightly worse performance on OSX vs. native builds (on OSX the docker images actually run in a virtual machine) and 2) a somewhat complicated setup to enable interactive graphics.  Docker also requires a fairly recent version of the Linux kernel, so it's not an option for people using older Linux distros like RHEL6.  There's also currently no docker support at SLAC, which makes it a little difficult to test/build images.
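
      For reference, a minimal sketch of how one of these images might be run interactively (the /workdir mount and X11 options below are assumptions and may differ from what the fermipy images actually expect):

        # pull the image mentioned above from Docker Hub
        docker pull fermipy/fermipy

        # run interactively with the current directory mounted at /workdir;
        # the DISPLAY/X11 options illustrate the extra setup needed for
        # interactive graphics on Linux and can be dropped for batch use
        docker run -it --rm \
            -v $PWD:/workdir -w /workdir \
            -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
            fermipy/fermipy /bin/bash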