Blog from June, 2013

Five staff members within Scientific Computing Services attended ITIL (Information Technology Infrastructure Library) training this week. The ITIL framework of best practices for information technology helps us improve the planning and delivery of our computing support services for the lab.

Scientific Computing Services updated our public Confluence page, adding metrics, recent accomplishments, and current and planned activities. This information provides our scientific computing customers with greater insight into our workload, tasks, and status.

Scientific Computing Services has 3560 machines under configuration management, an increase of 3.2% over the previous month. This increase is primarily in batch systems, which provide additional support for scientific computing at the lab.

Scientific Computing Services has added 200 tapes to our tape libraries, providing more than a petabyte of tape storage for our LCLS customers.

Scientific Computing Services has completed the initial tuning of the PPA cluster hardware for parallel computation. Test runs included 256-core and 1024-core jobs using OpenMPI on the 40Gb/sec Infiniband network. All 2900 compute cores will be made available to the general queues in addition to a high priority MPI queue. This tuning has improved the overall performance of the cluster for scientific computing and research.
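A test run of the kind described above would typically be submitted through the batch system with a job script along these lines. This is an illustrative sketch only: the queue name, output file names, and application binary are hypothetical, not SLAC's actual configuration.

```
#!/bin/bash
# Illustrative LSF submission script for a 256-core OpenMPI job.
# Queue name "mpiq" and binary "./my_app" are hypothetical examples.
#BSUB -n 256          # total number of cores requested
#BSUB -q mpiq         # high-priority MPI queue (name assumed)
#BSUB -o mpi_%J.out   # stdout; %J expands to the job ID
#BSUB -e mpi_%J.err   # stderr

# OpenMPI built with LSF support discovers the allocated hosts
# itself, so no explicit hostfile is needed here.
mpirun ./my_app
```

The script would be submitted with `bsub < job.sh`, and LSF places the ranks across the InfiniBand-connected hosts.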

Cyber Safety (critical to keep all Lab computing services in operation)

2013/04/12: Scientific Computing Services developed automated tools for reviewing accounts with elevated privileges, and established a process for performing this review at regular intervals. In response to a DOE finding, 170 tickets were created to review and approve privileged accounts. This supports the Cyber Safety program at SLAC and meets the DOE deadline for this security requirement.
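The core of such a review tool is just flagging privileged accounts whose last review is older than the required interval. A minimal sketch follows; the record format and the 90-day interval are assumptions for illustration, not the actual SLAC tooling.

```python
from datetime import date, timedelta

# Hypothetical record format: (username, privilege, date last reviewed).
ACCOUNTS = [
    ("alice", "sudo:ALL", date(2013, 1, 10)),
    ("bob",   "sudo:ALL", date(2013, 4, 1)),
]

REVIEW_INTERVAL = timedelta(days=90)  # assumed review interval

def accounts_due_for_review(accounts, today):
    """Return (user, privilege) pairs overdue for review."""
    return [(user, priv) for user, priv, reviewed in accounts
            if today - reviewed > REVIEW_INTERVAL]

# Each overdue account would become one review/approval ticket.
print(accounts_due_for_review(ACCOUNTS, date(2013, 6, 1)))
# → [('alice', 'sudo:ALL')]
```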

Scientific Computing Services responded quickly to the April 9 power fluctuation and temporary chilled water loss that impacted services for research computing. In addition, SCS revised the documentation and processes surrounding emergency response to such an event. This enhances our ability to provide continuity of services for the lab.

Cyber Safety

Scientific Computing Services held a joint meeting with the Cyber Safety team to summarize our framework for Unix system management, continuous monitoring and reporting. This review provided fundamental information that the Cyber Safety team can use in preparation for the upcoming IG Audit scheduled for the end of May.

Scientific Computing Services implemented Nagios monitoring and alerting for AFS quotas. This is currently being used by Fermi and MCC, but is available for any interested group. This service enables customers to be informed about their AFS quota before it reaches its maximum limit, thereby providing time to take corrective action and minimize the potential for quota-related computing problems.
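At its core, such a check compares a volume's used blocks against its quota and alerts above a threshold, in the usual Nagios plugin style. The sketch below parses sample output modeled on AFS `fs listquota`; the exact field layout and the 85% warning threshold are assumptions.

```python
# Minimal sketch of an AFS quota check in the spirit of a Nagios plugin.
# SAMPLE mimics `fs listquota` output; the field layout is assumed.
SAMPLE = """\
Volume Name                    Quota       Used %Used   Partition
u.alice                      5000000    4600000   92%          63%
"""

WARN_PERCENT = 85  # assumed warning threshold

def check_quota(listquota_output, warn=WARN_PERCENT):
    """Return (status, message): 0 = OK, 1 = WARNING, Nagios-style."""
    fields = listquota_output.splitlines()[1].split()
    volume, quota, used = fields[0], int(fields[1]), int(fields[2])
    percent = 100.0 * used / quota
    if percent >= warn:
        return 1, "WARNING: %s at %.0f%% of quota" % (volume, percent)
    return 0, "OK: %s at %.0f%% of quota" % (volume, percent)

print(check_quota(SAMPLE))
# → (1, 'WARNING: u.alice at 92% of quota')
```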

Scientific Computing Services is working with IBM to give a presentation on the new features of LSF 9.1 to scientific computing customers. Along with the presentation, SCS staff will provide an overview of our use of MPI applications in our cluster environment. This interaction will improve understanding between IBM and SLAC regarding the use of LSF and clarify features that would be valuable in this software product.

The new PPA bullet cluster (~2900 cores) is now available to all SLAC Unix users via the production batch system. This introduced the capability of selecting a newer release of the RedHat operating system. Scientific Computing Services worked with key customer groups including Fermi, KIPAC and EXO in order to minimize disruption to their production environments and ensure the cluster will support parallel and single-core jobs.
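With more than one operating system release in a single batch system, users typically steer a job to the release they need with an LSF resource requirement string. A hedged example follows; the resource names `rhel60` and `rhel50` are illustrative, and local resource names may differ.

```
# Request a host running the newer OS release (resource name assumed).
bsub -R "select[rhel60]" -n 1 ./my_job

# Or pin a parallel job to the older release during a transition.
bsub -R "select[rhel50]" -n 64 mpirun ./my_app
```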

Cyber Safety

Scientific Computing Services provided responses, documentation and artifacts for the IG audit questions regarding Unix infrastructure for Configuration Management, Identity and Access Management, and Remote Access Management. This is in support of our Cyber Safety program at SLAC and prepares us for the audit that will occur at the end of May.

Cyber Safety

Scientific Computing Services revised documentation for UNIX tape backup policies, procedures and scheduling in response to a request for information for the IG Audit. This supports the Cyber Safety program at SLAC.

LCLS users reported that they were unable to access various files stored on a 1PB Lustre filesystem. Scientific Computing Services diagnosed the problem and ran utilities to repair file system inconsistencies, restoring access to users' files.

Cyber Safety

Scientific Computing Services applied a mitigation for a very serious security vulnerability which affected 1,042 managed Red Hat Enterprise Linux 6 hosts. SCS applied this mitigation using central configuration management within hours of learning of the vulnerability, thereby preventing a published exploit, which was actively compromising systems on the internet, from affecting the SLAC network and impacting scientific computing resources.

Following an unexpected power outage on Thursday, May 30th, Scientific Computing Services restored services within 4 hours of the return of power and chilled water to Building 50. SCS also responded to the failure of a controller in the PCDS/LCLS Lustre storage system, returning it to service by Friday evening. The restoration of services enabled the Scientific Computing community to continue with their experiments and programs.

Scientific Computing Services worked with Datacenter Technical Coordinators to modernize the server management infrastructure in Building 50. New server installations no longer require obsolete serial communications hardware. This will reduce cost overheads and shorten the amount of time required for initial system setup and deployment.

Cyber Safety

Scientific Computing Services responded to requests from the visiting KPMG team related to Unix accounts, elevated privileges, security, system management, logging, monitoring, and the process for handling changes. This provided the IG Audit review team with information and substantiation of how SLAC handles centrally managed systems and services.

The Scientific Computing Services storage team contacted NERSC and Vanderbilt University to gather information about their General Parallel File System (GPFS) deployments. This allows us to learn from their experience as we prepare our own deployment for SLAC scientific customers.

Scientific Computing Services worked with Fermi, Atlas, and BaBar to reallocate 10,000 shares from each group, providing a total of 30,000 shares to the Theory group on a temporary basis. A special queue has been set up with parameters that enable Theory to use the shares more intensively than the regular queues would allow. This will help the Theory group prepare for the Snowmass meeting at the end of the month.
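In LSF, a temporary reallocation like this is typically expressed as fairshare shares on a dedicated queue in the `lsb.queues` configuration. A hedged sketch of what such an entry might look like follows; the queue name, priority, run limit, and group name are illustrative, with only the 30,000-share total taken from the description above.

```
Begin Queue
QUEUE_NAME   = theory-snowmass                 # dedicated queue (name assumed)
PRIORITY     = 50                              # illustrative value
FAIRSHARE    = USER_SHARES[[theory, 30000]]    # shares loaned by Fermi, Atlas, BaBar
RUNLIMIT     = 48:00                           # illustrative run limit
DESCRIPTION  = Temporary high-throughput queue for the Theory group
End Queue
```

Removing the queue entry (and returning the shares to the lending groups' configuration) undoes the loan when the temporary arrangement ends.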

Scientific Computing Services has been working with Networking to test link aggregation for critical servers. The testing is complete and SCS will begin rolling out this networking protocol to other crucial servers over the next few months. In the event that one of the networking connections fails, this strategy provides network redundancy and increases the availability of critical services for the lab.
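On Red Hat family servers of that era, link aggregation is usually configured as a bonded interface. A hedged sketch of the configuration follows; device names, addresses, and the bonding mode are illustrative, not SLAC's actual settings.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative values)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
# active-backup provides pure redundancy; 802.3ad (LACP) also
# aggregates bandwidth if the switch supports it.
BONDING_OPTS="mode=active-backup miimon=100"
```

Each physical interface then carries `MASTER=bond0` and `SLAVE=yes` in its own ifcfg file, so traffic fails over to the surviving link if one connection is lost.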

Issues:

Scientific Computing Services is still dealing with fallout from the May 30 power outage. Approximately 20 batch machines are down due to hardware issues that developed as a result of the sudden power loss.

Scientific Computing Services continues to work on LCLS/PCDS storage problems following the May 30 power outage. A hardware RAID controller failed and may be responsible for corrupting one of the 1PB Lustre file systems. Repair work is underway and the file system is currently offline.