Blog from September, 2013

Scientific Computing Services has provided ongoing Drupal Unix operating system support, including installing and managing PHP, installing two additional Drupal virtual machines, reconfiguring IP networking for 10 existing Drupal hosts, and installing and verifying configuration management infrastructure to ensure identical Drupal installations. Drupal is part of the Lab's Web Intranet Portal initiative.

The PPA Lustre filesystem's usable capacity will be doubled from ~170TB to ~340TB. Servers will be relocated and connected to the bullet cluster via 40Gb/s InfiniBand.

The filesystem upgrade is complete: https://confluence.slac.stanford.edu/display/SCSPub/PPA+Lustre+filesystem+2014+upgrade

A purchase requisition has been created for the compute cluster expansion: 1,648 additional cores with InfiniBand. The networking hardware has arrived and is being installed.

The expansion is complete and the new cores are now in production.


RT462268 Storage for MCC

Ordered 240TB storage configuration for MCC

Storage server with ~28TB has been ordered for ACD

Scientific Computing Services has deployed new interactive login Virtual Machines for KIPAC. These VMs are a lifecycle replacement for an older pool of machines and provide customers with more network bandwidth and compute power.

Scientific Computing Services completed the Unix infrastructure sections of the Quarterly FISMA Data Call. SCS staff modified some of the existing reporting tools, providing more readable reports and records for these data calls and improving the general auditing process.

Scientific Computing Services has acquired the hardware and software for a development GPFS parallel file system.  Installation and setup of this development environment will begin this month, with GPFS software testing to commence soon thereafter.  This will enable SCS team members to learn how to use GPFS to manage future disk storage for SLAC's scientific community. 

Scientific Computing Services completed its work to support the IPv6 project. This included adding IPv6 configuration support to the configuration management system and adding IPv6 support to SLAC's outgoing DNS servers.  This moves SLAC toward compliance with the DOE mandate for IPv6 readiness.
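To illustrate the kind of change involved (a hypothetical BIND-style sketch, not SLAC's actual DNS configuration; hostnames and addresses are placeholders using the IPv6 documentation prefix), serving DNS over IPv6 requires both an IPv6 listener and AAAA records for the names being published:

```
// named.conf (sketch): accept DNS queries arriving over IPv6
options {
    listen-on-v6 { any; };
};

// zone file (sketch): publish an IPv6 address alongside the IPv4 A record
www    IN  A     192.0.2.80
www    IN  AAAA  2001:db8::80
```

The configuration management work mentioned above is what makes changes like these consistent and repeatable across many hosts.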

A member of Scientific Computing Services has completed work to integrate webauth with Windows desktop single sign-on for the Drupal project. This feature will be deployed on September 9th, enabling a properly configured browser to use the desktop's Kerberos credentials to access webauth-protected pages without requiring the user to re-type their username and password at the webauth login screen.
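As an illustration of what "properly configured" means (this is a generic SPNEGO sketch, not SLAC's documented setup; the domain value is a placeholder), a browser such as Firefox is told which servers it may send Negotiate/Kerberos authentication to:

```
// Firefox about:config -- hypothetical values for illustration only
// Allow SPNEGO (Kerberos) negotiation with hosts in this domain:
network.negotiate-auth.trusted-uris = .slac.stanford.edu
```

With a valid Kerberos ticket obtained at desktop login, the browser can then answer the server's `WWW-Authenticate: Negotiate` challenge automatically, so no login form is shown.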

Scientific Computing Services recently modified the LSF batch configuration to improve the scheduling of parallel MPI jobs that may request all of the CPU cores on one or more hosts. This was an issue on the PPA-funded "bullet" cluster, which provides compute cycles for both single-slot and parallel MPI jobs. SCS is also working with the batch system vendor (IBM) to leverage features that may improve the batch MPI service for scientific computing customers.
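For context on what such a job looks like (a sketch only; the queue name, core counts, and application are placeholders, not the production configuration), an MPI job that wants whole hosts is typically submitted to LSF with a span requirement and the exclusive flag:

```shell
# Hypothetical LSF submission for illustration.
# Request 32 slots packed 16 per host (i.e., two whole 16-core nodes),
# and run exclusively (-x) so no other jobs share those hosts.
bsub -q mpi -n 32 -R "span[ptile=16]" -x mpirun ./my_mpi_app
```

Jobs like this can starve behind streams of single-slot work unless the scheduler is configured to reserve slots for them, which is the scheduling behavior the configuration change addresses.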

Several system administrators from Scientific Computing Services attended an Intel presentation on the Lustre parallel file system. The discussion about the types of applications that are suited to Lustre and best practices for storage hardware configurations was beneficial for our support of the more than 3 petabytes of SLAC scientific data stored on Lustre servers.

Hardware for the SCS GPFS development filesystem has arrived in Building 50. Hardware installation is finished, and the system is ready for GPFS development and test work.

330TB Fermi xrootd server (FERMI-XRD012) is installed and in production.

Ordered new KIPAC servers that will host interactive login VMs. Installation is complete and VMs are in production.

KI-NFS05 is now in production.

Installation is complete. (Jul 17)

The new database server (EXODB01) has been installed and is in production. The compute server is online and hosting EXO VMs. The storage server (EXOSERV05) is in production.