Scientific Computing Services has upgraded two clusters in the general fairshare batch queues to RHEL6-64. Outbound TCP connections from these two clusters (dole and kiso) have also been enabled. This allows ATLAS and other experiments to run computational jobs on their required operating system, and it also permits those jobs to access large volumes of data outside of SLAC.
Scientific Computing Services successfully migrated its High Performance Storage System (HPSS) databases from raw disk partitions to file system partitions. Future HPSS software upgrades will make file system partitions mandatory for database storage, so completing this migration now keeps SLAC ready for those planned changes.
Scientific Computing Services completed the migration to LSF (Load Sharing Facility) 9.1, the latest version of its batch job management software. The upgrade was done with assistance from our science customers and from Neal Adams in the Platform group within the Computing Division. This release of LSF provides many new features of interest to the scientific computing community.
Scientific Computing Services recently installed a new GPU compute server for SSRL. The system includes an NVIDIA 'Kepler' GPU with 2,496 CUDA cores and the CUDA programming environment. This hardware configuration could form the standard for a larger GPU cluster that would address needs expressed by other customers. The batch compute system migration to LSF 9.1 will also provide better integration and support for GPUs for the scientific community.
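For readers new to LSF, a batch job of the kind described above is typically submitted with a short job script. The sketch below is illustrative only: the queue name, GPU resource string, and slot counts are assumptions, not SLAC's actual configuration, and GPU resource names in particular vary by site setup.

```shell
#!/bin/bash
# Sketch of an LSF job script. All #BSUB directives below use
# hypothetical names; consult your site's LSF configuration.
#BSUB -q general                   # fairshare batch queue (hypothetical name)
#BSUB -n 4                         # request four job slots
#BSUB -R "rusage[ngpus_shared=1]"  # GPU request; resource name is site-dependent
#BSUB -o job.%J.out                # stdout log; %J expands to the LSF job ID
#BSUB -e job.%J.err                # stderr log

# The job's actual workload goes here.
echo "Running on $(hostname) with ${LSB_DJOB_NUMPROC:-1} slots"
```

The script would be submitted with `bsub < jobscript.sh`; because the `#BSUB` lines are shell comments, the same file also runs unmodified outside the batch system for testing.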