10-26-2021: As we migrate to SDF and decommission old hardware and RHEL6, we will no longer actively update the fairshares on this page; some of the major stakeholders no longer use the LSF batch system at large scale.

The Shared (General) Farm consists of several physical clusters that are available to all SLAC users. The cluster hardware was purchased incrementally over several years by various stakeholder groups. Each physical cluster is based on a specific hardware model, but all hosts in the farm run 64-bit RHEL/CentOS 6. The LSF "general" queues feed users' jobs to the shared farm. The stakeholders have associated LSF user groups with a fairshare (scheduling priority) that reflects their cluster investment. This ensures that stakeholders always get some runtime on the cluster when utilization is high.

Users that are not members of stakeholder groups can still run jobs "for free": a superset fairshare group "AllUsers" includes all SLAC users. A non-stakeholder must compete for priority with all other users running jobs on the shared farm. This may be acceptable for some, but production environments may demand priority scheduling. The free "AllUsers" fairshare is effectively subsidized by the paying stakeholders. A stakeholder's fairshare value is derived from the compute power it has purchased: an HS06 CPU benchmark is calculated for each cluster server model.
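The fairshare arithmetic described above can be sketched as follows (a minimal illustration, not the production tooling; the function names are hypothetical, and the figures come from the tables below):

```python
# Sketch of how fairshare values are derived from purchased compute power:
# each purchase contributes batch_slots * HS06_per_slot, and a group's
# scheduling weight is its HS06 total as a fraction of the farm-wide HS06.

def cluster_hs06(batch_slots: float, hs06_per_slot: float) -> float:
    """Total HS06 contributed by one cluster purchase."""
    return batch_slots * hs06_per_slot

def percent_share(group_hs06: float, farm_hs06: float) -> float:
    """A group's HS06 shares as a percentage of the whole farm."""
    return 100.0 * group_hs06 / farm_hs06

# Example: the hequ/ATLAS purchase, 312 batch slots at 15.01 HS06 per slot.
print(round(cluster_hs06(312, 15.01), 2))      # 4683.12

# Example: atlasgrp's 37875 HS06 shares against the farm total of 107241
# (the sum of the HS06_SHARES column in the second table).
print(round(percent_share(37875, 107241), 2))  # 35.32
```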


Stakeholder Investments

 


| Cluster   | # of hosts | # cores/host | # of cores | # batch slots | HS06/slot | HS06     | Owner/Group | Purchased date | Notes |
|-----------|------------|--------------|------------|---------------|-----------|----------|-------------|----------------|-------|
| fell      | -          | -            | -          | -             | -         | -        | -           | 2007           | IGNORE - obsolete hardware in run-to-fail mode |
| hequ      | 39         | 8            | 312        | 312           | 15.01     | 4683.12  | ATLAS       | 9/23/2009      | RUN-to-FAIL. RHEL6 |
| hequ      | 76         | 8            | 608        | 608           | 15.01     | 9126.08  | Fermi       | 9/23/2009      | RUN-to-FAIL. RHEL6 |
| hequ      | 77         | 8            | 616        | 616           | 15.01     | 9246.16  | BaBar       | 9/23/2009      | RUN-to-FAIL. RHEL6 |
| dole      | 38         | 12           | 456        | 456           | 13.77     | 6279.12  | ATLAS       | 10/19/2010     | RUN-to-FAIL. RHEL6 |
| kiso      | 68         | 24           | 1632       | 1360          | 9.97      | 13559.2  | ATLAS       | 9/23/2011      | RUN-to-FAIL. CentOS7, HT enabled, 20 slots per host |
| bullet    | 77.5       | 16           | 1240       | 1240          | 16.05     | 19902    | PPA         | 12/5/2012      | RUN-to-FAIL. RHEL6, for MPI use - do not map to fairshare |
| bullet    | 76.25      | 16           | 1220       | 1220          | 16.05     | 19581    | Fermi       | 12/5/2012      | RUN-to-FAIL. RHEL6 |
| bullet    | 17.75      | 16           | 284        | 284           | 16.05     | 4558.2   | Geant       | 12/5/2012      | RUN-to-FAIL. RHEL6 |
| bullet    | 47.25      | 16           | 756        | 756           | 16.05     | 12133.8  | ATLAS       | 2/13/2014      | RUN-to-FAIL. RHEL6 |
| bullet    | 49.25      | 16           | 788        | 788           | 16.05     | 12647.4  | PPA         | 2/13/2014      | RUN-to-FAIL. RHEL6 |
| bullet    | 19.5       | 16           | 312        | 312           | 16.05     | 5007.6   | Theory      | 2/13/2014      | RUN-to-FAIL. RHEL6 |
| bullet    | 32.5       | 16           | 520        | 520           | 16.05     | 8346     | Beamphysics | 10/2/2014      | RUN-to-FAIL. RHEL6, for MPI use - do not map to fairshare |
| deft(i)   | 22         | 24           | 528        | 528           | 14.97     | 7904     | ATLAS       | 1/1/2016       | CentOS7, HT disabled; two purchases, 11/2015 and 1/2016 |
| deft(i)   | 7          | 24           | 168        | 168           | 14.97     | 2514.96  | Fermi       | 6/1/2016       | CentOS7 |
| bubble(i) | 8          | 36           | 288        | 288           | 21.64     | 6232     | Fermi       | 3/2018 (?)     | CentOS7, HT disabled |
| bubble(i) | 12         | 36           | 432        | 432           | 21.64     | 9348.48  | Beamphysics | 9/2018         | CentOS7. Priority for Beamphysics MPI use - do not map to fairshare |

...

| USER/GROUP | HS06_SHARES | %HS06_SHARES | HS06_OWNER |
|------------|-------------|--------------|------------|
| atlasgrp   | 37875       | 35.32%       | ATLAS      |
| babarAll   | 7859        | 7.33%        | BaBar      |
| glastdata  | 854         | 0.80%        | Fermi      |
| glastusers | 25320       | 23.61%       | Fermi      |
| glastgrp   | 366         | 0.34%        | Fermi      |
| geantgrp   | 3874        | 3.61%        | Geant      |
| luxlz      | 3500        | 3.26%        | PPA others |
| cdmsdata   | 2000        | 1.86%        | PPA others |
| lcdprodgrp | 1100        | 1.03%        | PPA others |
| exoprodgrp | 1500        | 1.40%        | PPA others |
| hpsprodgrp | 1000        | 0.93%        | PPA others |
| rpgrp      | 500         | 0.47%        | PPA others |
| lcd        | 600         | 0.56%        | PPA others |
| exousergrp | 550         | 0.51%        | PPA others |
| rdgrp      | 0           | 0.00%        | PPA others |
| theorygrp  | 4257        | 3.97%        | Theory     |
| All Users  | 16086       | 15.00%       | Everyone (15% tax) |
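Shares like these are typically applied in LSF through a FAIRSHARE definition on the queue. A hypothetical lsb.queues fragment is sketched below; the queue name and the group list shown are illustrative only, not the actual production configuration:

```
Begin Queue
QUEUE_NAME = general
FAIRSHARE  = USER_SHARES[[atlasgrp, 37875] [glastusers, 25320] [allusers, 16086]]
End Queue
```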

...

This effort paves the way for a possible chargeback model, in which the Computing Division would lease cluster hardware and charge stakeholders a rate for their fairshares.