
The KIPAC community includes researchers from Stanford physics and SLAC National Accelerator Laboratory along with many outside collaborators.  Access to and effective use of computational resources are central to the research taking place.  This collection of pages is intended to provide guidance and information to users of our computing resources, particularly new users.  These resources include parallel computing clusters, large-memory machines, parallel file systems, and visualization capabilities, as well as central software installations, licensed packages, network disk storage, and backup.  Let's get started:

Basics

SLAC computing

Research Computing at Stanford's SRCF


Basics

  • People:

  • Accounts: The first step is getting an account.

    • SLAC: Some affiliation is required, either as an employee or as a SLAC "user".
      • Martha Siegel is the main contact for new accounts or changes to existing accounts.  It is best to contact Martha before completing the next steps.
      • If you are not an employee, use the registration form on the SLAC Users Organization main page to obtain an ID number.  This step generally takes up to one day.
      • With your SLAC ID number (SID) in hand, apply for SLAC computing accounts using this form.
      • unix account: This is the main account you need for most work.
      • windows account: You will also need this if you end up accessing internal web pages, etc.  Most users should get both windows and unix accounts; at this time the two are completely separate.
      • email: Generally ask (on the form) to have your email forwarded to your primary (offsite) account unless you are a SLAC employee.
    • Stanford: Affiliation required.  Your Stanford ID works across all official Stanford resources.
  • Getting help:
  • Wireless at SLAC:

    • eduroam or visitor: both should work; an absence of more than two weeks requires re-accepting the terms and conditions via the web page.

    • Note: conference rooms will have (in the future) local LANs supporting peer-to-peer communication and the use of "Chromecast" devices for wireless display.

      • Kavli 3rd floor: use ESSID/WPA2 B051R305-AV/kipac305, then choose the front-port input, which has a Chromecast.

    • printing from wireless (using CUPS, as on Mac or Linux; a command-line example follows this list)

      • Find out the name and location of the printer and printer queue you want to use (usually labeled on the printer somewhere)

      • Bring up your printer configuration application and enter:

        • Device URI: lpd://printserv.slac.stanford.edu/<queue-name>

        • Choose the printer type from the application menu or choose "generic postscript" if the actual model is not listed.

        • Enter fields for a description and location

      • If you can't find the mapping from the printer name/location to the queue name, send email to unix-admin and ask.  SLAC computing should make this information trivial to find, so don't feel bad about asking.
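
If you prefer to set the queue up from a terminal, the same configuration can be done with the standard CUPS lpadmin command on Linux or a Mac.  This is only a sketch: "kavli-queue" is a made-up local name, <queue-name> is the SLAC queue name obtained as described above, and the generic-PostScript driver path can vary between CUPS versions.

% lpadmin -p kavli-queue -E -v lpd://printserv.slac.stanford.edu/<queue-name> \
      -m drv:///sample.drv/generic.ppd -D "Kavli 3rd floor printer" -L "Kavli, 3rd floor"
% lpstat -p kavli-queue      # confirm the queue exists and is enabled
% lp -d kavli-queue test.ps  # send a test job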

SLAC computing
  • AFS: home directories and various system-wide uses (openAFS)
    • Limited size: starts at 2 GB but more can be requested; ask me when needed, or use self service.
    • Well backed up, with self-recovery from the path “~/.backup” (see the command examples after this list).
    • Recommended for code, papers, figures, and individual files, but NOT for data or any files used by batch jobs.
  • NFS: Our principal storage type
    • Path is generally /nfs/slac/g/ki/<partition>/<username or group>
    • Backed up, but recovering files requires admin assistance.
    • Contact me for an allocation; these are generally hundreds of GB to several TB.
    • Newest systems are GPFS with additional features.
      • Native path: /gpfs/slac/kipac/fs1/(u or g)/(username or group) should give best performance on batch jobs
      • NFS path: /nfs/slac/kipac/fs1/(u or g)/(username or group) should work everywhere
  • Lustre: Parallel file system for parallel batch jobs
    • If you need high throughput on parallel batch jobs this is the appropriate space to use.
  • Software
    • /afs/slac/g/ki/software/<package> has many packages intended for general use at kipac. (/afs/slac/g/ki/software/local/bin in PATH)
    • Mathematica network license for ~7 concurrent users. See me about how to use it remotely.
    • IDL network license for ~13 concurrent users. See me about remote usage, and about using it in batch.
    • Python is widely used.  You can start with a vanilla installation by SLAC in /usr/local/bin, which is based on the anaconda distribution.  KIPAC maintains a more extensive anaconda installation in /afs/slac/g/ki/software/anaconda/x86_64-2.7/bin (see the command examples after this list).  Please see me if you need an update or an additional package.  Also see https://www.continuum.io/learn-more-about-anaconda.
    • Other packages occasionally obtained for individual uses.
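
A few illustrative commands for the storage and software items above (referenced from the AFS and Python bullets).  The file names are placeholders, and the PATH line assumes a bash-style shell; adjust to your own setup.

% fs listquota ~                  # check your AFS home-directory quota (openAFS fs command)
% cp ~/.backup/paper/draft.tex ~/paper/draft.tex    # recover yesterday's copy of a file from the backup volume
% export PATH=/afs/slac/g/ki/software/anaconda/x86_64-2.7/bin:$PATH    # put the KIPAC anaconda python first
% which python                    # confirm which python is now picked up
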
Research computing at SRCF

Heavily parallel KIPAC research computing is migrating to the Sherlock Cluster at the SRCF.

  • For access to the cluster contact Stuart Marshall
  • http://sherlock.stanford.edu/mediawiki/index.php/Main_Page
    • ~120 general nodes of 16-core Intel with InfiniBand, Lustre FS, etc.
    • ~440 nodes in the "iric" partition for SUNCAT, KIPAC, SIMES, & PULSE, totaling ~7000 cores (use -p iric in batch submission)
    • 1 large memory node with 48 cores, 1.5 TB memory (sh-25-23)
    • SLURM batch management (see docs; a minimal example batch script follows the node summary below)
    • Ongoing storage expansion with KIPAC share increasing to ~800TB Lustre space (mid February)
    • Data transfer via globus, bbcp, scp.
  • Summary of iric nodes:
% sinfo -p iric -o '%5D %4c %6z %8m %60f'
NODES CPUS S:C:T  MEMORY   AVAIL_FEATURES
11    16   2:8:1  64000    CPU_IVY,E5-2640v2,2.00GHz,GPU_KPL,TITAN_BLACK,titanblack
8     16   2:8:1  128000   CPU_HSW,E5-2640v3,2.60GHz,GPU_MXW,TITAN_X,titanx
130   16   2:8:1  64000    CPU_IVY,E5-2650v2,2.60GHz,NOACCL,NOACCL
1     48   4:12:1 1550000  CPU_HSW,E7-4830v3,2.10GHz,NOACCL,NOACCL
293   16   2:8:1  64000    CPU_HSW,E5-2640v3,2.60GHz,NOACCL,NOACCL
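
Below is a minimal SLURM batch-script sketch for the iric partition, as referenced in the list above.  The job name, node/core counts, time limit, and executable are placeholders; consult the Sherlock documentation for current limits and recommended settings.

#!/bin/bash
#SBATCH --partition=iric          # the "-p iric" partition described above
#SBATCH --job-name=my_sim         # placeholder job name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16      # general iric nodes have 16 cores (see sinfo output above)
#SBATCH --time=04:00:00           # wall-clock limit, HH:MM:SS
#SBATCH --output=my_sim-%j.out    # %j expands to the job ID

srun ./my_parallel_code           # placeholder executable

Save the script as, e.g., my_sim.sbatch, submit it with "sbatch my_sim.sbatch", and monitor it with "squeue -u $USER".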
