Introduction

SLAC hosts a shared analysis computing facility for US ATLAS members. The facility provides CPU, disk space, and software tools to support both Grid-based and non-Grid-based physics analysis activities.

Getting started: obtaining a SLAC computer account

This information is for users who want direct access to SLAC computers, as opposed to accessing SLAC computing resources via the Grid. The steps listed here may take days to complete, so plan early.

Register as a SLAC user and obtain a SLAC computer account

Please refer to the instructions at https://atlas.slac.stanford.edu/atlas-support-center, under "USER INFORMATION".

Subscribe to e-group

Please go to CERN e-groups, search for atlas-us-slac-acf, and subscribe to it. We will use this e-group for announcements and for user discussion specific to the SLAC-ACF. If you do not have a CERN account to subscribe to this e-group yourself, please e-mail yangw@slac.stanford.edu to have your email address added to the group.

Login to SLAC

SLAC provides a pool of login nodes with CVMFS and Grid tools. You can access them by ssh to rhel6-64.slac.stanford.edu. Assuming your Unix shell is /bin/bash, you may use the following as a template for your $HOME/.bashrc file:

# setup ATLAS environment
export RUCIO_ACCOUNT="change_me"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh --quiet
localSetupRucioClients --quiet
...

Type the "alias" command to see additional "localSetupXXXX" commands.
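For example, a first session might look like the following (the username "jdoe" is a placeholder, and voms-proxy-init assumes you have a grid certificate installed under ~/.globus):

```shell
# Log in to a SLAC login node (replace "jdoe" with your SLAC username):
ssh jdoe@rhel6-64.slac.stanford.edu

# With the .bashrc template above in place, the Rucio client is already
# set up. Obtain a grid proxy before using Rucio or other Grid tools
# (assumes a grid certificate is installed under ~/.globus):
voms-proxy-init -voms atlas

# Verify that your RUCIO_ACCOUNT setting is picked up:
rucio whoami
```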

JupyterLab

A JupyterLab environment is also available at http://jupyter.slac.stanford.edu. Log in with your SLAC Unix account. You can choose any of the JupyterLab environments available from that page (some of them allow access to GPU resources), though only the ATLAS image mounts your GPFS home directory and data directory. The ATLAS image provides PyROOT and JupyROOT (ROOT C++) notebooks with the capability to access remote data via the Xrootd (root) protocol or the WebDAV (http) protocol. It also includes several machine learning software packages and allows storing notebooks to Google Drive for portability.


Remote X window access

Please refer to SLAC's FastX page for detailed instructions.

Disk space

SLAC provides two personal storage spaces to new ATLAS users: a home directory and a data directory, both on GPFS.

Some users already have computer accounts at SLAC. Their home directories may be on AFS, and they may not have data directories on GPFS. Those users can request to have their home directories moved to GPFS and data directories created for them by sending e-mail to yangw@slac.stanford.edu.

SLAC also provides Xrootd-based storage that is managed by Rucio. The data are read-only to users. However, it is possible to use R2D2 to request that ATLAS datasets be transferred to that space.
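Once a dataset has been transferred, it can be inspected and copied to your data directory with the standard Rucio client. A sketch (the dataset identifier below is purely illustrative; find real dataset identifiers via R2D2 or "rucio list-dids"):

```shell
# List the files in a dataset (hypothetical DID shown):
rucio list-files data15_13TeV:data15_13TeV.SomeExample.DAOD

# Download the dataset to a local directory:
rucio download data15_13TeV:data15_13TeV.SomeExample.DAOD --dir ./mydata
```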

Submit batch jobs

SLAC uses the LSF batch system. LSF replicates your current environment when you submit a job, including your current working directory and any Unix environment variables. The following are examples of using LSF:

Submit a job

$ cat myjob.sh
#!/bin/sh
#BSUB -W 180
pwd
echo "hello world"

$ bsub < myjob.sh
Job <96917> is submitted to default queue <medium>.

This will submit a job to LSF. The "pwd" command will print the job's working directory, which should be the same as the directory from which the job was submitted.
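Common job requirements can be expressed with additional #BSUB directives. A sketch using standard LSF options (the queue name "long" and the file names are assumptions; check the queues available at SLAC with "bqueues"):

```shell
$ cat myjob2.sh
#!/bin/sh
#BSUB -q long            # submit to a named queue instead of the default
#BSUB -W 360             # wall-clock limit in minutes
#BSUB -n 4               # number of job slots
#BSUB -o myjob2.%J.out   # write stdout to a file; %J expands to the job ID
#BSUB -e myjob2.%J.err   # write stderr to a file
pwd
echo "hello world"

$ bsub < myjob2.sh
```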

Manage jobs

Use bjobs to list your jobs, or bjobs -l <JOBID> to get detailed information about a specific job. Use bkill <JOBID> to kill a job.
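Using the job ID returned by bsub (96917 in the example above), typical job-management commands look like:

```shell
bjobs                # list your pending and running jobs
bjobs -l 96917       # detailed information about one job
bkill 96917          # kill the job
bhist -l 96917       # accounting history, also for finished jobs
```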

More info on LSF

Resource monitoring

Coming soon