
Introduction

SLAC hosts a shared analysis computing facility for US ATLAS members. The facility provides CPU, disk space, and software tools to support both Grid- and non-Grid-based physics analysis activities.

Getting started: obtaining a SLAC computer account

This information is for users who want direct access to SLAC computers, not those accessing SLAC computing resources via the Grid. The steps listed here may take several days to complete, so plan early.

Register as a SLAC User

Related page: https://atlas.slac.stanford.edu/user-registration

Please complete the SLAC User Information Form to register with the SLAC User Organization. This should be done before you apply for SLAC computer accounts. Charles C. Young will be your "SLAC sponsor who will confirm your information" on the SLAC User Information Form. If you cannot find your home institute in the pull-down list during the online registration, contact Charlie Young before registering. When you receive notification of user registration, go to http://www-public.slac.stanford.edu/phonebook/search.html, enter your name, and look up your System ID. You will need it when requesting a computer account.

Obtain a SLAC Unix computer account

Related Page: https://atlas.slac.stanford.edu/computer-account

SLAC provides several types of computer accounts. For ATLAS-related work, you need a UNIX account. A Windows account is also very useful if you will visit or stay at SLAC, or if you want to access protected SLAC web pages. A Microsoft Exchange e-mail account is the preferred way to use SLAC e-mail.

To obtain a SLAC computer account, follow the Guidelines for SLAC Computer Account Requests, then print and fill out the forms. Make sure to check the box for a UNIX account (and other accounts if needed), and put Charles C. Young as the "Computer Czar" on the SLAC Computer Account Form. You will need to contact Charlie Young and provide the forms and the name of your ATLAS group PI. Remote users can e-mail the filled-out forms as attachments to Charlie (preferred), or inform him by e-mail after faxing the forms to 650-926-2923.

Important: In the "Additional Instructions or Special Group Requirements" box in the SLAC Computer Account Request Form, please put in the following text:

Before informing user about account readiness, please create a ServiceNow ticket for unix-admin with subject "ATLAS user account XXX need additional setup" and the following in ticket body, and wait for its completion:
1. put this account in Unix secondary group "atlas" and "atlas-user"; 2. setup GPFS spaces (in this order).

Once you have obtained your initial password, you are required to change it as soon as possible.

Please set up your UNIX .forward file so that automatically generated e-mail notifications from the batch system, cron jobs, etc. are forwarded to your preferred address. Simply putting a line "your_email@domain.earth" in $HOME/.forward will work. If your preferred address is a SLAC Exchange e-mail, this line will be "your_exchange_username@exchange.slac.stanford.edu".
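The .forward setup above can be sketched in a couple of shell commands. The address is the placeholder from the text (substitute your own), and FWD_HOME is a hypothetical variable that lets you try the sketch against a scratch directory instead of your real home:

```shell
# Forward batch-system and cron notifications to your preferred address.
# FWD_HOME defaults to your real home; point it elsewhere to experiment.
FWD_HOME="${FWD_HOME:-$HOME}"
echo "your_email@domain.earth" > "$FWD_HOME/.forward"
# Mail delivery typically refuses group/world-writable .forward files.
chmod 644 "$FWD_HOME/.forward"
cat "$FWD_HOME/.forward"
```

After this, notifications sent to your Unix account are redirected to the address in the file.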

Subscribe to e-group

Please go to CERN e-group, search for atlas-us-slac-acf, and subscribe to it. We will use this e-group for announcements and for user discussion specific to the SLAC-ACF. If you do not have a CERN account to subscribe to this e-group yourself, please e-mail yangw@slac.stanford.edu to have your e-mail address added to the group.

Login to SLAC

SLAC provides a pool of login nodes with CVMFS and Grid tools. You can access them by ssh to rhel6-64.slac.stanford.edu. Assuming your Unix shell is /bin/bash, you may use the following as a template for your $HOME/.bashrc file:

# setup ATLAS environment
export RUCIO_ACCOUNT="change_me"
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh --quiet
localSetupRucioClients --quiet
...

Type "alias" command to see additional "localSetupXXXX" commands.

Remote X window access

Please refer to SLAC's FastX page for detailed instructions.

Disk space

SLAC provides new ATLAS users with two personal storage spaces:

  • Home directory /gpfs/slac/atlas/fs1/u/<user_name>. The quota is 100GB. This space is backed up to tape.
  • Data directory /gpfs/slac/atlas/fs1/d/<user_name>. The initial quota is 2TB. To request a quota increase up to 10TB, please send e-mail to unix-admin@slac.stanford.edu. Data in this area is not backed up.
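To see how much of your quota you are using, du works from any node. TARGET is a hypothetical variable for the sketch; at SLAC you would point it at the /gpfs/slac/atlas/fs1/u/<user_name> or .../d/<user_name> path from the list above (it defaults to $HOME so the sketch runs anywhere):

```shell
# Report the disk usage of a directory in kilobytes.
# At SLAC, set TARGET to your GPFS home or data directory.
TARGET="${TARGET:-$HOME}"
usage=$(du -sk "$TARGET" 2>/dev/null | awk '{print $1}')
echo "$TARGET uses ${usage} KB"
```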

Some users already have computer accounts at SLAC. Their home directories may be on AFS, and they may not have data directories on GPFS. Those users can request to have their home directories moved to GPFS and data directories created for them by sending e-mail to yangw@slac.stanford.edu.

SLAC also provides Xrootd-based storage that is managed by Rucio. The data are read-only to users. However, it is possible to use R2D2 to request that ATLAS datasets be transferred to that space.

  • From the interactive login machines, one can browse the Xrootd storage with cd /xrootd/atlas
  • From batch jobs, one cannot list directories in the Xrootd storage, but one can access files via root://atlrdr1//xrootd/atlas/...
  • Using the rucio tools, one can list ATLAS datasets that are already in the Xrootd storage system via command rucio list-datasets-rse <RSE>, where RSE can be SLACXRD_DATADISK, SLACXRD_LOCALGROUPDISK, SLACXRD_SCRATCHDISK.
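A minimal sketch of browsing the mount point from the first bullet above. XRD_TOP is a hypothetical variable for the sketch; /xrootd/atlas is only mounted on the SLAC interactive login machines, so the sketch falls back to a message anywhere else:

```shell
# Browse the top of the Xrootd namespace if it is mounted.
XRD_TOP="${XRD_TOP:-/xrootd/atlas}"
if [ -d "$XRD_TOP" ]; then
    ls "$XRD_TOP" | head -5
else
    echo "$XRD_TOP not mounted (not on a SLAC login node)"
fi
```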

Submit batch jobs

SLAC uses the LSF batch system. LSF replicates your current environment when you submit a job, including your current working directory and any Unix environment variables. The following are examples of using LSF:

  • submit a job
$ cat myjob.sh
#!/bin/sh
#BSUB -W180
pwd
echo "hello world"

$ bsub < myjob.sh
Job <96917> is submitted to default queue <medium>.

This submits a job to LSF. The "pwd" command prints the job's working directory, which should be the same as the directory from which the job was submitted. The #BSUB -W180 directive tells LSF that the job's maximum run time limit (wall-clock time) is 180 minutes; after that, the job will be killed. If #BSUB -Wnnn is not specified, your job gets the default limit of 30 minutes.

  • To check job state: use bjobs, or bjobs -l <JOBID> to get detailed info about a specific job.
  • You can use bkill <JOBID> to kill a job.
  • For details about these LSF commands and their options, please refer to the man pages of "bsub", "bjobs", and "bkill".
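In scripts it is handy to capture the numeric job ID from bsub's confirmation line so you can later run "bjobs -l $job_id" or "bkill $job_id". The sample string below is a stand-in for real bsub output (normally you would use submit_output=$(bsub < myjob.sh)), based on the "Job <NNN> is submitted..." format shown above:

```shell
# Extract the job ID from an LSF submission confirmation message.
# Sample string stands in for: submit_output=$(bsub < myjob.sh)
submit_output='Job <96917> is submitted to default queue <medium>.'
job_id=$(printf '%s\n' "$submit_output" | sed -n 's/^Job <\([0-9][0-9]*\)>.*/\1/p')
echo "submitted job id: $job_id"
```

With the job ID in hand, "bjobs -l $job_id" monitors the job and "bkill $job_id" cancels it.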

For more information regarding high performance computing clusters, including LSF related documents and best practices, please refer to the SLAC High Performance Computing page.

Resource monitoring

Coming soon
