Basics
Register as a SLAC user and obtain a SLAC computing account.
Account info can be found on the Tier2 page, along with other info on local computing at SLAC.
Among the various public machines at SLAC, for ATLAS work you should ssh to either:
rhel4-32.slac.stanford.edu (a cluster of a few nodes good for ATLAS work)
atlint01.slac.stanford.edu (an ATLAS machine)
If you have any questions please post them to the Non-Grid Jobs at SLAC Forum in the ATLAS Hypernews system,
where all local SLAC ATLAS computing issues are discussed. (If you don't have an ATLAS account, please mail me.)
How to get up and running quickly
```shell
#Do this just once:
cd
cp ~ahaas/.bashrc .
cp ~ahaas/.profile .
mkdir .hepix; cp ~ahaas/.hepix/* .hepix/
echo "none" > ATLCURRENT
mkdir reldirs
cp -r ~ahaas/cmthome .
bash
cd cmthome
source /afs/slac.stanford.edu/g/atlas/c/CMT/v1r20p20090520/mgr/setup.sh
cmt config
```
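Once that has run, a quick sketch to confirm the one-time setup left everything in place (the paths checked are the ones created by the block above; the ok/missing messages are just illustrative):

```shell
# Check that the one-time setup created the expected files and directories;
# paths are the ones used in the one-time setup block
for f in ~/.bashrc ~/.profile ~/.hepix ~/ATLCURRENT ~/reldirs ~/cmthome; do
    if [ -e "$f" ]; then
        echo "ok:      $f"
    else
        echo "missing: $f"
    fi
done
```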
The "cmthome" directory contains the all-important "requirements" file, which defines the CMT environment you're in; see it below under "Bonus material".
You may want an area with more than 500 MB of storage space (the /afs home-directory limit).
If you're in group "atlas" (check with "groups"; otherwise mail young@slac and he'll run "ypgroup adduser -group atlas -user username"):
```shell
mkdir /afs/slac.stanford.edu/g/atlas/work/<firstLetterOfUsername>/<username>
ln -s /afs/slac.stanford.edu/g/atlas/work/<firstLetterOfUsername>/<username> nfs
```
Otherwise, you have to use the /scratch areas on the machines:
```shell
mkdir /scratch/<username>
ln -s /scratch/<username> scratch
```
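To see which of the two cases above applies, here is a quick sketch of the group check mentioned earlier (the group name "atlas" comes from the text; the printed messages are just illustrative):

```shell
# Print whether the current account is in the "atlas" group;
# "groups" lists all groups the current user belongs to
if groups | tr ' ' '\n' | grep -qx atlas; then
    echo "in atlas group: use the /afs work area"
else
    echo "not in atlas group: use /scratch (or mail young@slac)"
fi
```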
More info on ATLAS disk space at SLAC is here.
Every time you log in and want to use an ATLAS release (15.2.0.1 is used in this example, but 15.5.1 is a more recent release):
```shell
bash                    #this is the supported shell for ATLAS work at SLAC
mkdir ~/reldirs/15.2.0  #if it doesn't already exist (15.2.0 is the first 3 numbers of the release)
cd ~/reldirs/15.2.0
. ~/cmthome/setup.sh -tag=15.2.0.1
```
Note: for release 16.0.0 and later you should (and may have to) use:
```shell
asetup 16.5.0.1  #(or whatever the release is...)
```
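Either way, once the setup script (or asetup) has run, the release environment should be defined. A sketch for checking it; AtlasVersion, CMTCONFIG, and TestArea are the usual variables set by the ATLAS login scripts, and the "unset" fallback is just so the commands are safe to run before setup:

```shell
# Print the release environment; "unset" means the setup script has not run yet
echo "AtlasVersion: ${AtlasVersion:-unset}"
echo "CMTCONFIG:    ${CMTCONFIG:-unset}"
echo "TestArea:     ${TestArea:-unset}"
```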
The releases available at SLAC can be seen here.
Now you can run athena, for example:
```shell
get_files -jo HelloWorldOptions.py
athena.py HelloWorldOptions.py > ~/scratch/hello.log

#Check out code for a skeleton AOD analysis:
atladdpkg PhysicsAnalysis/AnalysisCommon/UserAnalysis
#To get a particular version of the package, other than what's in the current release:
#cmt co -r UserAnalysis-00-13-17 PhysicsAnalysis/AnalysisCommon/UserAnalysis
#If no ATLAS access yet:
#cp -r ~ahaas/reldirs/15.3.1/PhysicsAnalysis .

#Build the package:
cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt; make; cd ../run
```
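After a run, it's worth scanning the log for problems. A self-contained sketch (it fabricates a tiny log so the commands can be tried anywhere; with a real job you'd point it at ~/scratch/hello.log instead):

```shell
# Fabricate a small athena-style log so the check below can be tried as-is
log=$(mktemp)
printf 'HelloWorld  INFO initialize()\nHelloWorld  INFO A message\n' > "$log"

# Look for trouble: athena log lines carry severities such as ERROR and FATAL
if grep -qiE 'ERROR|FATAL' "$log"; then
    echo "problems found:"
    grep -iE 'ERROR|FATAL' "$log"
else
    echo "no errors found"
fi
rm -f "$log"
```

Here it prints "no errors found", since the fabricated log contains only INFO lines.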
You should be able to run anything from the CERN computing workbook, software workbook, and physics workbook.
You are also ready to use the Grid easily; see instructions here.
Here are lots of handy tricks for getting things done (at SLAC) with ATLAS computing / analysis work.
Make use of the US ATLAS Analysis Support Centers, including their analysis tutorials.
Our old static page has some possibly still relevant but perhaps out of date info.
And there were many good talks at the 2009 WT2 users' forum workshop.
Bonus material
The default "requirements" file:
```
set CMTSITE STANDALONE
set SITEROOT /afs/slac/g/atlas/b/
#set DBRELEASE_OVERRIDE 7.1.1
macro ATLAS_DIST_AREA ${SITEROOT}
macro ATLAS_TEST_AREA ${HOME}/reldirs
apply_tag setup
apply_tag simpleTest
use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)
```