Basics (updated 11-May-2018)
Please refer to the items under User Information in the U.S. ATLAS Center at SLAC page.
!!! UNDER CONSTRUCTION !!!
Register as a SLAC user and obtain a SLAC computing account.
Account info can be found on the Tier2 page, along with other info on local computing at SLAC.
Among the various public machines at SLAC, for ATLAS work you should ssh to either:
rhel4-32.slac.stanford.edu (a cluster of a few nodes good for ATLAS work)
atlint01.slac.stanford.edu (an ATLAS machine)
If you have any questions please post them to the Non-Grid Jobs at SLAC Forum in the ATLAS Hypernews system,
where all local SLAC ATLAS computing issues are discussed. (If you don't have an ATLAS account, please mail me.)
...
How to get up and running quickly
...
(This looks very old. Review and delete as appropriate.)
Code Block |
---|
#Do this just once:
cd
cp ~ahaas/.bashrc .
cp ~ahaas/.profile .
mkdir .hepix; cp ~ahaas/.hepix/* .hepix/
echo "none" > ATLCURRENT
mkdir reldirs
cp -r ~ahaas/cmthome .
bash
cd cmthome
source /afs/slac.stanford.edu/g/atlas/c/CMT/v1r20p20090520/mgr/setup.sh
cmt config |
The "cmthome" directory contains the all-important "requirements" file, which defines the CMT environment you're in; see it below under "Bonus material".
You may also want an area with more than 500MB of storage space (the /afs home quota limit).
If you're in group "atlas" and/or "atlas-user" (check with "groups"), make yourself a directory in the ATLAS work area; otherwise mail young@slac and he'll run "ypgroup adduser -group atlas -user username":
Code Block |
---|
mkdir /afs/slac.stanford.edu/g/atlas/work/<firstLetterOfUsername>/<username>
ln -s /afs/slac.stanford.edu/g/atlas/work/<firstLetterOfUsername>/<username> nfs
|
Otherwise you have to use the /scratch areas on the individual machines...
Code Block |
---|
mkdir /scratch/<username>
ln -s /scratch/<username> scratch
|
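The two alternatives above can be combined into one sketch. This is only an illustration, assuming the paths and link names from the snippets above; it falls back to a directory under $HOME (a hypothetical location, not from this page) when neither root exists on the machine you're on:

```shell
#!/bin/sh
# Hedged sketch: set up a work area, preferring the AFS ATLAS area and
# falling back to local /scratch, as described above.
user=${USER:-$(id -un)}
first=$(printf '%s' "$user" | cut -c1)   # first letter of the username
afsroot=/afs/slac.stanford.edu/g/atlas/work

if [ -d "$afsroot" ]; then
    workdir=$afsroot/$first/$user
    link=$HOME/nfs
elif [ -d /scratch ]; then
    workdir=/scratch/$user
    link=$HOME/scratch
else
    # Neither root exists (e.g. off-site): stay under $HOME (illustrative only)
    workdir=$HOME/work
    link=$HOME/work-link
fi

mkdir -p "$workdir"
ln -sfn "$workdir" "$link"
echo "work area: $workdir (linked from $link)"
```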
More info on ATLAS disk space at SLAC is here.
Every time you log in and want to use an ATLAS release (15.2.0.1, for example):
Code Block |
---|
touch ~/.usecvmfs
bash  #this is the supported shell for ATLAS work at SLAC
mkdir ~/reldirs/15.2.0  #if it doesn't already exist (where 15.2.0 is the first 3 numbers of the release)
cd reldirs/15.2.0
. ~/cmthome/setup.sh -tag=15.2.0.1
setupATLAS
asetup 17.2.7.4.1,64,AtlasPhysics,here,slc5  #to set up a particular release |
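The "first 3 numbers of the release" convention in the comment above can be scripted rather than typed by hand. A minimal sketch (the release string is just an example value):

```shell
# Hedged sketch: derive the ~/reldirs subdirectory name ("first 3 numbers")
# from a full release string, as the comment above describes.
release=15.2.0.1
base=$(printf '%s' "$release" | cut -d. -f1-3)
echo "$base"          # 15.2.0
mkdir -p "$HOME/reldirs/$base"
```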
Now you can run athena, for example:
Code Block |
---|
get_files -jo HelloWorldOptions.py
athena.py HelloWorldOptions.py > ~/scratch/hello.log

#Analysis
#Check out skeleton code for a skeleton AOD analysis:
atladdpkg PhysicsAnalysis/AnalysisCommon/UserAnalysis
#To get a particular version of the package, other than what's in the current release:
#cmt co -r UserAnalysis-00-13-17 PhysicsAnalysis/AnalysisCommon/UserAnalysis
#If no ATLAS access yet:
#cp -r ~ahaas/reldirs/15.3.1/PhysicsAnalysis .

#Build the package:
cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt; make; cd ../run |
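After a run like the one above, you can grep the log for the usual Gaudi finalization message to confirm the job ended cleanly. A hedged sketch; "demo.log" is a stand-in created here for illustration, not the real ~/scratch/hello.log:

```shell
# Hedged sketch: check whether an athena job finished cleanly by looking
# for the standard Gaudi finalization line in its log file.
# Create a tiny stand-in log (in real use, point at ~/scratch/hello.log):
printf 'ApplicationMgr       INFO Application Manager Finalized successfully\n' > demo.log

if grep -q 'Application Manager Finalized successfully' demo.log; then
    echo "job OK"
else
    echo "job FAILED (or log incomplete)"
fi
```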
More (and possibly updated) details on setup with cvmfs can be found at the beginning of the Software Basics section of the ATLAS Software tutorial.
You should be able to run anything from the CERN computing workbook, software workbook, and physics workbook.
You are also ready to use the GRID easily; see instructions here.
Here are lots of handy tricks for getting things done (at SLAC) with ATLAS computing / analysis work.
Make use of the US ATLAS Analysis Support Centers, including their analysis tutorials.
Our old static page has some possibly still relevant but perhaps out of date info.
And there were many good talks at the 2009 WT2 users' forum workshop.
Bonus material
The default "requirements" file:
Code Block |
---|
set CMTSITE STANDALONE
set SITEROOT /afs/slac/g/atlas/b/
#set DBRELEASE_OVERRIDE 7.1.1
macro ATLAS_DIST_AREA ${SITEROOT}
macro ATLAS_TEST_AREA ${HOME}/reldirs
apply_tag setup
apply_tag simpleTest
use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA) |