...
By default you get a fairly small AFS quota on your home directory.
You can ask unix-admin to increase it to 500MB, or to have other volumes added to your home directory with a quota of 500MB each. SCCS has a form to make this request: use the AFS Quota Self-service Form to increase your total AFS quota up to 10GB. If you have questions, please contact unix-admin.
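To see your current quota and usage, you can run the standard OpenAFS client command `fs listquota` against your home directory. A minimal sketch (the guard is only there so the snippet does not fail on a host without the AFS client; on SLAC interactive nodes `fs` should be available):

```shell
# Sketch: check current AFS quota and usage with the standard OpenAFS
# client tool `fs`. Run on a SLAC machine with AFS mounted; the guard
# keeps the snippet harmless where AFS is absent.
if command -v fs >/dev/null 2>&1; then
    fs listquota "$HOME"
else
    echo "AFS client not available on this host"
fi
```

The output shows the volume name, quota, usage, and percentage used for the volume holding your home directory.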
NFS filesystems
Dedicated ATLAS scratch NFS space is available. NFS space is well suited to log files, batch job input and output, and scratch use. These NFS spaces are not backed up, so code should remain in your AFS space.
You should create a directory for yourself with the format /afs/slac.stanford.edu/g/atlas/work/<firstLetterOfUsername>/<username>.
(Although the path begins with /afs, this is actually NFS disk space and is not backed up, so again, code should remain in your AFS space.)
For the moment this space is not automatically cleaned up; if it fills up we'll need to start doing that. Currently two 125GB volumes make up this space:
/nfs/slac/g/atlas/u01
/nfs/slac/g/atlas/u02
This space should be used for log files and other small files that don't need high-performance access.
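As a sketch (assuming a bash shell; the path layout is the one described above), deriving and creating your work directory looks like:

```shell
# Sketch: derive your per-user work directory path (bash; layout as above).
# The ${U:0:1} expansion takes the first letter of the username.
U=${USER:-yourusername}
WORKDIR="/afs/slac.stanford.edu/g/atlas/work/${U:0:1}/${U}"
echo "$WORKDIR"
# then create it once with: mkdir -p "$WORKDIR"
```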
Xrootd spaces
There are two xrootd spaces: the Tier 2 space and the Tier 3 space. These spaces are primarily for storing ROOT files of any type (POOL, ntuples, whatever); xrootd offers high-performance access to these types of files.
The Tier 2 space is for Tier 2 only so users should not write to it. However, SLAC ATLAS users are encouraged to use R2D2 to transfer official ATLAS datasets to the Tier 2 spaces (USERDISK, GROUPDISK, LOCALGROUPDISK, SCRATCHDISK). Once they are at SLAC Tier 2 (called Western Tier 2, or WT2) storage, they are accessible (readonly of course) from all SLAC interactive nodes and batch nodes.
On atlint01,02,03,04 these spaces are mounted at /xrootd/atlas. In your batch jobs you can access them via the xrootd protocol (explained below); the access URL is root://atl-xrdr//atlas/xrootd/...
Access to this area is either directly from ROOT or by copying files in or out via xrdcp.
The recommended pattern is direct access for reading, and writing via xrdcp
(write the output to a local disk on the machine your job is on, such as /scratch on the batch workers, then copy it in).
Files in xrootd are referred to by a URL-type format. You can add them to a job's input list and Athena knows how to use them. An example would be:
```
filelist = []
filelist += ["root://atl-xrdr//atlas/xrootd/atlasuserdisk/data09_cos/DPD_CALOCOMM/r733_p37/data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37_tid073973/DPD_CALOCOMM.073973._000046.pool.root.1"]
filelist += ["root://atl-xrdr//atlas/xrootd/atlasuserdisk/data09_cos/DPD_CALOCOMM/r733_p37/data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37_tid073973/DPD_CALOCOMM.073973._000016.pool.root.1"]
...
athenaCommonFlags.PoolESDInput = filelist
```
To access an xrootd file in ROOT directly, do:
```
root [0] TXNetFile f("root://atl-xrdr//atlas/xrootd/usr/g/gowdy/myFile.root");
```
To copy files in and out of xrootd:
```
xrdcp myFile.root root://atl-xrdr//atlas/xrootd/usr/g/gowdy/
xrdcp root://atl-xrdr//atlas/xrootd/usr/g/gowdy/myFile.root .
```
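Putting the recommended write pattern together, a batch job's final step might look like the sketch below (bash; the username and file names are placeholders, and `echo` is left in so the snippet only prints the command rather than performing a real copy):

```shell
# Sketch: write job output to local /scratch, then copy it into your own
# xrootd user area with xrdcp. Username and file names are illustrative.
U=${USER:-gowdy}
OUT="/scratch/${U}/myFile.root"
DEST="root://atl-xrdr//atlas/xrootd/usr/${U:0:1}/${U}/"
echo xrdcp "$OUT" "$DEST"   # remove 'echo' to actually perform the copy
```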
Filesystem-type access is currently not transparent, due to the distributed nature of the xrootd system. However, the xrootd filesystem is mounted on atlint01. You should make yourself a directory called /xrootd/atlas/usr/<firstLetterOfUsername>/<username>/:
```
ssh atlint01.slac.stanford.edu
df -h
xrootdfs  9.8T  972G  8.9T  10%  /xrootd/atlas/usr
xrootdfs  9.8T  972G  8.9T  10%  /xrootd/atlas/dq2
xrootdfs  9.8T  972G  8.9T  10%  /xrootd/atlas/atlasdatadisk
xrootdfs  9.8T  972G  8.9T  10%  /xrootd/atlas/atlasmcdisk
xrootdfs  9.8T  972G  8.9T  10%  /xrootd/atlas/atlasuserdisk
xrootdfs  9.8T  972G  8.9T  10%  /xrootd/atlas/atlasgroupdisk
xrootdfs  9.8T  972G  8.9T  10%  /xrootd/atlas/atlashotdisk
cat /xrootd/atlas/usr/a/ahaas/test.txt
```