
AFS Home Directory

By default you get a fairly small AFS quota on your home directory.

You can use the AFS Quota Self-service Form to increase your total AFS quota up to 10GB. If you have questions, please contact unix-admin.
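To see your current quota and usage before requesting an increase, you can use the standard AFS client command (it reports quota and usage in kilobyte units):

$ fs listquota ~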

NFS filesystems

Dedicated ATLAS NFS space is available. NFS space is good for logs, batch job input and output, and scratch files. These NFS spaces are not backed up, so code should remain on AFS space.

  • /afs/slac.stanford.edu/g/atlas/work/<firstLetterOfUsername>/<username> (despite the /afs path, this is actually NFS disk space)
  • /nfs/slac/g/atlas/u01
  • /nfs/slac/g/atlas/u02

These spaces are not automatically cleaned up.
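For example, to check the remaining space and set up a personal area on one of these volumes (the per-user subdirectory is only a suggested convention, not a site requirement):

df -h /nfs/slac/g/atlas/u01                # check free space before writing large outputs
mkdir -p /nfs/slac/g/atlas/u01/$USER       # hypothetical per-user directory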

Xrootd spaces

There are two xrootd spaces - the Tier 2 space and the Tier 3 space. These spaces are primarily for storing ROOT files of any type (POOL files, ntuples, etc.). xrootd offers high-performance access to these types of files.

Tier 2 space

The Tier 2 space is for Tier 2 use only, so users should not write to it directly. However, SLAC ATLAS users are encouraged to use R2D2 to transfer official ATLAS datasets to the WT2 (the SLAC Tier 2) spaces (USERDISK, GROUPDISK, LOCALGROUPDISK, SCRATCHDISK). Once they are in the Tier 2 storage, they are accessible (read-only, of course) from all SLAC interactive nodes and batch nodes.

  • On atlint0[1-4] these spaces are mounted at /xrootd/atlas/
  • In your batch jobs, you can access them via the xrootd protocol (explained below). Files in xrootd have a URL-type format. The access URL is root://atl-xrdr//atlas/xrootd/... (change /xrootd/atlas/XXX above to /atlas/xrootd/XXX)
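For example, using the dataset path that appears in the Athena example further down (whether that particular file is still present is not guaranteed), the two views of the same Tier 2 file are:

# Filesystem view on atlint0[1-4]:
ls -l /xrootd/atlas/atlasuserdisk/data09_cos/DPD_CALOCOMM/r733_p37/data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37_tid073973/DPD_CALOCOMM.073973._000046.pool.root.1

# The same file via the xrootd protocol (also usable from batch nodes):
xrdcp root://atl-xrdr//atlas/xrootd/atlasuserdisk/data09_cos/DPD_CALOCOMM/r733_p37/data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37_tid073973/DPD_CALOCOMM.073973._000046.pool.root.1 /scratch/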

Tier 3 space

The SLAC ATLAS group owns a PROOF cluster. It is actually a combined PROOF cluster, batch cluster, and xrootd storage cluster. The xrootd storage cluster (the Tier 3 space) is mounted at /atlas on atlint0[1-4].

  • You can use R2D2 to transfer official ATLAS datasets to the space under directory /atlas/dq2. The Tier 3's name in R2D2 is SLAC-ATLAS-T3_GRIDFTP.
  • Files under /atlas/proof and /atlas/output are used by the proof jobs.
  • Files under /atlas/local are for users to read and write.
  • In your batch jobs, you can access them via the xrootd protocol (explained below). The access URL is root://atlprf01:11094//atlas/...
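For example, a minimal round trip through the user-writable area (the file name and the per-user subdirectory under /atlas/local are hypothetical):

xrdcp myFile.root root://atlprf01:11094//atlas/local/$USER/     # write via the xrootd protocol
ls -l /atlas/local/$USER/myFile.root                            # the same file, seen through the mount on atlint0[1-4]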

Access the xrootd spaces

To read input data in batch jobs, we recommend either reading the ROOT files directly via the xrootd protocol, or copying the files (ROOT and non-ROOT) to the batch node's /scratch space with xrdcp.
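For example, the input-staging step at the start of a batch job might look like this (the /scratch layout and the input path placeholder are only illustrative):

WORKDIR=/scratch/$USER/job_$$        # hypothetical per-job directory on the batch node's local disk
mkdir -p $WORKDIR
cd $WORKDIR
xrdcp root://atl-xrdr//atlas/xrootd/<path-to-your-input-file> .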

For output files, write the output to the batch node's /scratch space first. At the end of your batch job, use "cp" to copy it back to the NFS space or "xrdcp" to copy it to the Tier 3 xrootd space.
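And the corresponding copy-back step at the end of the job, reusing $WORKDIR from the sketch above (the output file name and destination directories are hypothetical):

cp $WORKDIR/myOutput.root /nfs/slac/g/atlas/u01/$USER/                     # back to NFS space
xrdcp $WORKDIR/myOutput.root root://atlprf01:11094//atlas/local/$USER/     # or to the Tier 3 xrootd space
rm -rf $WORKDIR                                                            # clean up the batch node's /scratch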

To read xrootd files in Athena using the xrootd protocol, list the URLs in your jobOptions:

from AthenaCommon.AthenaCommonFlags import athenaCommonFlags

filelist = []
filelist += ["root://atl-xrdr//atlas/xrootd/atlasuserdisk/data09_cos/DPD_CALOCOMM/r733_p37/data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37_tid073973/DPD_CALOCOMM.073973._000046.pool.root.1"]
filelist += ["root://atl-xrdr//atlas/xrootd/atlasuserdisk/data09_cos/DPD_CALOCOMM/r733_p37/data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37_tid073973/DPD_CALOCOMM.073973._000016.pool.root.1"]
...
athenaCommonFlags.PoolESDInput=filelist

To read an xrootd file in ROOT:

root [0] TXNetFile f("root://atl-xrdr//atlas/xrootd/usr/g/gowdy/myFile.root");

To copy files in and out of xrootd:

xrdcp myFile.root root://atl-xrdr//atlas/xrootd/usr/g/gowdy/

xrdcp root://atlprf01:11094//atlas/local/myFile.root .

Filesystem-type access is only available on atlint0[1-4]:

$ ssh atlint01.slac.stanford.edu
$ df -h /atlas /xrootd/atlas
Filesystem      Size  Used Avail Use% Mounted on
xrootdfs        741T  682T   60T  93% /atlas
xrootdfs        2.8P  1.9P  893T  69% /xrootd/atlas
$ cat /atlas/dq2/site-size

$ rucio download ...

atlint03 has a relatively old version of xrootd. This version of xrootd has a slow memory leak when used via filesystem-type access, so avoid using atlint03 for large numbers of filesystem-type accesses.
