Info
titleDisk space
  • Your home directory is on Weka (/sdf/home/<first letter of your userid>/<your userid>) with 30 GB of space. This space is backed up and is where code, etc., should go.
  • We have group space at /sdf/group/fermi/:
    • some directories are under /sdf/data/fermi/, but we provide links into the group directory tree for easier access
    • it holds shared software, including conda environments for the Fermitools and containers for running RHEL6 executables
    • Fermi-supplied user space (i.e., in addition to your home directory):
      • You can find it in /sdf/group/fermi/u/<you>. There is a symlink to it, called "fermi-user", in your home directory for convenience.
      • after GPFS is retired in late 2023, this is where your larger user space will be.
    • group space in /sdf/group/fermi/g/ - a one-time copy has been made of all the GPFS g/ directories under /nfs/farm/g/glast/g/.
    • all of the GLAST AFS space has been copied to /sdf/group/fermi/a/
    • the NFS u<xx> partitions were copied to /sdf/group/fermi/n/ (including u52, which contains the GlastRelease RHEL6 builds)
  • Your user/group space on the old clusters is not directly accessible from S3DF; it currently needs to be copied over (see the transfer example after this list). This access policy may be reversed soon.
    • We are still providing additional user space from the old cluster, available on request via the slac-helplist mailing list. It is not backed up. This space is natively GPFS. User directories are available under /gpfs/slac/fermi/fs2/u/<your_dir>.
    • During the transition, read-only mounts of AFS and GPFS are available on the interactive nodes (not batch!).
      • AFS is available at the usual AFS path, e.g. your home directory (/afs/slac/u/ ...); you may need to run "aklog" to get an AFS token.
      • GPFS is mounted at /fs/gpfs/slac/fermi/fs2/u/ ...
  • Scratch space:
    • /sdf/scratch/<username_initial>/<username>: quota of 100 GB per user. This space is visible on all interactive and batch nodes. Old data will be purged when overall space is needed, even if your usage is under the quota.
    • /lscratch: local space on each batch node, shared by all users. You are encouraged to create your own subdirectory when running your job and to clean up your space (back to zero) at the end of the job; debris left behind by jobs will be purged periodically (see the batch-script sketch after this list). The size of /lscratch is subject to change; please refer to the table in Slurm partition for info about the size on each node type.
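
As an illustration of copying data over during the transition, here is a minimal shell sketch. Run it on an interactive node (the read-only mounts are not available on batch nodes); the source path components and the "my_analysis" directory name are placeholders, not real paths.

    # Get a Kerberos ticket and an AFS token (skip kinit if you already have a ticket).
    kinit
    aklog

    # Copy a directory from your old AFS area into the Fermi-supplied user space.
    # Replace the placeholder path components with your own.
    rsync -av /afs/slac/u/<...>/<your userid>/my_analysis/ ~/fermi-user/my_analysis/

    # Same idea for the read-only GPFS mount.
    rsync -av /fs/gpfs/slac/fermi/fs2/u/<your_dir>/my_analysis/ ~/fermi-user/my_analysis/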
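
For /lscratch, a minimal Slurm batch-script sketch of the recommended pattern (partition and resource directives are omitted; add your own from the Slurm partition table):

    #!/bin/bash
    #SBATCH --output=myjob-%j.log

    # Job-specific subdirectory in the node-local /lscratch space (shared by all users).
    WORKDIR=/lscratch/$USER/$SLURM_JOB_ID
    mkdir -p "$WORKDIR"

    # Clean the space back to zero when the job exits, even on failure.
    trap 'rm -rf "$WORKDIR"' EXIT

    cd "$WORKDIR"

    # ... run your workload here, then copy anything you want to keep to
    #     longer-lived space (e.g. ~/fermi-user) before the script exits ...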
Info
titleHandy urls

...