...

Info: passwordless ssh to fermi-devl

You can allow direct passwordless access from your device to fermi-devl by adding this to the ~/.ssh/config file on your end:

Host slac*
        User <you>

Host slacl
        Hostname s3dflogin.slac.stanford.edu

Host slacd
        Hostname fermi-devl
        ProxyJump slacl

and then add your public key (e.g., ~/.ssh/id_rsa.pub) from your device to ~/.ssh/authorized_keys at SLAC, using:

ssh-copy-id <you>@s3dflogin.slac.stanford.edu
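
With both pieces in place, a single command from your device should land you directly on fermi-devl (assuming your username matches the User entry above):

ssh slacd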


For those using the CVS server on centaurusa from outside SLAC, you also have to add a ProxyJump for centaurusa. Since the CVS server name is written into the CVS/Root files of every CVS package, use the following entry so that name keeps working through the jump host:

Host centaurusa.slac.stanford.edu
        Hostname centaurusa
        ProxyJump slacl
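
As a quick check (a hedged example, not part of the package setup), you can verify that the alias is tunneled through the login node before running any CVS commands:

ssh centaurusa.slac.stanford.edu hostname
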
Info: Shared accounts

To access shared accounts (e.g., glast, glastraw), you will likely need to get a Kerberos ticket by running kinit first.  Then, you can ssh or ksu to the account.  You must also be listed in the .k5login file in the home directory for the account.  Anyone who already has access to the account can add you to this file.
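
A minimal sketch of the sequence, assuming the shared account is glast and that the Kerberos realm is SLAC.STANFORD.EDU (check locally):

kinit <you>@SLAC.STANFORD.EDU    # get a Kerberos ticket
ksu glast                        # become the shared account on the current host
# or log in to the shared account directly on another host:
ssh glast@s3dflogin.slac.stanford.edu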

Info: .bash_profile

.bashrc:

  • a directory called .profile.d gets added to your home dir. It points back to the group equivalent in /sdf/group/fermi/sw/, and the contents of those .conf files are sourced into your shell session. Group-level settings go there, e.g., $LATCalibRoot.
  • don't overwrite your .bash_profile or you'll lose the code that does this:

# SLAC S3DF - source all files under ~/.profile.d
if [[ -e ~/.profile.d && -n "$(ls -A ~/.profile.d/)" ]]; then
 source <(cat $(find -L ~/.profile.d -name '*.conf'))
fi
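
As a hypothetical illustration of what gets sourced (the file name and value are placeholders, not the actual group configuration):

# ~/.profile.d/fermi-group.conf -> points back to /sdf/group/fermi/sw/...
export LATCalibRoot=/sdf/group/fermi/a/calibrations    # placeholder path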

Info: Disk space
  • Your home directory is in weka (/sdf/home/<first letter of your userid>/<your userid>) with 30 GB of space. This space is backed up and is where code, etc., should go. 
  • We have group space at /sdf/group/fermi/:
    • some directories are under /sdf/data/fermi/, but we provide links into the group directory tree for easier access
    • includes shared software, including conda envs for Fermitools and containers for running rhel6 executables
    • Fermi-supplied user space (i.e., in addition to your home directory):
      • You can find it in /sdf/group/fermi/u/<you>. There is a symlink to it, called "fermi-user", in your home directory for convenience.
      • after gpfs is retired in late 2023, this is where your larger user space will be.
    • group space in /sdf/group/fermi/g/ - a one-time copy of all the gpfs g/ directories under /nfs/farm/g/glast/g/ has been made.
    • all of glast afs has been copied to /sdf/group/fermi/a/
    • Fermi web space (/afs/slac/www/exp/glast) has been copied to /sdf/data/fermi/afs-www/
    • the nfs u<xx> partitions were copied to /sdf/group/fermi/n/ (including u52 which contains the GlastRelease rhel6 builds)
  • your user/group space on the old clusters is not directly accessible from s3df 
    • We're still providing additional user space from the old cluster, available on request via the slac-helplist mailing list. It is not backed up. This space is natively gpfs.  User directories are available under: /gpfs/slac/fermi/fs2/u/<your_dir>.
      • a read-only copy of all user directories on /nfs/farm/g/glast/u was made in mid-January 2024 and can be found at /sdf/data/fermi/gpfs-u/
    • During the transition, read-only mounts of afs and gpfs are available on the interactive nodes (not batch!).
      • afs is just the normal afs path, e.g., to your home directory (/afs/slac/u/ ...) - you may need to issue "aklog" to get an afs token.
      • gpfs is /fs/gpfs/slac/fermi/fs2/u/ ...
  • Scratch space:
    • /sdf/scratch/<username_initial>/<username>: quota of 100 GB per user. The space is visible on all interactive and batch nodes. Old data will be purged when overall space is needed, even if your usage is under the quota.
    • /lscratch: on each batch node, this is local space shared by all users. You are encouraged to create your own sub-directory when running your job and to clean up your space (to zero) at the end of the job; a sketch of this pattern follows this list. Debris left behind by jobs will be purged periodically. The size of /lscratch is subject to change; refer to the table on the Slurm partition page for information about its size.
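
A minimal batch-script sketch of that pattern (the job options and workload are placeholders; $USER and $SLURM_JOB_ID come from the environment):

#!/bin/bash
#SBATCH --job-name=lscratch-example
# Hypothetical example: keep temporary I/O in a private /lscratch sub-directory
WORKDIR=/lscratch/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
trap 'rm -rf "$WORKDIR"' EXIT    # clean the space back to zero, even if the job fails
cd "$WORKDIR"
# ... run your executable here, writing temporary files into $WORKDIR ...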

...