
Info
title: passwordless ssh to iana

You can modify your SSH configuration to allow direct passwordless access from your device to iana by adding the following to the ~/.ssh/config file on your end:

# settings common to all SLAC host aliases below
Host slac*
        User <you>

# S3DF login node
Host slacl
        Hostname s3dflogin.slac.stanford.edu

# iana, reached by jumping through the login node
Host slacd
        Hostname iana
        ProxyJump slacl

and then add your public key (e.g., ~/.ssh/id_rsa.pub) from your device to ~/.ssh/authorized_keys at SLAC, using:

ssh-copy-id <you>@s3dflogin.slac.stanford.edu
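
With that in place, the slacd alias gets you to iana in one step; for example (the file name below is just a placeholder):

ssh slacd                 # lands on iana, tunneling through s3dflogin via ProxyJump
scp myfile.txt slacd:     # file copies follow the same jump path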



Info
title: .bash_profile

.bashrc:

  • a directory called ~/.profile.d gets added to your home dir. It points back to the group equivalent in /sdf/group/fermi/sw/, and the contents of those conf files are pulled into your session's bashrc. Group-level settings go there, e.g., $LATCalibRoot.
  • don't overwrite your .bash_profile or you'll lose the code that does this:

...

Info
title: .bash_profile snippet

# SLAC S3DF - source all files under ~/.profile.d
if [[ -e ~/.profile.d && -n "$(ls -A ~/.profile.d/)" ]]; then
    source <(cat $(find -L ~/.profile.d -name '*.conf'))
fi
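
If you want settings of your own sourced the same way, you can add a .conf file under ~/.profile.d, assuming that directory is writable in your home area; a minimal sketch (the file name and variable here are hypothetical):

# hypothetical personal conf file, picked up by the snippet above at login
cat > ~/.profile.d/my-settings.conf <<'EOF'
export MY_ANALYSIS_DIR=/sdf/group/fermi/u/$USER/analysis
EOF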

...

Info
title: Software and Containers

Fermitools and other analysis software (e.g., 3ML) are available via a shared Conda installation, so you don't need to install Conda yourself; see Fermitools/Conda Shared Installation at SLAC. If you do want your own Conda, don't install it in your home directory, due to quota limits; put it in your Fermi-supplied user space instead. Follow the S3DF documentation instructions for installing Conda with a prefix path, but point the prefix at your personal space, e.g., /sdf/group/fermi/u/$USER/miniconda3, rather than the path in their example, so that the installation and any environments you create end up there.
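
A minimal sketch of such an install, assuming the standard Miniconda installer and the user path above (adjust to whatever the S3DF documentation currently recommends):

# download the Miniconda installer and install under your Fermi user area (-p sets the prefix)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p /sdf/group/fermi/u/$USER/miniconda3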

You can also run a RHEL6 Singularity container (for apps that are not portable to RHEL/CentOS 7). See Using RHEL6 Singularity Container.
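
For instance, something along these lines (the image path and application are placeholders; see the linked page for the actual location and recommended invocation):

# run a legacy application inside the RHEL6 image (placeholder path)
singularity exec /path/to/rhel6-image.sif my_legacy_app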

Info
title: Slurm Batch Usage

For generic advice on running in batch, see Running on SLAC Central Linux. Note that the batch system itself has since changed and that page has not been updated to reflect it, but it is still useful for its advice on copying data to local scratch, etc.

  • LSB_JOBID -> SLURM_JOB_ID (the LSF job-ID variable is replaced by its Slurm equivalent)
  • scratch space during job execution (a minimal batch-script sketch follows this list):
    • at job start, a directory is automatically created in the worker node's local scratch: ${LSCRATCH} = /lscratch/${USER}/slurm_job_id_${SLURM_JOB_ID}
    • once all of a user's jobs on a node have completed/exited, their corresponding LSCRATCH directory on that host is deleted.
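
The sketch below illustrates staging data into ${LSCRATCH} and copying results back before the job ends; the paths, job name, and application are hypothetical:

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --output=myjob_%j.log
#SBATCH --time=01:00:00

# stage input to the per-job scratch directory created for you at job start
cp /sdf/group/fermi/u/$USER/data/input.dat "$LSCRATCH/"
cd "$LSCRATCH"

my_analysis input.dat > results.out   # placeholder for your actual application

# copy results back before exiting; $LSCRATCH is removed once your jobs on this node finish
cp results.out /sdf/group/fermi/u/$USER/data/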

...