Description in progress as of 2023-11-16.

This page gives a very brief introduction to SLAC's new S3DF (SLAC Shared Scientific Data Facility) cluster to help you get started. We assume you already have a Unix account created and your main intent is to run Fermitools/Fermipy.

See the main S3DF documentation (s3df.slac.stanford.edu) for detailed information about how to log in, use the SLURM batch system, and so on.

Basically, ssh to s3dflogin.slac.stanford.edu and then from there ssh to iana (no .slac.stanford.edu) to do actual interactive work. The login nodes are not meant for doing analysis or accessing data. Of course, really computationally intensive tasks belong on the batch system, not the interactive nodes.
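For example, a minimal login sequence is two separate hops:

    # First hop: the login node (fine for editing, not for analysis or data access)
    ssh <your userid>@s3dflogin.slac.stanford.edu
    # Second hop, run from the login node (note: no .slac.stanford.edu suffix)
    ssh iana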

Disk space:

  • Your home directory is in Weka (/sdf/home/<first letter of your userid>/<your userid>) with 30 GB of space. This space is backed up and is where code, etc., should go. This is also true for Conda environments (see the sketch after this list).
  • We have group space at /sdf/group/fermi/, which will include shared software, as well as Fermi-supplied user (i.e., on top of your home directory) and group space.
  • We're still providing additional user space from the old cluster, available on request via the slac-helplist mailing list. It is not backed up. This space is natively GPFS. Once enabled, it will be available under /gpfs/slac/fermi/fs2/u/<your_dir>.
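Since Conda environments count against that 30 GB of backed-up home space, creating one there is straightforward. As a minimal sketch (the environment name and Python version below are just examples):

    # Environments land under your home directory (e.g., ~/.conda/envs) by default
    conda create -n fermi-env python=3.9
    conda activate fermi-env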

Access to Fermitools installs is available (note that this also provides a Conda installation so you don't need to install Conda yourself). See Fermitools/Conda Shared Installation at SLAC.
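As an illustration only, activating a shared installation usually amounts to sourcing its Conda setup script and activating the environment; the path and environment name below are hypothetical, so take the real ones from the page above:

    # Hypothetical path: see the shared-installation page for the real one
    source /sdf/group/fermi/sw/conda/etc/profile.d/conda.sh
    conda activate fermitools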

You can also run in a RHEL6 Singularity container (for apps that are not portable to RHEL/CentOS 7). See Using RHEL6 Singularity Container.
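A generic invocation looks something like this; the image path is hypothetical, so take the real one from the container page:

    # Open a shell inside a RHEL6 image (path is hypothetical)
    singularity shell /sdf/group/fermi/containers/rhel6.sif
    # Or run a single non-portable app inside the container
    singularity exec /sdf/group/fermi/containers/rhel6.sif ./my_rhel6_app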

For generic advice on running in batch, see Running on SLAC Central Linux. Note that the actual batch system has changed and we have not updated that page to reflect it, but its advice on copying data to local scratch, etc., still applies.
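As a minimal sketch of a SLURM job that follows that advice (the partition name, scratch variable, and file names below are assumptions to adapt to your own setup):

    #!/bin/bash
    #SBATCH --partition=milano        # assumption: pick a partition you have access to
    #SBATCH --job-name=fermi-job
    #SBATCH --output=fermi-job-%j.log
    #SBATCH --time=01:00:00
    #SBATCH --mem=4G

    # Stage inputs to node-local scratch, run there, then copy results back
    cp my_input.fits "$LSCRATCH"/     # assumption: $LSCRATCH points at local scratch
    cd "$LSCRATCH"
    python my_analysis.py my_input.fits
    cp results.fits "$SLURM_SUBMIT_DIR"/

Submit with sbatch myjob.sh and monitor with squeue -u <your userid>.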