Here is a very brief introduction to SLAC's new S3DF cluster.  Description in progress as of 2023-11-16.

This assumes you already have a Unix account and that your main intent is to run the Science Tools.


See s3df.slac.stanford.edu for documentation on logging in, the Slurm batch system, and so on.
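A minimal Slurm batch script looks roughly like the sketch below; the partition and account names are assumptions, so take the real values from the S3DF documentation:

    #!/bin/bash
    #SBATCH --partition=milano     # assumed partition name; check the S3DF docs
    #SBATCH --account=fermi        # assumed account name; check with your group
    #SBATCH --time=01:00:00        # wall-clock limit
    #SBATCH --mem=4G               # memory request
    #SBATCH --output=job-%j.out    # %j expands to the Slurm job ID

    echo "Running on $(hostname)"

Submit with "sbatch job.sh" and monitor with "squeue -u $USER".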

Basically: ssh to s3dflogin.slac.stanford.edu, then from there ssh to iana (no .slac.stanford.edu suffix) to do actual interactive work. For example:
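    ssh <user>@s3dflogin.slac.stanford.edu   # bastion/login node; no heavy work here
    ssh iana                                 # interactive node, reached from the login node

Here <user> stands in for your SLAC username. With OpenSSH 7.3 or later you can make this a single hop by adding a ProxyJump entry to ~/.ssh/config on your own machine (a standard OpenSSH feature, not S3DF-specific):

    Host iana
        User <user>
        ProxyJump <user>@s3dflogin.slac.stanford.edu

After that, a plain "ssh iana" from your laptop lands you on the interactive node.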

Disk space:

  • your home directory is in weka (/sdf/home/<first letter of your username>/<username>) with 30 GB of space. This space is backed up and is where code, conda environments, and other important files should go (see the example after this list).
  • We have group space at /sdf/group/fermi/. This will include shared software, as well as Fermi-supplied user space (i.e., on top of your home directory) and group space.
  • We're still providing additional user space from the old cluster, available on request via the slac-helplist mailing list. It is not backed up. This space is natively GPFS:
    • gpfs: /gpfs/slac/fermi/fs2/u/<your_dir>
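To check where these areas live and how much you are using, standard commands suffice; jdoe below is a hypothetical username:

    du -sh /sdf/home/j/jdoe                 # home on weka: 30 GB quota, backed up
    ls /sdf/group/fermi/                    # shared Fermi group space
    du -sh /gpfs/slac/fermi/fs2/u/jdoe      # legacy GPFS user space, not backed up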

Access to Science Tools installs (note that this also provides a conda installation, so you don't need to install conda yourself):

  • link
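The usage pattern for a shared conda install is roughly the sketch below; the install path and environment name here are assumptions, so take the actual values from the linked page:

    # source the shared conda setup (path is an assumption; use the one
    # given in the linked instructions)
    source /sdf/group/fermi/sw/conda/etc/profile.d/conda.sh

    # list the environments provided, then activate the Science Tools one
    conda env list
    conda activate fermitools    # environment name assumed; check 'conda env list'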

Running in a RHEL6 Singularity container (for apps that have not been ported to RHEL/CentOS 7)
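A typical invocation looks roughly like this sketch; the image path is a placeholder, so take the real one from the links below:

    # open an interactive shell inside the RHEL6 container
    singularity shell /path/to/rhel6.sif

    # or run a single legacy application inside it
    singularity exec /path/to/rhel6.sif my_legacy_app arg1 arg2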

Links:

  • Running on SLAC Central Linux (note: this is generic advice on running in batch; the actual batch system has changed and we have not updated the doc to reflect that. It is still useful for its advice on copying data to local scratch, etc.)
