As of this writing, access to S3DF is still by invitation. You can self-register following instructions here.
This page gives a very brief introduction to SLAC's new S3DF (SLAC Shared Scientific Data Facility) cluster to help you get started. We assume you already have a Unix account and that your main goal is to run the Fermitools/Fermipy. During the transition, issues are discussed in the #s3df-migration slack channel. You can also join the #help-sdf channel if you wish to see SLAC-wide discussion of S3DF issues.
See the main S3DF documentation for detailed information about how to log in, use the SLURM batch system, and so on. When submitting jobs, specify --account fermi:users.
Basically, ssh to s3dflogin.slac.stanford.edu and from there ssh to fermi-devl (no .slac.stanford.edu; it is a load balancer, although there is only one node so far) to do your interactive work. The login nodes are not meant for analysis or for accessing data, and truly computationally intensive tasks belong on the batch system rather than the interactive nodes. Send email to s3df-help at slac.stanford.edu for issues.
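For example (substituting your own user name):

    ssh <you>@s3dflogin.slac.stanford.edu
    # then, from the login node:
    ssh fermi-devl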
You can modify your ssh configuration to allow direct passwordless access from your device to fermi-devl by adding this to the .ssh/config file on your end:

    Host slac*
        User <you>
    Host slacl
        Hostname s3dflogin.slac.stanford.edu
    Host slacd
        Hostname fermi-devl
        ProxyJump slacl

and then copying your public key to the login node, e.g.:

    ssh-copy-id <you>@s3dflogin.slac.stanford.edu
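With this in place, a single command should take you straight to fermi-devl via the login node:

    ssh slacd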
.bashrc:

    # SLAC S3DF - source all files under ~/.profile.d
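The sourcing block itself is not reproduced above; a minimal sketch of what it typically looks like, assuming the S3DF setup files are bash snippets placed in ~/.profile.d, is:

    # SLAC S3DF - source all files under ~/.profile.d
    if [ -d "$HOME/.profile.d" ]; then
        for f in "$HOME"/.profile.d/*; do
            [ -r "$f" ] && . "$f"
        done
    fi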
If you were working on SDF, note that S3DF is completely separate (aside from the account name). Even though path names might look similar, they are on different file systems. You can still access all your SDF files by prepending "/fs/ddn/" to the paths you were used to.
Fermitools and other analysis software (e.g., 3ML) are available via a shared Conda installation, so you don't need to install Conda yourself. See Fermitools/Conda Shared Installation at SLAC. If you do want your own Conda, you shouldn't install it in your home directory because of quota limits; put it in your Fermi-supplied user space instead. Follow the S3DF documentation instructions to install Conda and set a prefix path so that the installation and any environments you create land outside your home directory, but use a prefix in your personal space, e.g., /sdf/group/fermi/u/$USER/miniconda3, instead of the group path in their example. You can also run a RHEL6 Singularity container (for apps that are not portable to RHEL/CentOS 7). See Using RHEL6 Singularity Container.
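A minimal sketch of such an installation, using the standard Miniconda installer and the personal-space prefix suggested above (adapt as needed if the S3DF documentation prescribes a different installer):

    # download the Miniconda installer and install non-interactively (-b) with a prefix (-p)
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh -b -p /sdf/group/fermi/u/$USER/miniconda3
    # make conda available in the current shell
    source /sdf/group/fermi/u/$USER/miniconda3/etc/profile.d/conda.sh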
For generic advice on running in batch, see Running on SLAC Central Linux. Note that the batch system itself has since changed and that document has not been updated to reflect it, but it remains useful for its advice on copying data to local scratch, etc.
You need to specify an account and "repo" on your SLURM submissions. The repos allow subdivision of our allocation among different uses; there are 4 repos available under the fermi account. The format is --account fermi:<repo>. For general use, specify --account fermi:users (as above); the L1 and other-pipelines repos are restricted to known pipelines. Non-default repos have a quality of service (qos) defaulting to normal (non-preemptible). At the time of writing there is no accounting yet; when that is enabled, we will have to decide how to split our allocation among the various repos.
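A minimal batch-script sketch using the default users repo (the job name and resource requests are placeholders; no partition is specified, so the cluster default applies):

    #!/bin/bash
    #SBATCH --account fermi:users
    #SBATCH --job-name fermi-test
    #SBATCH --time 00:10:00
    #SBATCH --mem 2G
    #SBATCH --output %x-%j.log

    # environment setup (e.g., activating a conda environment) would go here
    echo "Running on $(hostname)"

Submit it with sbatch <script>.sh and check its status with squeue -u $USER.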
You can run cronjobs on S3DF, and users no longer have to worry about token expiration as they did on AFS. Select one of the iana interactive nodes (and remember which one!) to run on. Note: crontab does NOT inherit your environment; you'll need to set that up yourself. Also, since crontab is per host (there is no trscrontab), the crontab will be lost if the node is reinstalled or removed. It's probably best to save your crontab as a file in your home directory so that you can re-add your cronjobs if this happens:

    crontab -l > ~/crontab.backup

Then, to add the jobs back in:

    crontab ~/crontab.backup
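Because cron does not inherit your login environment, each entry typically sources its own setup first; a hypothetical example (the script and log paths are placeholders):

    # m h dom mon dow  command
    30 2 * * * . $HOME/.profile && $HOME/scripts/nightly_task.sh >> $HOME/nightly.log 2>&1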
xrootd CLI commands are available via cvmfs:

    module load osg/client-latest
    which xrdcp
    module unload osg/client-latest   # when done with xrootd
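A typical copy with xrdcp looks like the following (server and paths are placeholders):

    xrdcp root://<server>//<path/to/remote/file> /path/to/local/destination/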
Oracle drivers etc. have been installed in /sdf/group/fermi/sw/oracle/. The setup.sh file in the driver-version directory sets up everything needed to issue sqlplus commands from the command line.
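For example, assuming a hypothetical version directory <version> and your own database credentials:

    source /sdf/group/fermi/sw/oracle/<version>/setup.sh
    sqlplus <user>@<database>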
The datacat and pipeline (Pipeline II) clients can be found at:

    /sdf/home/g/glast/a/datacat/prod/datacat
    /sdf/home/g/glast/a/pipeline-II/prod/pipeline
For now, we are leaving the live cvs repository on NFS. The cvs client has been installed on the iana nodes. Set:

    export CVSROOT=:ext:<user>@centaurusa.slac.stanford.edu:/nfs/slac/g/glast/ground/cvs
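With CVSROOT set, a checkout then looks like this (the module name is a placeholder):

    cvs checkout <module>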