...
This page gives a very brief introduction to SLAC's new S3DF (SLAC Shared Scientific Data Facility) cluster to help you get started. Note that, as of April 15, 2024, onboarding automatically creates both an Active Directory ("windows") account and a unix account. The AD account is only useful for cyber training and for logging into the Service Now ticket web portal.
For what follows, we assume you already have a unix account and that your main intent is to run the Fermitools/Fermipy. During the transition, issues are discussed in the #s3df-migration slack channel. You can also join the #help-sdf channel if you wish to see SLAC-wide discussion of S3DF issues.
...
Basically, ssh to s3dflogin.slac.stanford.edu and from there ssh to fermi-devl (no .slac.stanford.edu; it is a load balancer, though there is only one node so far) to do actual interactive work. The login nodes are not meant for analysis or for accessing data, and truly compute-intensive tasks belong on the batch system rather than the interactive nodes. Send email to s3df-help at slac.stanford.edu for issues.
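The two-hop login described above looks like this (replace `<you>` with your unix account name):

```shell
# First hop: the login node (not for analysis or data access)
ssh <you>@s3dflogin.slac.stanford.edu
# Second hop, from the login node, to the interactive work node:
ssh fermi-devl
```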
**Info**

**Unix password:** SLAC currently requires a password change every 6 months. You can do this at https://unix-password.slac.stanford.edu/.

**Cyber training:** Cyber training comes up annually. If you have an Active Directory (aka Windows) account, just follow the links. There are issues with the training system at the moment if you only have a unix account, so here is (hopefully temporary) advice on how to navigate it:

- You need your SLAC ID (SID), the xxxxxxx in the steps below. If you got an email saying your training is coming due, the SID is embedded in the url in that email. Otherwise, if your account has not been disabled, you can ssh to rhel6-64 and issue the command `res list user <your unix account name>`, which will give your SID (along with your account status). If none of that works, ask your SLAC Point of Contact.
- Go to https://slactraining.csod.com/ and DO NOT click on "forgot password". Give it your SID number (xxxxxxx) where "user name" is requested.
- The interim training password is "SLACtraining2005!". If it does not work, email slac-training, asking them to reset it. Then go back to the original link and enter your SID and this password.
- Then do CS100. In general, always use the SID wherever a "user name" is requested.
**Info**

You can modify your ssh config to allow direct passwordless access from your device to fermi-devl, by adding this to your `.ssh/config` file on your end:

```
Host slac*
    User <you>

Host slacl
    Hostname s3dflogin.slac.stanford.edu

Host slacd
    Hostname fermi-devl
    ProxyJump slacl
```

Then add your key to the login node, e.g.:

```
ssh-copy-id <you>@s3dflogin.slac.stanford.edu
```

For those using the cvs server on centaurusa from outside SLAC, you also have to add a ProxyJump for centaurusa. Since the cvs server name is written into all CVS/Root files of a cvs package, use the following:

```
Host centaurusa.slac.stanford.edu
    Hostname centaurusa
    ProxyJump slacl
```
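With a config like the one above in place, the jump becomes a single command (the `slacd` alias is from the example config, not a real hostname):

```shell
ssh slacd                 # jumps through s3dflogin to fermi-devl automatically
scp results.txt slacd:~/  # scp and rsync also work through the ProxyJump
```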
**Info**

`.bashrc`:

```
# SLAC S3DF - source all files under ~/.profile.d
```
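The comment above implies a loop over the fragments; a minimal sketch follows (the stanza actually provided by S3DF may differ, and the `*.sh` naming is an assumption):

```shell
# Sketch: source every *.sh fragment under ~/.profile.d
source_profile_d() {
    local d="${1:-$HOME/.profile.d}"   # directory of fragments, overridable
    local f
    if [ -d "$d" ]; then
        for f in "$d"/*.sh; do
            [ -r "$f" ] && . "$f"      # skip unreadable files
        done
    fi
}
source_profile_d
```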
...
**Info**

For generic advice on running in batch, see Running on SLAC Central Linux. Note that the actual batch system has changed and we have not updated that doc to reflect it, but it remains useful advice on copying data to local scratch, etc. If you find you cannot submit jobs to the fermi:users repo, ask for access in the #s3df-migration slack channel.

You need to specify an account and "repo" on your slurm submissions. The repos allow subdivision of our allocation to different uses. There are 4 repos available under the fermi account. The format is `--account fermi:<repo>`.

L1 and other-pipelines are restricted to known pipelines. Non-default repos have quality of service (qos) defaulting to normal (non-pre-emptible). At the time of writing, there is no accounting yet; when that is enabled, we'll have to decide how to split our allocation among the various repos.

S3DF Slurm organizes the different hardware resource types under Slurm partitions; Slurm does not have the concept of a batch queue. Users specify the resources their job needs (because, for example, a 12-core CPU request can be satisfied by different types of CPUs). An example submission script begins with `#!/bin/bash` followed by `#SBATCH` directives. Note that specifying `--gpus a100:1` is preferred over specifying `--partition=ampere` (the latter is not needed). If a GPU is not requested, your job will not have access to a GPU even if it lands on an ampere node.
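As a concrete sketch of a submission script (the job name, resource numbers, and script contents are illustrative; `fermi:users` is the repo mentioned above):

```shell
#!/bin/bash
#SBATCH --account=fermi:users     # account:repo
#SBATCH --job-name=fermi-example  # illustrative name
#SBATCH --output=slurm-%j.out
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8g
##SBATCH --gpus=a100:1            # uncomment to request a GPU; preferred over --partition=ampere

# Your analysis commands go here
srun hostname
```

Submit with `sbatch <scriptname>` and check status with `squeue -u $USER`.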
**Info**

You can run cron jobs on S3DF, and users don't have to worry about token expiration like on AFS. There is now a dedicated machine for cron: sdfcron001. Cron has been disabled on all other nodes. See: https://s3df.slac.stanford.edu/public/doc/#/service-compute?id=s3df-cron-tasks

Note: crontab does NOT inherit your environment; you'll need to set that up yourself. Also, crontab is per host (there is no trscrontab), so if the node is reinstalled or removed, the crontab will be lost. It's probably best to save your crontab as a file in your home directory so that you can re-add your cron jobs if this happens:

```
crontab -l > ~/crontab.backup
```

Then to re-add the jobs back in:

```
crontab ~/crontab.backup
```
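Because cron does not inherit your environment, a common pattern is to source your shell setup at the start of the job line (a sketch; the schedule, script, and log paths are illustrative):

```shell
# Illustrative crontab entry: run a script nightly at 02:30,
# sourcing ~/.bashrc first since cron starts with a bare environment
30 2 * * * . $HOME/.bashrc && $HOME/scripts/nightly.sh >> $HOME/logs/nightly.log 2>&1
```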
...
**Info**

For now, we are leaving the live cvs repo on nfs. The cvs client has been installed on the iana nodes. Set:

```
CVSROOT=:ext:$USER@centaurusa.slac.stanford.edu:/nfs/slac/g/glast/ground/cvs
```
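A typical checkout then looks like the following (the package name is a placeholder; `CVS_RSH=ssh` is the usual setting for `:ext:` access):

```shell
export CVS_RSH=ssh
export CVSROOT=:ext:$USER@centaurusa.slac.stanford.edu:/nfs/slac/g/glast/ground/cvs
cvs checkout <package>   # replace <package> with the actual package name
```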
**Info**

Calibrations go to `$LATCalibRoot` = /sdf/group/fermi/ground/releases/calibrations/. Write access is controlled by the glast-calibs permissions group. The environment variable is set in the group profile. (Note: /sdf/data/fermi/a/ground/releases/calibrations is historical and not to be used.)
**Info**

If you manage any of the unix groups from the old NFS cluster (e.g. glast-catalog, glast-skywatch, etc.), maintenance is still only available from the rhel6-64 machines, using the `ypgroup` command. This will change once the legacy filesystems go away.