The SLAC theory group currently has three computing nodes that can be used by any member of the theory group.
The nodes are maintained by SLAC IT and provide a Linux environment.


First, make sure that a SLAC UNIX user account has been created for you. This is done by SLAC IT. Please note that the UNIX accounts are different from the SLAC Windows (or Exchange) accounts that you use to read SLAC email, etc.

In order to gain access to the machines your SLAC UNIX user account must be added to the netgroup "u-theory". To be able to access the Theory Group disk space, you will need to be added to the ypgroup "theorygrp".
The easiest way is to ask Alex to do both of these steps for you. The changes may take ~24-72 hours to take effect. Alternatively, you may open a ticket with SLAC IT and request to be added to both groups in order to access the machines.
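
Once the change has propagated, you can check from a SLAC Linux machine whether the group shows up for your account; a quick sketch using standard tools:

```shell
# List the groups the current account belongs to; after the change takes
# effect, "theorygrp" should appear in this list on the theory machines.
id -Gn
```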




Access via ssh is possible from within the SLAC network. To connect from outside, use VPN (details here).
To connect from a terminal use, for example,

    ssh -Y your-slac-username@epp-theory01

Use the -Y option if you'd like to enable X11 forwarding, which lets remote applications open windows on your local display. Make sure you have an X11 server installed on your local system. For example, on a Mac, this would be XQuartz.

Alternatively, ssh into one of the SLAC gateway machines, which are accessible without VPN. Once there, ssh to the theory machines,

    ssh epp-theory01
    ssh epp-theory02
    ssh epp-theory03

More details are available here.

SSH via ProxyJump

In order to ssh directly to an epp-theory machine from outside the SLAC network, without the two-step procedure explained above, you can set up a ProxyJump through the gateway machines.
To do so, on the computer from which you want to connect, edit (or create, if not present) the ssh configuration file, usually found at ~/.ssh/config on a Linux machine, and add the following lines

    Host slacEPP1
        HostName epp-theory01
        User your-slac-username
        # Replace with the hostname of one of the SLAC gateway machines:
        ProxyJump your-slac-username@<slac-gateway-hostname>

Once that is done, you can connect to epp-theory01 by simply issuing the command

    ssh your-slac-username@slacEPP1

even if you are outside the SLAC network and you don't have the VPN active.
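
Before relying on the alias, you can ask ssh to print the settings it would actually use, without opening a connection; slacEPP1 here is the alias defined above:

```shell
# Print the effective configuration for the alias (no connection is made):
ssh -G slacEPP1 | grep -iE '^(hostname|user|proxyjump)'
```

If the ProxyJump line does not show up, the config block is not being picked up.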

From an authentication point of view, this works from any machine that can ssh to the SLAC gateway machines.

If you also want to ssh to epp-theory02 or epp-theory03, add more blocks like the one above to your config file, changing the Host and HostName entries accordingly.
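
Alternatively, a single block can cover all three machines at once; this is a sketch relying on OpenSSH's support for multiple patterns on a Host line (the gateway hostname is a placeholder):

```
Host epp-theory01 epp-theory02 epp-theory03
    User your-slac-username
    # Replace with the hostname of one of the SLAC gateway machines:
    ProxyJump your-slac-username@<slac-gateway-hostname>
```

With this variant, `ssh epp-theory02` works directly from outside the network, with no per-machine alias needed.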

Note that since other secure commands such as scp are built on ssh, this setup also lets you copy files to the epp-theory machines with a simple

    scp local-file-to-copy your-slac-username@slacEPP1:path-to-destination

Storage Space

In addition to your AFS home directory, the theory machines have local drives.
These local drives are mounted on the machines at

  • /nfs/farm/g/theory/u1
  • /nfs/farm/g/theory/u2
  • /nfs/farm/g/theory/u3

and provide 22TB, 14TB and 27TB of disk space respectively.
Make sure that enough space is available for your files when writing to these drives, for example by using

    df -h

When you log out of the computing nodes, your AFS tokens expire, so anything you leave running loses write access to AFS, but it can still write to these local disks.
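
Before starting a job that writes a lot of output, it can help to check both the free space and how much you are already using; a sketch with standard GNU tools:

```shell
# Free space on the drive holding the current directory:
df -h .

# Total size of the current directory, and of each top-level subdirectory:
du -sh .
du -h --max-depth=1 . | sort -h
```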


The machines already have a lot of handy software installed. This includes many software packages available via the SLAC AFS system under /afs/. See here for details.

For example, the Intel compilers are found in /afs/

Likewise, Maple releases live in /afs/. The Maple 2018 binaries are linked into /usr/local/bin, so typing "maple" or "xmaple" should bring them up, provided you have /usr/local/bin in your PATH.

Mathematica for LINUX is also installed, version 12.3.1 as of this writing. So long as you have /usr/local/bin/ in your PATH, both “mathematica” and “math” (invoking the kernel through the command line) should work.
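
Both Maple and Mathematica rely on /usr/local/bin being in your PATH; here is a quick check that prepends it only if missing (POSIX shell):

```shell
# Ensure /usr/local/bin is on the PATH (prepend it only if absent):
case ":$PATH:" in
  *:/usr/local/bin:*) ;;                   # already there, nothing to do
  *) export PATH=/usr/local/bin:$PATH ;;   # prepend it for this session
esac
```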

Special software can be installed globally on our machines - contact SLAC IT.


The machines are used in a shared fashion - please respect the computing needs of others that might be using them at the same time.
Check how much memory and CPU capacity is available before launching heavy jobs.
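
For example, with standard Linux tools (these report totals for the whole node, which is shared):

```shell
# Memory currently free and in use:
free -h

# Number of CPU cores, and the load averages over 1/5/15 minutes:
nproc
uptime
```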


To run programs for extended periods of time or when disconnected from the computing nodes use programs like screen.
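
A minimal screen workflow, assuming screen is installed (session and program names are examples):

```shell
# Start a detached session running the job; it keeps running after logout:
screen -dmS longjob ./my_long_program

# Later (possibly from a new login), list sessions and reattach:
screen -ls
screen -r longjob
```

Detach again from inside a session with Ctrl-a d.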

Likewise, be mindful of the shared disk space on u1, u2, and u3. To be flexible, we do not impose per-user quotas, but please make sure your output doesn't fill the disks, which will make it impossible for anyone in the group to use them.
