...

To use the batch job submission commands, add the following two lines to your .login file:

Code Block

source /afs/slac/g/suncat/gpaw/setupenv
setenv PATH ${PATH}:/afs/slac/g/suncat/bin:/usr/local/bin
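
After sourcing the setup file, you can sanity-check that the environment points at the SUNCAT GPAW install. A minimal sketch (illustrative; the exact path printed depends on which version you sourced):

Code Block

# check_env.py -- quick sanity check of the GPAW environment (illustrative)
import gpaw
print(gpaw.__file__)  # should point into the SUNCAT software tree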

...

If you want to use a particular version (e.g. 27) of GPAW instead of the "default" above, use something like this instead:

Code Block

source /nfs/slac/g/suncatfs/sw/gpawv27/setupenv

...

Farm Name | Cores (or GPUs)           | Cores (or GPUs) Per Node | Memory Per Core (or GPU) | Interconnect           | Cost Factor | Notes
suncat    | 2272 Nehalem X5550        | 8                        | 3 GB                     | 1 Gbit Ethernet        | 1.0         |
suncat2   | 768 Westmere X5650        | 12                       | 4 GB                     | 2 Gbit Ethernet        | 1.1         |
suncat3   | 512 Sandy Bridge E5-2670  | 16                       | 4 GB                     | 40 Gbit QDR InfiniBand | 1.8         |
suncat4   | 1024 Sandy Bridge E5-2680 | 16                       | 2 GB                     | 1 Gbit Ethernet        | 1.5         | no local disk
gpu       | 119 Nvidia M2090          | 7                        | 6 GB                     | 40 Gbit QDR InfiniBand | N/A         |

...

Log in to a SUNCAT login server (suncatls1 or suncatls2, both @slac.stanford.edu) and run commands like these (note that the syntax is similar for gpaw/dacapo/jacapo):

Code Block

gpaw-bsub -o mo2n.log -q suncat-long -n 8 mo2n.py
dacapo-bsub -o Al-fcc-single.log -q suncat-long -n 8 Al-fcc-single.py
jacapo-bsub -o co.log -q suncat-long -n 8 co.py
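
The .py files above are ordinary ASE Python scripts. A minimal GPAW sketch of what such a script might contain (the system, cutoff, and k-points here are illustrative, not the actual mo2n.py; the syntax follows current ASE/GPAW and may differ slightly on older installs):

Code Block

# minimal_job.py -- illustrative GPAW job script (hypothetical system/parameters)
from ase.build import bulk
from gpaw import GPAW, PW

atoms = bulk('Mo')                       # example bulk system
atoms.calc = GPAW(mode=PW(400),          # plane-wave cutoff in eV
                  kpts=(4, 4, 4),        # Brillouin-zone sampling
                  txt='minimal_job.txt') # GPAW's own text output
print('Total energy: %.3f eV' % atoms.get_potential_energy())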

...

You can select a particular version to run (documented on the appropriate calculator's page):

Code Block

gpaw-ver-bsub 19 -o mo2n.log -q suncat-long -n 8 mo2n.py

You can also embed the job submission flags in your .py file with line(s) like:

Code Block

#LSF -o mo2n.log -q suncat-long
#LSF -n 8
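
For example, a complete script with embedded flags might start like this (a sketch; the body below is a placeholder, and it assumes the submission wrapper reads the #LSF lines from the top of the file as described above):

Code Block

#LSF -o hello.log -q suncat-long
#LSF -n 8
# The #LSF comment lines above are read at submission time;
# everything else runs as an ordinary Python script.
print('submission flags were embedded in the script itself')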

...

Log in to a SUNCAT login server (suncatls1 or suncatls2) to run these commands. More information about each is available from the Unix man pages.

Code Block

bjobs (shows your current list of batch jobs and jobIds)
bjobs -d (shows list of your recently completed batch jobs)
bqueues suncat-long (shows number of cores pending and running)
bjobs -u all | grep suncat (shows jobs of all users in the suncat queues)
bpeek <jobId> (examine logfile output from job that may not have been flushed to disk)
bkill <jobId> (kill job)
btop <jobId> (moves job priority to the top)
bbot <jobId> (moves job priority to the bottom)
bsub -w "ended\(12345\)" (wait for job id 12345 to be EXITed or DONE before running)
bmod [options] <jobId> (modify job parameters after submission, e.g. priority (using -sp flag))
bswitch suncat-xlong 12345 (move running job id 12345 to the suncat-xlong queue)
bmod -n 12 12345 (change number of cores of pending job 12345 to 12)
bqueues -r suncat-long (shows each user's current priority, number of running cores, CPU time used)
bqueues | grep suncat (allows you to see how many pending jobs each queue has)
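
If you need to script around these commands (for example, to run post-processing once a job finishes), here is a small sketch built only on bjobs as documented above. wait_for_job is a hypothetical helper, not a SUNCAT-provided tool, and it assumes Python 3; the -w dependency flag shown above is usually the better way to chain jobs:

Code Block

# wait_for_job.py -- hypothetical polling helper built on bjobs (illustrative)
import subprocess
import time

def wait_for_job(job_id, poll_seconds=60):
    """Return once bjobs no longer reports job_id as pending or running."""
    while True:
        out = subprocess.run(['bjobs', str(job_id)],
                             capture_output=True, text=True).stdout
        if 'PEND' not in out and 'RUN' not in out:
            return
        time.sleep(poll_seconds)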

...

These experimental compute nodes (suncat4) have relatively little memory and no local disk. Please follow these guidelines when submitting jobs:

  • don't run Jacapo/Dacapo on suncat4; those codes rely fairly heavily on a local disk, which these nodes don't have.
  • if you exceed the 2GB/core memory limit, the node will crash. Plane-wave codes (espresso, dacapo/jacapo, vasp) use less memory. If you use GPAW, make sure you check the memory estimate before submitting your job. Here's some experience from Charlie Tsai on what espresso jobs can fit on a node:

    Code Block
    
    For the systems I'm working with approximately 2x4x4 (a support that's 2x4x3, catalyst
    is one more layer on top) is about as big a system as I can get without running out of
    memory. For spin-polarized calculations, the largest system I was able to do was about
    2x4x3 (one 2x4x1 support and two layers of catalysts).
    
  • you can observe the memory usage of the nodes running your job with "lsload psanacs002" (if your job uses node "psanacs002"); the last column shows the free memory.

  • if you run espresso, you must use the following options, since there is no local disk (see the sketch after this list for these options in context):

    Code Block

    output = {'avoidio':True, 'removewf':True, 'wf_collect':False},

  • use the same job submission commands that you would use for suncat/suncat2
  • use queue name "suncat4-long"
  • the "-N" batch option (to receive email on job completion) does not work