...

These experimental computing nodes have relatively little memory and no local disk. Please use the following guidelines when submitting jobs:

  • don't run Jacapo/Dacapo on suncat4; these codes rely heavily on a local disk, which these nodes do not have.
  • if you exceed the 2 GB/core memory limit, the node will crash. Plane-wave codes (espresso, dacapo/jacapo, vasp) use relatively little memory. If you use GPAW, check the memory estimate before submitting your job. Here is some experience from Charlie Tsai on what espresso jobs can fit on a node:
    Code Block
    For the systems I'm working with approximately 2x4x4 (a support that's 2x4x3, catalyst
    is one more layer on top) is about as big a system as I can get without running out of
    memory. For spin-polarized calculations, the largest system I was able to do was about
    2x4x3 (one 2x4x1 support and two layers of catalysts).
    
  • you can monitor the memory usage on the nodes running your job with "lsload psanacs002" (substituting the name of your job's node for "psanacs002"); the last column shows the free memory.
  • if you run espresso, you must use the following options, since there is no local disk:
    Code Block
    output = {'avoidio':True,
              'removewf':True,
              'wf_collect':False},
    
  • use the same job submission commands that you would use for suncat/suncat2
  • use queue name "suncat4-long"
  • the "-N" batch option (to receive email on job completion) does not work
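When watching several nodes, the free-memory column from lsload can be pulled out programmatically. The sketch below assumes the default LSF lsload column layout, where free memory is the last column ("mem"); adjust the parsing if your site configures different columns:

```python
def free_memory(lsload_output):
    """Parse `lsload` output and return a {host: free-memory} dict.

    Assumes the default LSF column layout, where the last column
    ("mem") reports free memory on each host.
    """
    result = {}
    for line in lsload_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        if fields:
            result[fields[0]] = fields[-1]  # host name -> free memory
    return result

# Example with output captured from `lsload psanacs002`:
sample = """\
HOST_NAME  status  r15s   r1m  r15m   ut    pg  ls    it   tmp   swp   mem
psanacs002     ok   0.0   0.1   0.1   1%   0.0   0 10080    0M    0M  14G
"""
print(free_memory(sample))  # {'psanacs002': '14G'}
```

This keeps the check scriptable, e.g. to warn before submitting when free memory on the target node is low.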
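To avoid retyping the diskless-node espresso options above in every script, they can be kept in one helper. This is a sketch; the helper name `diskless_output_options` is made up here, and the keys are exactly the three required above (the ase-espresso calculator call itself is not shown):

```python
def diskless_output_options():
    """Return the espresso `output` options required on nodes with
    no local disk: avoid scratch I/O, remove wavefunction files,
    and do not collect wavefunctions."""
    return {'avoidio': True,
            'removewf': True,
            'wf_collect': False}

# Would be passed to the calculator as output=diskless_output_options()
print(diskless_output_options())
```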