Submitting Batch Jobs
These are the commands for ASE/python mode and the "native" (no ASE/python) mode:
esp-ver-bsub <version> myscript.py
esp-ver-bsub-native <version> -q suncat-test -o my.log -n 8 pw.x -in pw.inp
k-point Parallelization
- k-point parallelization across nodes will not be as cpu-efficient as planewave parallelization within one node, so use it judiciously
- k-point parallelization is not as memory efficient as planewave parallelization, but it is supposed to scale better to more nodes (ask cpo if you want a better explanation)
- vossj and cpo have not yet seen good scaling behavior for the k-point parallelization, at least with small systems, so perhaps we're doing something wrong
- to turn on k-point parallelization:
- for ase mode: add parameter "parflags='-npool 2'" to the espresso object. This is a general-purpose string for passing run-time options to espresso executables.
- for native mode: add something like "-npool 2" at the end of the line
- an example for 16 cores (2 nodes) and npool=2: each of the 2 pools of 8 cores would parallelize over planewaves, but the 2 pools would process pairs of k-points in parallel.
- if you have done it correctly, you should see a line about "K-points division" in your espresso log file (the planewave parallelization produces a line like "R & G space division")
- there is a chicken-and-egg problem: to choose npool you need to know the number of reduced k-points, but espresso only reports that number after the job has started. A workaround is to first run the job briefly in the test queue to learn the reduced k-point count.
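The constraints above can be sketched as a small helper. This is a hypothetical function, not part of the espresso interface: it assumes npool should divide the total core count, should divide the reduced k-point count for even load balancing, and should leave each pool enough cores for planewave parallelization (the name choose_npool and the min_cores_per_pool threshold are illustrative assumptions).

```python
def choose_npool(n_cores, n_kpts, min_cores_per_pool=4):
    """Pick the largest npool that divides both the core count and the
    reduced k-point count, while keeping min_cores_per_pool cores per
    pool for planewave parallelization. Illustrative sketch only."""
    best = 1
    for npool in range(1, n_kpts + 1):
        if (n_cores % npool == 0 and n_kpts % npool == 0
                and n_cores // npool >= min_cores_per_pool):
            best = npool
    return best

# For the 16-core, 2-k-point example above this picks npool=2:
# two pools of 8 cores, each pool parallelizing over planewaves.
```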
Example script:
#!/usr/bin/env python
#LSF -q suncat-test -n 2 -o H.log -e H.err

from ase import optimize
from ase import Atoms
from espresso import espresso

a = Atoms('H2', [[0, 0, 0], [0.9, 0, 0]], cell=(3, 3, 3))
calc = espresso(pw=400, dw=4000, kpts=(1, 1, 1), nbands=-5, xc='BEEF')
a.set_calculator(calc)
qn = optimize.QuasiNewton(a, trajectory='relax.traj')
qn.run(fmax=0.01)
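After a job has run, the log lines mentioned in the k-point parallelization section tell you which scheme was used. Here is a minimal sketch of such a check; the function name and return values are assumptions for illustration, and only the two marker strings come from the espresso log format described above:

```python
def parallelization_mode(log_text):
    """Report which parallelization scheme an espresso log describes,
    based on the marker lines espresso prints. Illustrative sketch."""
    if "K-points division" in log_text:
        return "k-point (npool)"
    if "R & G space division" in log_text:
        return "planewave"
    return "unknown"
```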
Versions
Version | Date | Comment
1 | 12/3/2012 | initial version
2 | 12/5/2012 | use mkl fftw
3 | 12/7/2012 | UNSTABLE version: developers allowed to change espresso.py. Users can override espresso.py by putting their own espresso.py in directory $HOME/espresso
4,4a | 12/10/2012 | update to the latest svn espresso-src and espresso python
5 | 2/14/2013 | entropy corrections added and default parameters changed (smearing type and width)
6,6a | 3/7/2013 | many changes: move to a combination of dacapo/espresso pseudopotentials (previously just dacapo), add spin-polarized BEEF
7,7a | 4/5/2013 | update the python interface for bug fixes. Numbers shouldn't change from v6
8,8a | 4/5/2013 | more bug fixes, in particular no need for calc.stop() and support for k-point parallelization with ASE. Numbers shouldn't change from v6/v7
SUNCAT Quantum Espresso Talks
Introduction/Usage (Johannes Voss): jvexternal.pdf
Accuracy (Jess Wellendorff, Keld Lundgaard, NOTE: password protected because it contains VASP benchmark data): kelu.pdf
Speed/Convergence (AJ Medford): aj.pptx
Scaling behavior (Christopher O'Grady): espscaling.pptx
Espresso ASE To-Do List
- neb
- constraints interface (needed for neb)
- dos
- bandgaps
- separation of site-specific code from ASE code (including site-specific "scratch")
- make beef errors accessible from ASE
- beef self-tests integrated with espresso self-tests
- become part of ASE svn (need to follow new ASE guidelines)
- dry-run mode to get memory estimate
- support k-point parallelization (and others, e.g. scalapack)
- record uspp and executable directory in output (and/or svn version, somehow?)
- documentation/examples (including on ASE website)
- eliminate need for calc.stop() with multiple calculations
- get work function without dumping out the electrostatic cube file? (chuan has tools for this)
- dipole correction goes in the middle of unit cell by default (in python, chuan makes sure it goes in the biggest gap)