GPAW Convergence Behavior

A talk given by Ansgar Schaefer on convergence behavior for rutiles is here (pdf).

General suggestions for helping GPAW convergence are here.

A discussion and suggestions for converging some simple systems can be found here.

Other convergence experience:

System | Who | Action
Graphene with vacancy | Felix Studt | Increase the Fermi temperature from 0.1 to 0.2; use the cg eigensolver
Graphene with vacancy | Chris O'Grady | Change nbands from -10 to -20; MixerDif(beta=0.03, nmaxold=5, weight=50.0)
Nitrogenase FeVCo for CO2 reduction | Lars Grabow | Use the Davidson solver (faster as well?); jvarley later suggested MixerSum
Several surfaces | Andy Peterson | Broyden mixer with beta=0.5
TiO2 | Monica Garcia-Mota | MixerSum(0.05, 6, 50.)
MnxOy | Monica Garcia-Mota | Broyden MixerSum
Co3O4 | Monica Garcia-Mota | Davidson eigensolver; MixerSum(beta=0.05, nmaxold=5, weight=50) or MixerSum(beta=0.05, nmaxold=6, weight=100)
MnO2 with DFT+U (U=+2 eV) | Monica Garcia-Mota | Marcin suggests disabling the DipoleCorrectionPoissonSolver (not yet tested)
MnO2 with DFT+U (U=+2 eV) | Monica Garcia-Mota | Henrik Kristofferson suggests that convergence is easier with a high U (U=4 eV); one can then shift to the preferred value
MnO2 with DFT+U (U=+2 eV) | Monica Garcia-Mota | (from Heine) Increase U in steps of, say, 0.1 eV (or smaller) and reuse the density and/or wave functions from the previous calculation. This tends to reduce the problem of being trapped in metastable electronic states, and it also makes convergence easier. Monica later reported that this helped.
Cu | Ask Hjorth Larsen | The first mixer parameter (beta) should probably be 0.1 for faster convergence, because Cu has a low DOS at the Fermi level. (Other transition metals may require lower values.)
N on Co/Ni (with BEEF) | Tuhin | rmm-diis eigensolver and MixerSum(beta=0.1, nmaxold=5, weight=50)
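
As a concrete illustration, below is a minimal GPAW calculator sketch showing how several of the settings from the table are passed as keywords. The functional, k-point mesh, and output file name are placeholders chosen for illustration, and the specific mixer/smearing/nbands values are just examples taken from the table, not general recommendations.

from gpaw import GPAW, FermiDirac, MixerSum

# Sketch only: mixer, smearing, eigensolver and nbands settings of the kind
# listed in the table above; xc, kpts and txt are assumed placeholders.
calc = GPAW(xc='PBE',
            kpts=(4, 4, 1),
            nbands=-20,                       # 20 extra empty bands
            occupations=FermiDirac(0.2),      # Fermi smearing width (eV)
            eigensolver='dav',                # Davidson eigensolver
            mixer=MixerSum(beta=0.05, nmaxold=5, weight=50.0),
            txt='convergence_test.txt')
atoms.set_calculator(calc)                    # 'atoms' is your ASE Atoms object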

Other Tricks:

GPAW Planewave Mode

Jens Jurgen has a post here that discusses how to select plane wave mode in your script.

It looks like we have to manually turn off the real-space parallelization with the keyword:

parallel={'domain': 1}

In plane-wave mode I believe we can also only parallelize over reduced k-points, spins, and bands; we have to set the counts for these manually so that they match the number of CPUs.
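
A minimal sketch of how this might look in a script; the plane-wave cutoff, k-point mesh, and parallelization counts below are placeholders, and the product of the 'kpt' and 'band' counts should match the number of CPUs requested.

from gpaw import GPAW, PW

# Sketch only: plane-wave mode with real-space domain decomposition disabled.
calc = GPAW(mode=PW(500),                     # plane-wave cutoff in eV (placeholder)
            kpts=(4, 4, 1),                   # placeholder k-point mesh
            parallel={'domain': 1,            # no real-space parallelization
                      'kpt': 4,               # parallelize over reduced k-points
                      'band': 2},             # and over bands (4 * 2 = 8 CPUs)
            txt='pw.txt')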

GPAW Geometry Optimizations

Thoughts from Lin and Thomas: with GPAW one can do geometry optimizations roughly a factor of 10 faster in LCAO mode (and with smaller memory requirements). It is then necessary to "tweak" the optimization with a little further running in FD mode.

Plus, LCAO mode has the added feature that convergence is typically easier, according to Heine.

I think it's difficult to automate the above process in one script, since the number of cores required for LCAO is typically lower than for FD (because of the lower memory usage).

But if you're limited by CPU time when doing GPAW optimizations it might be worth keeping the above in mind.
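
As a rough sketch, the two-step LCAO-then-FD workflow described above might look like the following; the structure file, basis set, grid spacing, k-points, and force criteria are all placeholders.

from ase.io import read
from ase.optimize import BFGS
from gpaw import GPAW

atoms = read('slab.traj')                     # placeholder structure file

# Step 1: cheap pre-relaxation in LCAO mode with a loose force criterion.
atoms.set_calculator(GPAW(mode='lcao', basis='dzp', kpts=(4, 4, 1), txt='lcao.txt'))
BFGS(atoms, trajectory='lcao_opt.traj').run(fmax=0.1)

# Step 2: refine ("tweak") the structure in finite-difference (FD) mode.
atoms.set_calculator(GPAW(mode='fd', h=0.18, kpts=(4, 4, 1), txt='fd.txt'))
BFGS(atoms, trajectory='fd_opt.traj').run(fmax=0.05)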

AJ adds: I would also warn against using LCAO as an initial guess for NEB calculations. I have tried this for two different systems and it turned out to be a tremendous waste of time. The NEB did not converge much faster with LCAO, and when I used the LCAO images as an initial guess for finite-difference mode it still took several restarts to converge. I have had better luck using reduced k-point sampling, a coarser grid spacing, and relaxed convergence criteria as an initial optimization for NEBs.

GPAW Memory Estimation

To get a guess at the right number of nodes to run on for GPAW, run the
following line interactively:

gpaw-python <yourjob>.py --dry-run=<numberofnodes>
(e.g. gpaw-python graphene.py --dry-run=16)

The number of nodes should be a multiple of 8 for the suncat farm and a
multiple of 12 for the suncat2 farm. The above runs quickly (because it
does not actually do the calculation). Then check that the following
number is <3 GB for the 8-core suncat farm, or <4 GB for the 12-core suncat2 farm:

Memory estimate
---------------
Calculator  574.32 MiB

Tips for Running with BEEF

If you use the BEEF functional:

Building a Private Version of GPAW

Some notes:

Jacapo Parallel NEB Example

You can find a Jacapo parallel NEB example here. This same script can be used for a restart (the interpolated traj files are only recreated if they don't exist).
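
The restart behavior mentioned above could be implemented roughly as follows. This is a generic sketch of the pattern, not the linked script; the endpoint file names, image file names, and number of images are placeholders.

import os
from ase.io import read, write
from ase.neb import NEB

nimages = 5                                   # placeholder: intermediate images
initial = read('initial.traj')                # placeholder endpoint files
final = read('final.traj')

images = [initial] + [initial.copy() for i in range(nimages)] + [final]
if not os.path.exists('neb1.traj'):
    # First run: build the band by interpolation and write each image once.
    neb = NEB(images)
    neb.interpolate()
    for i, image in enumerate(images):
        write('neb%d.traj' % i, image)
else:
    # Restart: reuse the existing image files instead of re-interpolating.
    images = [read('neb%d.traj' % i) for i in range(nimages + 2)]
    neb = NEB(images)

# Calculators would then be attached to the images and the band relaxed
# with an optimizer, as in the linked example.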

Some important notes:

Perturbing Jacapo Calculations

When taking an existing Jacapo calculation and making changes (e.g. adding an external field), it is important not to instantiate a new calculator (to work around some Jacapo bugs) but instead to read the previous atoms/calculator in from the .nc file. Johannes Voss has kindly provided an example with some comments here.
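
A minimal sketch of that pattern is below; the file names are placeholders and the actual perturbation step is only indicated, so see Johannes Voss's example for the details.

from ase.calculators.jacapo import Jacapo

# Read the atoms and the attached calculator back from the existing .nc file
# rather than constructing a fresh Jacapo() instance.
atoms = Jacapo.read_atoms('previous.nc')      # placeholder file name
calc = atoms.get_calculator()

calc.set_nc('perturbed.nc')                   # write further output to a new file (assumed step)
# ... apply the perturbation (e.g. the external field) through the calculator's
# setters here, then trigger the new self-consistent calculation:
atoms.get_potential_energy()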