GPAW Convergence Behavior

A talk by Ansgar Schaefer on convergence behaviour for rutiles is here (pdf).

General suggestions for helping GPAW convergence are here.

A discussion and suggestions for converging some simple systems can be found here.

Other convergence experience:

Graphene with vacancy (Felix Studt): increase the Fermi temperature from 0.1 to 0.2 and use the cg eigensolver.

Graphene with vacancy (Chris O'Grady): change nbands from -10 to -20 and use MixerDif(beta=0.03, nmaxold=5, weight=50.0).

Nitrogenase FeVCo for CO2 reduction (Lars Grabow): use the Davidson eigensolver (possibly faster as well), although jvarley later suggested MixerSum.

Several surfaces (Andy Peterson): Broyden mixer with beta=0.5.

TiO2 (Monica Garcia-Mota): MixerSum(beta=0.05, nmaxold=6, weight=50.0).

MnxOy (Monica Garcia-Mota): Broyden MixerSum.

Co3O4 (Monica Garcia-Mota): Davidson eigensolver with MixerSum(beta=0.05, nmaxold=5, weight=50) or MixerSum(beta=0.05, nmaxold=6, weight=100).

MnO2 with DFT+U (U = +2 eV) (Monica Garcia-Mota): Marcin suggests we disable the DipoleCorrectionPoissonSolver (not yet tested).

MnO2 with DFT+U (U = +2 eV) (Monica Garcia-Mota): Henrik Kristofferson suggests that convergence is easier with a high U (U = 4 eV), after which one can shift to the preferred value.

MnO2 with DFT+U (U = +2 eV) (Monica Garcia-Mota): Heine suggests increasing U in steps of about 0.1 (or smaller) and reusing the density and/or wave functions from the previous calculation. This tends to reduce the problem of being trapped in metastable electronic states, and it also makes convergence easier. Monica later reported that this helped.
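
For reference, a minimal sketch of how these kinds of mixer, eigensolver, and smearing settings are passed to a GPAW calculator; the particular values below are only examples lifted from the entries above, not recommendations for any specific system:

from gpaw import GPAW, FermiDirac, MixerSum

# Example settings only: Davidson eigensolver, MixerSum density mixing, and a
# larger Fermi-Dirac smearing width, as suggested in several entries above.
calc = GPAW(xc='PBE',
            eigensolver='dav',  # or 'cg'
            mixer=MixerSum(beta=0.05, nmaxold=5, weight=50.0),
            occupations=FermiDirac(0.2),
            txt='convergence_test.txt')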

Other tricks:

GPAW Planewave Mode

Jens Jurgen has a post here that discusses how to select plane wave mode in your script.

It looks like we have to manually turn off the real-space parallelization with the keyword:

parallel={'domain': 1}

In plane-wave mode I believe we can also only parallelize over reduced k-points, spins, and bands, so we have to set these manually so that they match the number of CPUs.
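
A minimal sketch of what this can look like in a script; the cutoff, k-points, and output file below are placeholders, not recommendations:

from gpaw import GPAW, PW

calc = GPAW(mode=PW(500),            # plane-wave cutoff in eV (placeholder)
            kpts=(4, 4, 1),          # placeholder k-point sampling
            parallel={'domain': 1,   # turn off real-space domain decomposition
                      'band': 1},    # no band groups: parallelize over k-points/spins
            txt='pw.txt')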

GPAW Memory Estimation

To get a guess at the right number of nodes to run on for GPAW, run the
following line interactively:

gpaw-python <yourjob>.py --dry-run=<numberofnodes>
(e.g. gpaw-python graphene.py --dry-run=16)

The number of nodes should be a multiple of 8. This runs quickly
(because it doesn't do the actual calculation). Then check that the
following number is < 3 GB for the 8-core farm and < 4 GB for the 12-core farm:

Memory estimate
---------------
Calculator  574.32 MiB

Tips for Running with BEEF

If you use the BEEF functional:
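
A minimal, hedged sketch of how BEEF-vdW is selected in a GPAW calculation, assuming a GPAW build with BEEF-vdW support; the mode and cutoff are placeholders:

from gpaw import GPAW, PW

calc = GPAW(mode=PW(500),      # placeholder cutoff
            xc='BEEF-vdW',     # request the BEEF-vdW functional
            txt='beef.txt')

The BEEF ensemble error estimate can then be evaluated afterwards with ASE's BEEFEnsemble (ase.dft.bee).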

Building a Private Version of GPAW

Some notes:

Jacapo Parallel NEB Example

You can find a Jacapo parallel NEB example here. Some lines need to change for a restart. An example is here.
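
As a hedged sketch of the kind of change a restart needs (the filename and image count below are hypothetical): instead of interpolating between the endpoints again, the images are re-read from the trajectory written by the previous run.

from ase.io import read

nimages = 7  # total number of images in the band, including both endpoints (hypothetical)
# Re-read the last band of images from the previous run's trajectory.
images = read('neb.traj@-%d:' % nimages)
# ...then re-attach calculators to the interior images as in the original script
# and continue the optimization.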

Some important notes: