

Limited memory: L-BFGS

The limited memory strategy avoids two of the bottlenecks mentioned above for the Newton-Raphson algorithm applied to big systems: the storage and the inversion of large matrices. Although a quasi-Newton-Raphson algorithm is used, the inverse of the Hessian matrix is never built explicitly; instead, the product of the inverse Hessian with the gradient is computed directly, so no Hessian diagonalization is required.

$\displaystyle {\bf B}_{k+1}{\bf v}={\bf B}_0{\bf v}+\sum_{i=k-m}^k\left[{\bf j}_i\,{\bf u}_i^T {\bf v}+{\bf u}_i\left({\bf j}_i^T {\bf v}-({\bf j}_i^T \Delta {\bf q}_i)\,{\bf u}_i^T {\bf v}\right)\right]$ (2.93)

For instance, in equation 2.93, if the vector ${\bf v}$ is the gradient at the current geometry and ${\bf B}_0$ the inverse of the initial Hessian, the displacement of the Newton-Raphson equation for step $k+1$ (equation 2.75) is obtained directly as a matrix-vector product. A diagonal matrix must be used for ${\bf B}_0$, otherwise the expensive inversion of a full Hessian would be required; the unit matrix is usually a good choice.
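Written out, each summand in equation 2.93 is simply the action on ${\bf v}$ of a symmetric rank-two correction built from the vectors ${\bf j}_i$, ${\bf u}_i$ and the displacement $\Delta {\bf q}_i$,

$\displaystyle \left[{\bf j}_i{\bf u}_i^T+{\bf u}_i{\bf j}_i^T-({\bf j}_i^T\Delta{\bf q}_i)\,{\bf u}_i{\bf u}_i^T\right]{\bf v}= {\bf j}_i({\bf u}_i^T{\bf v})+{\bf u}_i\left({\bf j}_i^T{\bf v}-({\bf j}_i^T\Delta{\bf q}_i)({\bf u}_i^T{\bf v})\right)$

so the whole product requires only scalar products and vector additions of dimension $N$, and an $N\times N$ matrix is never stored.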

What makes this method powerful is that only information from the last $m$ steps is used to update this matrix-vector product, so only the geometries and gradients of the last $m$ steps have to be stored. When a BFGS update formula is used, this procedure is called L-BFGS. The method was developed by Jorge Nocedal [152,153], and the source code can be obtained free of charge from the web. Recently, Nocedal and co-workers [154] have combined L-BFGS with a Hessian-free Newton method that improves the efficiency of the minimization process.
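In practice, the product of the approximate inverse Hessian with the gradient is usually evaluated with the two-loop recursion of Nocedal. The following Python/NumPy code is only a minimal sketch of that recursion, not the code distributed by Nocedal; the function name lbfgs_direction, the lists s_list and y_list (the last $m$ displacements and gradient differences), and the scalar gamma (the diagonal value of ${\bf B}_0$) are illustrative choices.

    import numpy as np

    def lbfgs_direction(grad, s_list, y_list, gamma=1.0):
        """Return an approximation to B_k * grad using only the last m
        stored pairs: s_i = geometry displacement, y_i = gradient difference.
        The initial inverse Hessian B_0 is taken as gamma times the unit matrix."""
        rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
        q = grad.copy()
        alphas = []
        # First loop: from the newest pair back to the oldest one
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            alpha = rho * np.dot(s, q)
            q = q - alpha * y
            alphas.append(alpha)
        r = gamma * q  # apply the diagonal initial matrix B_0
        # Second loop: from the oldest pair forward to the newest one
        for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            beta = rho * np.dot(y, r)
            r = r + (alpha - beta) * s
        return r  # the quasi-Newton displacement is then -r

The displacement for step $k+1$ is the negative of the returned vector, and after each accepted step the oldest pair is discarded so that never more than $m$ pairs are kept.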

This method is useful for minima but, as will be explained later, it cannot be applied to transition-state searches. There is no control over the number of positive eigenvalues of the approximate Hessian and, as already mentioned, the BFGS formula is not suitable for locating transition states.

Other strategies based on the limited memory idea can be adopted when a different update scheme is more suitable. One example is the limited memory Powell update [155,2], which will be explained and tested in section 3.4.

