

Direct Inversion in the Iterative Subspace (DIIS)

The DIIS method [132] was first applied to SCF convergence problems and later extended to geometry optimizations [133]. The method is suitable in the vicinity of a stationary point, and it is based on a linear interpolation/extrapolation of the available structures so as to minimize the length of an error vector.

The gradient-minimizing variant of DIIS (GDIIS) has been implemented in section 3.1 within the RFO framework; it is therefore briefly described here.

We want to obtain a corrected gradient $ \bar{{\bf g}}$ as a linear combination of the previous $ m$ gradient vectors $ \{{\bf g}_i\}$

$\displaystyle \bar{{\bf g}}=\sum_{i=1}^{m} c_i \cdot {\bf g}_i$ (2.80)

The error function to be minimized is $ \bar{{\bf g}}\cdot \bar{{\bf g}}$ subject to the constraint $ {\bf 1}^T{\bf c}=1$.
The corresponding Lagrangian, in matrix form, is

$\displaystyle L({\bf c},\lambda)=\frac{1}{2}{\bf c}^{T}{\bf G}{\bf c}-\lambda({\bf c}^{T}{\bf 1}-1)$ (2.81)

where $ {\bf G}_{ij}=\big( {\bf g}_i\cdot {\bf g}_j\big)$ is the matrix of scalar products between the gradients of the last $ m$ steps, and $ {\bf c}$ is the coefficient vector; both have a dimension equal to the number of stored iterations. Differentiating equation 2.81 with respect to $ \lambda$ and to $ {\bf c}$ and imposing the stationary condition gives

$\displaystyle \nabla_{\bf {c}}L={\bf G}{\bf c}-\lambda{\bf 1}={\bf0}$ (2.82)

$\displaystyle \frac{\partial L}{\partial \lambda}=-({\bf 1}^T{\bf c}-1)=0$ (2.83)

Combining both conditions in matrix form,

$\displaystyle \left(\begin{array}{cc} {\bf G} & -{\bf 1} \\ -{\bf 1}^T & 0 \end{array}\right) \left(\begin{array}{c} {\bf c} \\ \lambda \end{array}\right) =\left(\begin{array}{c} {\bf0} \\ -1\end{array}\right)$ (2.84)

and, solving for the coefficients,

$\displaystyle \left(\begin{array}{c} {\bf c} \\ \lambda \end{array}\right) =\left(\begin{array}{cc} {\bf G} & -{\bf 1} \\ -{\bf 1}^T & 0 \end{array}\right)^{-1} \left(\begin{array}{c} {\bf0} \\ -1\end{array}\right)$ (2.85)
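As an illustration (an explicit case, not part of the original derivation), with $ m=2$ stored gradients the system 2.84 reads

$\displaystyle \left(\begin{array}{ccc} {\bf g}_1\cdot{\bf g}_1 & {\bf g}_1\cdot{\bf g}_2 & -1 \\ {\bf g}_2\cdot{\bf g}_1 & {\bf g}_2\cdot{\bf g}_2 & -1 \\ -1 & -1 & 0 \end{array}\right) \left(\begin{array}{c} c_1 \\ c_2 \\ \lambda \end{array}\right) =\left(\begin{array}{c} 0 \\ 0 \\ -1\end{array}\right)$

where the last row simply enforces $ c_1+c_2=1$.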

Obtaining the coefficients $ {\bf c}$ therefore only requires the inversion of a matrix whose dimension is the number of stored iterations plus one.
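As a sketch of how equations 2.80-2.85 translate into practice (the function and variable names below are illustrative and do not come from the implementation of section 3.1), the coefficients and the corrected gradient can be computed as follows:

import numpy as np

def gdiis_corrected_gradient(grads):
    # grads: list of the last m gradient vectors g_i (1-D NumPy arrays)
    m = len(grads)
    # G_ij = g_i . g_j (equation 2.81)
    G = np.array([[np.dot(gi, gj) for gj in grads] for gi in grads])
    # Bordered (m+1)x(m+1) matrix of equation 2.84
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = G
    A[:m, m] = -1.0
    A[m, :m] = -1.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    # Equation 2.85: solve for the coefficients c and the multiplier lambda
    sol = np.linalg.solve(A, rhs)
    c = sol[:m]
    # Equation 2.80: corrected gradient as a linear combination
    g_bar = sum(ci * gi for ci, gi in zip(c, grads))
    return c, g_bar

From the previous system of equations we thus obtain the improved gradient, which is then used to build the improved Augmented Hessian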

$\displaystyle \left(\begin{array}{cc} 0 & \bar{{\bf g}}^T \\ \bar{{\bf g}} & {\bf B} \end{array}\right) \qquad \textrm{instead of} \qquad \left(\begin{array}{cc} 0 & {\bf g}^T \\ {\bf g} & {\bf B} \end{array}\right)$ (2.86)
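For completeness, a minimal sketch (again with illustrative names, not the thesis implementation) of how the corrected gradient would enter the RFO step through this improved Augmented Hessian, assuming a minimum search where the step is taken from the lowest eigenpair:

import numpy as np

def rfo_step(g_bar, B):
    # Build the Augmented Hessian of equation 2.86 with the corrected
    # gradient g_bar and the current (updated) Hessian B
    n = g_bar.size
    AH = np.zeros((n + 1, n + 1))
    AH[0, 1:] = g_bar
    AH[1:, 0] = g_bar
    AH[1:, 1:] = B
    # Lowest eigenpair; the eigenvector is scaled so that its first
    # component equals one, and the remaining components give the step
    w, V = np.linalg.eigh(AH)
    v = V[:, 0]
    return v[1:] / v[0]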

