
Results and discussion

We coupled Bofill's LNR diagonalization method to the RFO geometry optimization source code described in section 3.1. In addition, we tested several shapes for the initial Hessian, as displayed in figure 3.8.
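
To fix ideas, the following is a minimal sketch of the dense RFO step that the iterative diagonalization is meant to replace: the Augmented Hessian is the Hessian bordered by the gradient, its lowest eigenvector is extracted, and the displacement is recovered after the intermediate normalization. The routine name and the use of NumPy are illustrative assumptions; this is not the actual source code of section 3.1.

    import numpy as np

    def rfo_step(hessian, gradient):
        """Schematic dense RFO step: build the (n+1)x(n+1) Augmented Hessian,
        take its lowest eigenpair and rescale the eigenvector into a
        geometry displacement (illustration only)."""
        n = gradient.size
        aug = np.zeros((n + 1, n + 1))
        aug[:n, :n] = hessian        # ordinary Hessian block
        aug[:n, n] = gradient        # gradient border
        aug[n, :n] = gradient
        vals, vecs = np.linalg.eigh(aug)  # exact (dense) diagonalization
        v = vecs[:, 0]                    # lowest eigenvector
        # intermediate normalization; a nearly vanishing last component
        # would require special care in a real implementation
        return v[:n] / v[n]

With a dense diagonalization this is only viable for small systems; for large systems the full (n+1) x (n+1) matrix cannot be stored and diagonalized directly, which is the memory problem that motivates the iterative schemes tested here.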

Despite many different implementations, the results were not satisfactory. The iterative diagonalization did not converge within 100 steps. Note that 100 steps means that the internal subspace reaches dimension 100, and at every iteration an internal eigenvalue problem whose size equals the number of external iterations must be solved, which increases the computational cost accordingly. In short, a method that is very powerful for diagonalizing many kinds of Hamiltonian matrices [256] was unable to converge for the Augmented Hessian diagonalization.
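
The cost argument can be made concrete with a schematic Davidson-type loop for the lowest eigenpair of a symmetric matrix that is available only through matrix-vector products. This is a sketch of the general idea, not the LNR implementation that was actually coupled to the optimizer; the function names and the diagonal preconditioner are illustrative assumptions.

    import numpy as np

    def davidson_lowest(matvec, diag, n, tol=1e-6, max_iter=100):
        """Schematic Davidson-type iteration for the lowest eigenpair.
        At iteration k the internal subspace has dimension k, so a k x k
        eigenproblem must be solved; after 100 external iterations this
        internal cost is no longer negligible."""
        b = np.zeros(n); b[np.argmin(diag)] = 1.0      # initial guess vector
        V = [b]
        for k in range(1, max_iter + 1):
            Vm, _ = np.linalg.qr(np.column_stack(V))   # orthonormal subspace
            AV = np.column_stack([matvec(Vm[:, i]) for i in range(Vm.shape[1])])
            H_small = Vm.T @ AV                        # k x k internal matrix
            vals, vecs = np.linalg.eigh(H_small)
            theta, s = vals[0], vecs[:, 0]
            x = Vm @ s                                 # Ritz vector
            r = AV @ s - theta * x                     # residual
            if np.linalg.norm(r) < tol:
                return theta, x
            denom = diag - theta                       # diagonal preconditioner
            denom[np.abs(denom) < 1e-8] = 1e-8
            V = [Vm[:, i] for i in range(Vm.shape[1])] + [r / denom]
        raise RuntimeError("no convergence within max_iter steps")

In this schematic form only matrix-vector products with the Augmented Hessian are required, never its explicit diagonalization; the price is the internal subspace that grows by one vector per external iteration.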

In small test cases we observed that when the matrix in the denominator of equation 3.19, which must remain cheap to invert, is approximated less drastically, the inexact LNR equations become less inexact and the diagonalization converges together with the geometry optimization. This happens when the approximate matrix is represented as a square block plus a vector, with the square part spanning about half of the full space. For large matrices such a representation is not affordable, and the square part must be kept smaller.
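
Equation 3.19 is not reproduced here; the sketch below only illustrates one possible reading of the "square plus vector" representation, namely keeping an exact square block on a chosen subset of coordinates and only the diagonal elements elsewhere, so that the inversion reduces to one small dense solve plus a diagonal division. The index splitting and routine name are illustrative assumptions, not the form actually used in equation 3.19.

    import numpy as np

    def apply_square_plus_diag_inverse(A, k, rhs):
        """Solve M x = rhs where M keeps the exact leading k x k block of A
        and only the diagonal of the remaining part (one reading of the
        'square + vector' approximation)."""
        x = np.empty_like(rhs)
        x[:k] = np.linalg.solve(A[:k, :k], rhs[:k])   # dense square block
        x[k:] = rhs[k:] / np.diag(A)[k:]              # diagonal ('vector') part
        return x

The larger the square block, the closer M is to the true denominator matrix, which is why convergence improves when the block covers about half of the space, but the k x k solve then dominates the cost for large systems.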

We also compared the LNR iterative diagonalization with the original Davidson method, and the latter failed as well. We then tried increasing (i.e. loosening) the convergence threshold for the iterative diagonalization, computing the corresponding geometry displacement and proceeding with the geometry optimization anyway. The displacement vectors obtained in this way were not accurate enough to reach the stationary point. The search remained unsuccessful even when the displacement vector was improved with a DIIS strategy (introduction, section 1.3.4.3).
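
For reference, a minimal sketch of a plain DIIS extrapolation is given below, assuming a stored history of displacement vectors and their associated error vectors; it only illustrates the general strategy referred to in section 1.3.4.3, not the exact scheme used there, and the function name is an illustrative assumption.

    import numpy as np

    def diis_extrapolate(vectors, errors):
        """Plain DIIS: combine the stored vectors with coefficients c that
        minimize |sum_i c_i e_i| under the constraint sum_i c_i = 1,
        obtained from the usual bordered linear system."""
        m = len(vectors)
        B = np.empty((m + 1, m + 1))
        B[:m, :m] = np.array([[np.dot(ei, ej) for ej in errors] for ei in errors])
        B[m, :m] = -1.0
        B[:m, m] = -1.0
        B[m, m] = 0.0
        rhs = np.zeros(m + 1); rhs[m] = -1.0
        c = np.linalg.solve(B, rhs)[:m]
        return sum(ci * vi for ci, vi in zip(c, vectors))

DIIS can only interpolate within the span of the stored history, so it cannot repair displacement vectors that are systematically inaccurate, which is consistent with the failure observed here.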

After these failed attempts we conclude that our Augmented Hessian is very difficult to diagonalize with this kind of method. The origin of the problem may be the non-sparsity of the Hessian in the Cartesian coordinate representation, or an intrinsic redundancy of the Augmented Hessian matrix (it can be shown that the gradient can be written as a linear combination of the eigenvectors of the Hessian). The latter argument carries less weight if we take into account that Bofill and co-workers obtained successful results in small systems where internal coordinates are used [155].
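
The statement in parentheses is simply the expansion of the gradient in the complete eigenbasis of the symmetric Hessian,

    g = \sum_{i=1}^{n} \left( v_i^{\mathrm{T}} g \right) v_i ,
    \qquad H v_i = \lambda_i v_i ,

so the gradient border of the Augmented Hessian introduces no direction that is not already spanned by the Hessian eigenvectors.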

