Tensor-Krylov method for computing eigenvalues of parameter-dependent matrices

06/12/2020 ∙ by Koen Ruymbeek, et al.

In this paper we extend the Residual Arnoldi method for computing an extreme eigenvalue (e.g. the eigenvalue with largest real part, or the dominant eigenvalue) to the case where the matrices depend on parameters. The difference between the Residual Arnoldi method and the classical Arnoldi algorithm is that in the former the eigenvalue residual is added to the subspace. We develop a Tensor-Krylov method that applies the Residual Arnoldi method (RA) to a grid of parameter points simultaneously; the subspace then contains an approximate Krylov space for all of these points. Instead of adding the residuals for all parameter values to the subspace, we form a low-rank approximation of the matrix whose columns are these residuals and add only its column space to the subspace. To keep the computations efficient, the dimension of the subspace must be limited, and the method must be restarted once the subspace reaches the prescribed maximal dimension. The novelty of this approach is twofold. Firstly, we observed that a large error in the low-rank approximations can be tolerated without slowing down the convergence, which implies that more iterations can be performed before restarting. Secondly, we pay particular attention to the way the subspace is restarted, since classical restarting techniques yield a subspace that is too large in our setting; we motivate why it suffices to keep only the approximation of the sought eigenvector. At the end of the paper we extend this algorithm to a shift-and-invert Residual Arnoldi method that computes the eigenvalue closest to a shift σ for a specific parameter dependency. We provide theoretical results and report numerical experiments. The Matlab code is publicly available.
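The abstract's main loop — extract Ritz pairs for every parameter sample from one shared subspace, stack the residuals, replace them by a low-rank approximation, extend the subspace with its column space, and restart by keeping only the current eigenvector approximations — can be sketched as follows. This is a minimal illustration in NumPy, not the authors' Matlab code; all function and parameter names (`tensor_krylov_ra`, `max_dim`, `rank`, `restarts`) are hypothetical, and the truncation rank is fixed rather than chosen adaptively from the allowed low-rank error.

```python
import numpy as np

def tensor_krylov_ra(A_list, max_dim=25, rank=2, restarts=10, tol=1e-8):
    """Sketch of a Tensor-Krylov Residual Arnoldi iteration: one shared
    subspace V approximates the eigenvector with largest real part of
    A(mu_i) for every parameter sample A_list[i].  Hypothetical interface."""
    n = A_list[0].shape[0]
    rng = np.random.default_rng(0)
    v0 = rng.standard_normal(n)
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    ritz_vecs = [V[:, 0]] * len(A_list)
    for _ in range(restarts):
        while V.shape[1] < max_dim:
            residuals, ritz_vecs = [], []
            for A in A_list:
                H = V.T @ (A @ V)                 # projected (small) matrix
                w, Y = np.linalg.eig(H)
                k = np.argmax(w.real)             # target: largest real part
                x = V @ Y[:, k].real              # Ritz vector in full space
                x /= np.linalg.norm(x)
                theta = x @ (A @ x)               # Rayleigh quotient
                residuals.append(A @ x - theta * x)
                ritz_vecs.append(x)
            R = np.column_stack(residuals)
            if np.max(np.linalg.norm(R, axis=0)) < tol:
                return ritz_vecs                  # converged for all samples
            # Low-rank approximation of the residual matrix: keep only the
            # leading `rank` left singular vectors (its column space).
            U, _, _ = np.linalg.svd(R, full_matrices=False)
            D = U[:, :rank]
            # Orthogonalize against V and extend the subspace.
            D = D - V @ (V.T @ D)
            Q, _ = np.linalg.qr(D)
            V = np.column_stack([V, Q])
        # Restart: keep only the current eigenvector approximations,
        # compressed to an orthonormal basis.
        X = np.column_stack(ritz_vecs)
        Ux, _, _ = np.linalg.svd(X, full_matrices=False)
        V = Ux[:, :min(rank, Ux.shape[1])]
    return ritz_vecs
```

For symmetric, well-separated test problems the iteration converges quickly; note that the restart discards everything except the eigenvector approximations, which is the point the abstract motivates.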





