
Subspace method for multiparameter eigenvalue problems based on tensor-train representations
In this paper we solve m-parameter eigenvalue problems (mEPs), with m an...

A residual concept for Krylov subspace evaluation of the φ matrix function
An efficient Krylov subspace algorithm for computing actions of the φ ma...

An accurate restarting for shift-and-invert Krylov subspaces computing matrix exponential actions of nonsymmetric matrices
An accurate residual–time (AccuRT) restarting for computing matrix expon...

Enhanced image approximation using shifted rank-1 reconstruction
Low-rank approximation has been extensively studied in the past. It is m...

Asymptotic convergence of spectral inverse iterations for stochastic eigenvalue problems
We consider and analyze applying a spectral inverse iteration algorithm ...

A parameter-dependent smoother for the multigrid method
The solution of parameter-dependent linear systems, by classical methods...

ART: adaptive residual-time restarting for Krylov subspace matrix exponential evaluations
In this paper a new restarting method for Krylov subspace matrix exponen...
Tensor-Krylov method for computing eigenvalues of parameter-dependent matrices
In this paper we extend the Residual Arnoldi method for computing an extreme eigenvalue (e.g. largest real part, dominant, ...) to the case where the matrices depend on parameters. The difference between the Residual Arnoldi method and the classical Arnoldi algorithm is that the former adds the residual to the subspace. We develop a Tensor-Krylov method that applies the Residual Arnoldi method (RA) to a grid of parameter points simultaneously. The subspace contains an approximate Krylov space for all these points. Instead of adding the residuals for all parameter values to the subspace, we form a low-rank approximation of the matrix whose columns are these residuals and add only its column space to the subspace. To keep the computations efficient, we must limit the dimension of the subspace and restart once it reaches a prescribed maximal dimension. The novelty of this approach is twofold. Firstly, we observed that a large error in the low-rank approximations can be tolerated without slowing down convergence, which means more iterations can be performed before restarting. Secondly, we pay particular attention to how the subspace is restarted, since classical restarting techniques yield too large a subspace in our case. We motivate why it suffices to keep only the approximation of the eigenvector sought. At the end of the paper we extend this algorithm to a shift-and-invert Residual Arnoldi method that computes the eigenvalue closest to a shift σ for a specific parameter dependency. We provide theoretical results and report numerical experiments. The Matlab code is publicly available.
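The iteration outlined in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions (symmetric matrices, a plain SVD truncation of the residual block, and a restart that keeps only the Ritz vectors); the function name, rank parameter, and restart policy are illustrative, not the authors' implementation:

```python
import numpy as np

def tensor_krylov_sketch(As, n, max_dim=20, rank=2, tol=1e-8, max_iters=400):
    """Illustrative Tensor-Krylov / Residual Arnoldi sketch.

    As : list of n-by-n matrices A(p_j), one per parameter grid point.
    Seeks, for every grid point, the eigenvalue of largest real part
    using ONE shared subspace V.
    """
    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((n, 1)))   # shared subspace
    for _ in range(max_iters):
        ritz_vals, ritz_vecs, residuals = [], [], []
        for A in As:
            # Ritz extraction for this parameter value from the shared V.
            H = V.T @ (A @ V)
            w, Y = np.linalg.eig(H)
            i = np.argmax(w.real)                      # target eigenvalue
            x = V @ Y[:, i].real
            x /= np.linalg.norm(x)
            ritz_vals.append(w[i].real)
            ritz_vecs.append(x)
            residuals.append(A @ x - w[i].real * x)    # residual vector
        R = np.column_stack(residuals)                 # residual block
        if np.max(np.linalg.norm(R, axis=0)) < tol:
            return np.array(ritz_vals), np.column_stack(ritz_vecs)
        # Low-rank approximation of R: add only its dominant column space.
        U, _, _ = np.linalg.svd(R, full_matrices=False)
        W = U[:, :rank]
        if V.shape[1] + rank > max_dim:
            # Restart: keep only the current eigenvector approximations.
            V, _ = np.linalg.qr(np.column_stack(ritz_vecs))
        else:
            W = W - V @ (V.T @ W)                      # orthogonalize vs V
            V, _ = np.linalg.qr(np.column_stack([V, W]))
    return np.array(ritz_vals), np.column_stack(ritz_vecs)
```

Note that the truncation rank can be much smaller than the number of grid points, because residuals at nearby parameter values are strongly correlated; this is the point the abstract makes about tolerating large low-rank approximation errors.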