Stochastic collocation method for computing eigenspaces of parameter-dependent operators

We consider computing eigenspaces of an elliptic self-adjoint operator depending on a countable number of parameters in an affine fashion. The eigenspaces of interest are assumed to be isolated in the sense that the corresponding eigenvalues are separated from the rest of the spectrum for all values of the parameters. We show that such eigenspaces can in fact be extended to complex-analytic functions of the parameters and quantify this analytic dependence in a way that leads to convergence of sparse polynomial approximations. A stochastic collocation method on an anisotropic sparse grid in the parameter domain is proposed for computing a basis for the eigenspace of interest.






1 Introduction

Multiparametric eigenvalue problems, i.e., eigenvalue problems of operators that depend on a large number of real parameters, arise in a variety of contexts. Examples include optimizing the spectrum of a structure that depends on a number of design parameters, as well as uncertainty quantification of engineering systems with uncertain data. Recent literature has considered examples of mechanical vibration problems, where a parametrization of the uncertainties in either the physical coefficients or the geometry of the system results in a multiparametric eigenvalue problem, see e.g. [18, 14, 8, 17, 7, 10].

In recent years several numerical methods have been suggested for solving multiparametric eigenvalue problems. The focus has been on spectral methods, which are based on polynomial approximations of the solution in the parameter domain and which have been shown to exhibit superior convergence rates compared to traditional Monte Carlo methods [19, 15, 3, 4]. These methods typically take the form of stochastic collocation schemes or of matrix iterations that rely on a stochastic Galerkin approximation of the solution. A benchmark for the first class of methods is the sparse anisotropic collocation algorithm analyzed by Andreev and Schwab in [1]. In the latter class many different variants have been proposed over the years [18, 14, 6, 9]. Quite recently, low-rank methods have also been introduced [16, 2, 5].

By nature, the spectral methods considered above rely on the assumption that the solution is smooth with respect to the input parameters. More precisely, these methods exhibit optimal rates of convergence only if the eigenpair of interest depends complex-analytically on the vector of parameters. This analytic dependence has been established for nondegenerate eigenvalues and associated eigenvectors in [1]. For such eigenpairs we therefore have optimal rates of convergence for stochastic collocation algorithms, see [1] for details, and optimal asymptotic rates of convergence for the iterative Galerkin based algorithms considered in [9]. However, these results do not apply to cases where the eigenvalues are of higher multiplicity or where they are allowed to cross within the parameter space. As noted in e.g. [10], many interesting engineering applications admit eigenvalues that are clustered close together, and therefore the aforementioned eigenvalue crossings may not be avoided when these problems are cast into the parameter-dependent setting.

In this work we consider eigenspaces of an elliptic self-adjoint operator that depends on a countable number of parameters in an affine fashion. We extend the results in [1] on analyticity to cover eigenspaces associated with possibly clustered eigenvalues. The underlying assumption is that the eigenspace of interest is isolated in the sense that the corresponding eigenvalues are separated from the rest of the spectrum for all values of the input parameters. We show that the spectral projection operator associated with such an isolated eigenspace can in fact be extended to a complex-analytic function of the input parameters. This allows us to construct a well-defined and smooth basis for the eigenspace of interest and show that optimal convergence rates hold when the basis vectors are approximated using a conveniently chosen set of orthogonal polynomials. We consider the stochastic collocation method defined on an anisotropic sparse grid in the parameter domain, similar to the one in [1], for computing a basis for the eigenspace of interest.

2 Problem formulation

We consider a class of self-adjoint operators that depend on a countable number of real parameters in an affine fashion. This affine dependence is often of independent interest but may also result from first order approximation of more general smooth dependence. In particular, the commonly used model problem for a stochastic diffusion operator falls within our framework.

2.1 Multiparametric variational eigenvalue problems

Let $V$ and $H$ be separable Hilbert spaces over $\mathbb{R}$ and denote the associated inner products by $(\cdot, \cdot)_V$ and $(\cdot, \cdot)_H$ and norms by $\|\cdot\|_V$ and $\|\cdot\|_H$. Assume that $V$ and $H$ form the so-called Gel'fand triple $V \subset H \subset V'$ with dense and compact embeddings. We denote by $\mathcal{L}(V, V')$ the space of bounded linear operators from $V$ to its dual $V'$. Furthermore, we denote by $\langle \cdot, \cdot \rangle$ the duality pairing on $V'$ and $V$, which may be interpreted as an extension of the inner product $(\cdot, \cdot)_H$.

For each $m \in \mathbb{N}_0$ let $b_m \colon V \times V \to \mathbb{R}$ be a symmetric and continuous bilinear form, which we can associate with an operator $B_m \in \mathcal{L}(V, V')$ using

$$\langle B_m u, v \rangle = b_m(u, v), \quad u, v \in V.$$

Suppose that there exists $c_0 > 0$ such that

$$b_0(v, v) \ge c_0 \|v\|_V^2 \quad \text{for all } v \in V, \tag{1}$$

and a sequence $(\kappa_m)_{m \ge 1}$ of positive real numbers such that $\kappa := \sum_{m \ge 1} \kappa_m < c_0$ and

$$|b_m(u, v)| \le \kappa_m \|u\|_V \|v\|_V \quad \text{for all } u, v \in V. \tag{2}$$

We define a multiparametric bilinear form

$$b(\mathbf{y}; u, v) := b_0(u, v) + \sum_{m \ge 1} y_m b_m(u, v), \tag{3}$$

where $\mathbf{y} = (y_1, y_2, \ldots)$ is a vector of parameters, each of which takes values in a closed interval of $\mathbb{R}$. Without loss of generality we may assume a scaling such that $\mathbf{y} \in \Gamma := [-1, 1]^{\infty}$. We associate the form (3) with an operator $B(\mathbf{y}) \in \mathcal{L}(V, V')$ given by

$$\langle B(\mathbf{y}) u, v \rangle = b(\mathbf{y}; u, v), \quad u, v \in V. \tag{4}$$

Remark 1.

The ellipticity condition (1) could be weakened by assuming that

$$b_0(v, v) \ge c_0 \|v\|_V^2 - \lambda_0 \|v\|_H^2 \quad \text{for all } v \in V$$

for some $c_0 > 0$ and $\lambda_0 \ge 0$. This can be reduced to the elliptic case using a standard shift procedure.

The assumptions above imply that $b(\mathbf{y}; \cdot, \cdot)$ is uniformly bounded and uniformly elliptic, i.e.,

$$|b(\mathbf{y}; u, v)| \le C \|u\|_V \|v\|_V \quad \text{and} \quad b(\mathbf{y}; v, v) \ge c \|v\|_V^2 \quad \text{for all } u, v \in V \text{ and } \mathbf{y} \in \Gamma,$$

for some $C \ge c > 0$; in particular we may take $c = c_0 - \kappa$. Consider the following multiparametric eigenvalue problem: find $\lambda(\mathbf{y}) \in \mathbb{R}$ and $u(\mathbf{y}) \in V \setminus \{0\}$ such that

$$B(\mathbf{y}) u(\mathbf{y}) = \lambda(\mathbf{y}) u(\mathbf{y}) \tag{5}$$

or in variational form

$$b(\mathbf{y}; u(\mathbf{y}), v) = \lambda(\mathbf{y}) \big(u(\mathbf{y}), v\big)_H \quad \text{for all } v \in V. \tag{6}$$
The Lax–Milgram lemma guarantees that for any $\mathbf{y} \in \Gamma$ the operator $B(\mathbf{y})$ is boundedly invertible and its inverse is compact due to the compact embedding $V \subset H$. Therefore, the problem (5) admits a countable number of real eigenvalues of finite multiplicity and associated eigenfunctions that form an orthogonal basis of $H$.


Remark 2.

A commonly used model problem is the stochastic diffusion eigenvalue problem on a bounded domain $D \subset \mathbb{R}^d$:

$$-\nabla \cdot \big(a(\mathbf{x}, \mathbf{y}) \nabla u(\mathbf{x}, \mathbf{y})\big) = \lambda(\mathbf{y}) u(\mathbf{x}, \mathbf{y}) \ \text{in } D, \qquad u(\cdot, \mathbf{y}) = 0 \ \text{on } \partial D, \tag{7}$$

where the diffusion coefficient is a random field expressed in its Karhunen–Loève expansion

$$a(\mathbf{x}, \mathbf{y}) = a_0(\mathbf{x}) + \sum_{m \ge 1} y_m a_m(\mathbf{x}). \tag{8}$$

Indeed (if $a$ is nice enough) the variational formulation of (7) is given by (6) with the choice $V = H_0^1(D)$, $H = L^2(D)$, and

$$b_m(u, v) = \int_D a_m \nabla u \cdot \nabla v \, \mathrm{d}\mathbf{x}, \quad m \in \mathbb{N}_0.$$

It is now easy to see that the inequalities (1) and (2) are satisfied if $a_0$ and $(a_m)_{m \ge 1}$ are such that $\sum_{m \ge 1} \|a_m\|_{L^{\infty}(D)} < \operatorname{ess\,inf}_{\mathbf{x} \in D} a_0(\mathbf{x})$.
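As a concrete illustration of the affine structure in this model problem, the following sketch assembles a one-dimensional finite-difference analogue of the diffusion eigenvalue problem; the two-term coefficient $a(x, y) = a_0(x) + y_1 a_1(x)$, the choices $a_0 = 1$ and $a_1 = 0.2 \sin(\pi x)$, and the grid size are hypothetical assumptions made for the example only.

```python
# Sketch: 1-D finite-difference analogue of the affine-parametric diffusion
# eigenvalue problem. The coefficient a(x, y) = a0(x) + y1 * a1(x) is a
# hypothetical two-term example; a0, a1 and the grid size are assumptions.
import numpy as np

n = 100                       # number of interior grid points on (0, 1)
h = 1.0 / (n + 1)

def stiffness(coef):
    """Finite-difference matrix for v -> -(coef v')' with Dirichlet BCs,
    using the coefficient evaluated at the midpoints x_{i+1/2}."""
    a = coef(np.linspace(h / 2, 1 - h / 2, n + 1))
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

A0 = stiffness(lambda x: np.ones_like(x))          # a0(x) = 1
A1 = stiffness(lambda x: 0.2 * np.sin(np.pi * x))  # a1(x) = 0.2 sin(pi x)

def smallest_eigenpairs(y1, k=3):
    """Smallest k eigenpairs of the parametric matrix A(y) = A0 + y1 * A1."""
    w, V = np.linalg.eigh(A0 + y1 * A1)
    return w[:k], V[:, :k]

w, V = smallest_eigenpairs(0.5)
```

Since $\|a_1\|_{L^\infty} = 0.2 < 1 = \operatorname{ess\,inf} a_0$, the uniform ellipticity condition holds for every $y_1 \in [-1, 1]$, and the computed eigenvalues remain positive and well ordered.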

We will assume an increasing enumeration of the eigenvalues so that

$$\lambda_1(\mathbf{y}) \le \lambda_2(\mathbf{y}) \le \lambda_3(\mathbf{y}) \le \cdots,$$

where each eigenvalue may be listed several times according to its multiplicity. We denote by $\{u_k(\mathbf{y})\}_{k \in \mathbb{N}}$ a set of associated eigenfunctions which are orthonormal in $H$ for every $\mathbf{y} \in \Gamma$. Ultimately we would like to compute any given subset of the eigenpairs of problem (5). However, due to possible eigenvalue crossings, this may sometimes be an extremely difficult task to perform computationally, see e.g. [10, 9]. Therefore, we will work under the assumption that the eigenspace of interest is isolated, i.e., the associated eigenvalues are strictly separated from the rest of the spectrum.

2.2 Isolated eigenspaces

Let $S \subset \mathbb{N}$ be finite and denote its cardinality by $n = \# S$. For $\mathbf{y} \in \Gamma$ let

$$\Lambda_S(\mathbf{y}) := \{\lambda_k(\mathbf{y}) : k \in S\}$$

denote a set of eigenvalues of the problem (5) and

$$\mathcal{E}_S(\mathbf{y}) := \operatorname{span}\{u_k(\mathbf{y}) : k \in S\}$$

the associated eigenspace. We use a shorthand notation $\mathcal{E}_k := \mathcal{E}_{\{k\}}$ for the eigenspace with $S = \{k\}$. We call an eigenspace isolated with parameter $\delta > 0$ (or simply just isolated) if

$$\operatorname{dist}\big(\Lambda_S(\mathbf{y}), \sigma(B(\mathbf{y})) \setminus \Lambda_S(\mathbf{y})\big) \ge \delta \quad \text{for all } \mathbf{y} \in \Gamma.$$

A set of functions $\{w_k(\mathbf{y})\}_{k=1}^n \subset V$ is called a basis of $\mathcal{E}_S$ if

$$\operatorname{span}\{w_k(\mathbf{y})\}_{k=1}^n = \mathcal{E}_S(\mathbf{y}) \quad \text{for all } \mathbf{y} \in \Gamma.$$

Moreover, this basis is called orthonormal if $\{w_k(\mathbf{y})\}_{k=1}^n$ is orthonormal in $H$ for every $\mathbf{y} \in \Gamma$. In the context of this paper we are interested in computing a basis for a given isolated eigenspace $\mathcal{E}_S$. We aim to demonstrate that, though the set of eigenvectors $\{u_k(\mathbf{y})\}_{k \in S}$ clearly is an orthonormal basis of $\mathcal{E}_S$, it may not always be computationally the most accessible one.

Remark 3.

Note that even if the eigenspace $\mathcal{E}_S$ is isolated, double eigenvalues or eigenvalue crossings may still exist within the set $\Lambda_S$. In other words, we might have $\lambda_j(\mathbf{y}) = \lambda_k(\mathbf{y})$ for some distinct $j, k \in S$ and $\mathbf{y} \in \Gamma$.

The following is an adaptation of the classical theorem by Weyl.

Proposition 1.

Under assumptions (1) and (2) the eigenvalues of the problem (5) satisfy

$$\Big(1 + c^{-1} \eta(\mathbf{y}, \mathbf{y}')\Big)^{-1} \lambda_k(\mathbf{y}) \le \lambda_k(\mathbf{y}') \le \Big(1 + c^{-1} \eta(\mathbf{y}, \mathbf{y}')\Big) \lambda_k(\mathbf{y}), \qquad \eta(\mathbf{y}, \mathbf{y}') := \sum_{m \ge 1} \kappa_m |y_m - y_m'|,$$

for all $k \in \mathbb{N}$ and $\mathbf{y}, \mathbf{y}' \in \Gamma$.

Recall the min-max characterization of eigenvalues. For $k \in \mathbb{N}$ let $\mathcal{V}_k$ denote the set of all subspaces of $V$ with dimension equal to $k$. Given a subspace $W \in \mathcal{V}_k$ we set $\lambda(\mathbf{y}; W) := \max_{v \in W \setminus \{0\}} b(\mathbf{y}; v, v) / \|v\|_H^2$. For some $W^* \in \mathcal{V}_k$ attaining the minimum in $\lambda_k(\mathbf{y}) = \min_{W \in \mathcal{V}_k} \lambda(\mathbf{y}; W)$ we now have

$$\lambda_k(\mathbf{y}') \le \lambda(\mathbf{y}'; W^*) \le \lambda(\mathbf{y}; W^*) + \eta(\mathbf{y}, \mathbf{y}') \max_{v \in W^* \setminus \{0\}} \frac{\|v\|_V^2}{\|v\|_H^2}.$$

It follows that

$$\lambda_k(\mathbf{y}') \le \Big(1 + c^{-1} \eta(\mathbf{y}, \mathbf{y}')\Big) \lambda_k(\mathbf{y}),$$

since $\|v\|_V^2 \le c^{-1} b(\mathbf{y}; v, v) \le c^{-1} \lambda(\mathbf{y}; W^*) \|v\|_H^2$ for all $v \in W^*$. Similarly for some $W^{**} \in \mathcal{V}_k$ attaining the minimum at $\mathbf{y}'$ we have

$$\lambda_k(\mathbf{y}) \le \lambda(\mathbf{y}; W^{**}) \le \Big(1 + c^{-1} \eta(\mathbf{y}, \mathbf{y}')\Big) \lambda_k(\mathbf{y}'),$$

so that the lower bound follows as well. ∎
As a corollary we obtain sufficient criteria for an eigenspace to be isolated. For simplicity we state these only in the case of an eigenspace $\mathcal{E}_k$ with $S = \{k\}$.

Corollary 1.

Assume (1) and (2). Given $k \in \mathbb{N}$ let

$$\delta_k := \min\Big\{\rho^{-1} \lambda_k(\mathbf{0}) - \rho \lambda_{k-1}(\mathbf{0}),\ \rho^{-1} \lambda_{k+1}(\mathbf{0}) - \rho \lambda_k(\mathbf{0})\Big\}, \qquad \rho := 1 + c^{-1} \sum_{m \ge 1} \kappa_m,$$

with the convention $\lambda_0 := 0$. If $\delta_k > 0$, then the eigenspace $\mathcal{E}_k$ of the problem (5) is isolated with parameter $\delta_k$.

Clearly $\eta(\mathbf{y}, \mathbf{0}) \le \sum_{m \ge 1} \kappa_m$ for all $\mathbf{y} \in \Gamma$. By Proposition 1 we have

$$\lambda_k(\mathbf{y}) - \lambda_{k-1}(\mathbf{y}) \ge \rho^{-1} \lambda_k(\mathbf{0}) - \rho \lambda_{k-1}(\mathbf{0}) \ge \delta_k \quad \text{and} \quad \lambda_{k+1}(\mathbf{y}) - \lambda_k(\mathbf{y}) \ge \rho^{-1} \lambda_{k+1}(\mathbf{0}) - \rho \lambda_k(\mathbf{0}) \ge \delta_k$$

for all $\mathbf{y} \in \Gamma$. ∎

2.3 Canonical bases

Given a set $S \subset \mathbb{N}$ with cardinality $n = \# S$, we define a canonical basis $\{\hat{u}_k(\mathbf{y})\}_{k=1}^n$ for the eigenspace $\mathcal{E}_S$ by setting

$$\hat{u}_k(\mathbf{y}) := \sum_{j \in S} \big(u_{s_k}(\mathbf{y}_0), u_j(\mathbf{y})\big)_H\, u_j(\mathbf{y}), \quad k = 1, \ldots, n,$$

where $\mathbf{y}_0 \in \Gamma$ is a fixed reference point. Here $s_k$ denotes the $k$th element in any fixed permutation of $S$. Observe that the canonical basis vectors now only depend on the eigenspace $\mathcal{E}_S(\mathbf{y})$ and not on the choice of the individual eigenvectors $u_j(\mathbf{y})$. Moreover, if the matrix $M(\mathbf{y})$ with entries $M_{jk}(\mathbf{y}) = (u_{s_j}(\mathbf{y}_0), u_{s_k}(\mathbf{y}))_H$ is nonsingular, then $\{\hat{u}_k(\mathbf{y})\}_{k=1}^n$ is in fact a basis for $\mathcal{E}_S(\mathbf{y})$. Note that this basis need not be orthonormal for $\mathbf{y} \ne \mathbf{y}_0$, and that the inverse of the smallest singular value of the Gram matrix $G(\mathbf{y})$, $G_{jk}(\mathbf{y}) = (\hat{u}_j(\mathbf{y}), \hat{u}_k(\mathbf{y}))_H$, defines the condition number of the basis; it is uniformly bounded away from infinity due to the spectral separation assumption, as given by the standard results for the convergence radii of the perturbation expansions of spectral projections in [13].
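In finite dimensions the construction above amounts to projecting fixed reference vectors onto the current eigenspace. The following sketch, with hypothetical dimensions and a seeded random subspace standing in for the eigenspace, computes the canonical basis, its Gram matrix, and the smallest singular value whose inverse plays the role of the condition number:

```python
# Finite-dimensional sketch of the canonical basis; the ambient dimension,
# the eigenspace and the reference vectors are hypothetical seeded choices.
import numpy as np

rng = np.random.default_rng(1)
# orthonormal basis U of a 3-dimensional "eigenspace" in R^8 at some y
U, _ = np.linalg.qr(rng.standard_normal((8, 3)))
# orthonormal reference vectors, e.g. eigenvectors at the reference point y0
V, _ = np.linalg.qr(U + 0.1 * rng.standard_normal((8, 3)))

# canonical basis: orthogonal projection of the references onto span(U);
# this depends only on span(U), not on the particular orthonormal basis U
W = U @ (U.T @ V)

# Gram matrix of the canonical basis; 1 / s_min is its condition number
G = W.T @ W
s_min = np.linalg.svd(G, compute_uv=False)[-1]
```

The singular values of $G$ are at most one (they are squared cosines of principal angles between the two subspaces), so the basis is well conditioned exactly when the reference vectors are not nearly orthogonal to the current eigenspace.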

3 Analyticity of isolated eigenspaces

Next we will prove that any isolated eigenspace is in fact analytic with respect to the parameter vector in a suitable sense. To this end we extend our analysis to complex-valued arguments: in this section we assume that $V$ and $H$ are separable Hilbert spaces over $\mathbb{C}$ and extend the inner products $(\cdot, \cdot)_V$ and $(\cdot, \cdot)_H$ as well as the duality pairing $\langle \cdot, \cdot \rangle$ for complex-valued arguments sesquilinearly. Now (4) can be treated as the restriction to $\Gamma$ of the operator-valued function

$$\mathbf{z} \mapsto B(\mathbf{z}) := B_0 + \sum_{m \ge 1} z_m B_m, \quad \mathbf{z} \in \mathbb{C}^{\infty}.$$

We equip $\mathbb{C}^{\infty}$ with the Hausdorff topology so that this fits the framework of [11].

3.1 Riesz spectral projection

For fixed $\mathbf{z} \in \mathbb{C}^{\infty}$ let $\gamma$ be a closed curve in the complex plane, which encloses a set of eigenvalues of $B(\mathbf{z})$, denoted by $\Lambda_S(\mathbf{z})$, but no other elements in the spectrum of $B(\mathbf{z})$. We define the spectral projection

$$P_S(\mathbf{z}) := \frac{1}{2 \pi i} \oint_{\gamma} \big(\zeta - B(\mathbf{z})\big)^{-1} \, \mathrm{d}\zeta.$$

We call the mapping $\mathbf{z} \mapsto P_S(\mathbf{z})$ analytic if it is weakly analytic, i.e., if $\mathbf{z} \mapsto \langle P_S(\mathbf{z}) u, v \rangle$ is analytic for all $u, v \in V$. Note that $\mathbf{z} \mapsto \langle P_S(\mathbf{z}) u, v \rangle$ is a standard complex function of a complex variable.

The canonical basis from section 2.3 can now be expressed as

$$\hat{u}_k(\mathbf{y}) = P_S(\mathbf{y})\, u_{s_k}(\mathbf{y}_0), \quad k = 1, \ldots, n.$$
3.2 Analyticity in one parameter

We start with eigenspaces of an operator depending on a single parameter $y \in [-1, 1]$. In other words we consider the eigenvalues of (5) when

$$b(y; u, v) = b_0(u, v) + y\, b_1(u, v). \tag{9}$$

Here (9) should be understood as the restriction to $[-1, 1]$ of the operator-valued function

$$z \mapsto B(z) := B_0 + z B_1, \quad z \in \mathbb{C}.$$

The assumptions (1) and (2) now imply

$$b(y; v, v) \ge c \|v\|_V^2 \quad \text{for all } v \in V \text{ and } y \in [-1, 1], \tag{10}$$

and

$$|b_1(u, v)| \le \kappa \|u\|_V \|v\|_V \quad \text{for all } u, v \in V, \tag{11}$$

for some $c > 0$ and $\kappa > 0$. We obtain the following result.

Proposition 2.

Consider the problem (5) with $b = b(y; \cdot, \cdot)$ for $y \in [-1, 1]$, i.e., $b$ is of the form (9) and satisfies (10) and (11). Given a finite $S \subset \mathbb{N}$ assume that the eigenspace $\mathcal{E}_S$ is isolated with parameter $\delta > 0$ for $y \in [-1, 1]$. Then it admits a complex-analytic extension to the region

$$\Sigma(\varrho \delta) := \{z \in \mathbb{C} : \operatorname{dist}(z, [-1, 1]) < \varrho \delta\},$$

where $\varrho > 0$ depends only on the constants in (10) and (11). Moreover, for every $z \in \Sigma(\varrho \delta)$ the spectrum of $B(z)$ is separated into two parts $\sigma_S(z)$ and $\sigma(B(z)) \setminus \sigma_S(z)$ such that $\sigma_S(y) = \Lambda_S(y)$ for $y \in [-1, 1]$.


Assume first that $S$ is a set of consecutive natural numbers. Let $y \in [-1, 1]$ and denote $\underline{\lambda}(y) := \min \Lambda_S(y)$ and $\overline{\lambda}(y) := \max \Lambda_S(y)$. Let $\gamma_y$ be the positively oriented circle of radius

$$r(y) := \tfrac{1}{2}\big(\overline{\lambda}(y) - \underline{\lambda}(y) + \delta\big)$$

centered at

$$c(y) := \tfrac{1}{2}\big(\underline{\lambda}(y) + \overline{\lambda}(y)\big).$$

Then $\gamma_y$ encloses $\Lambda_S(y)$ but no elements of $\sigma(B(y)) \setminus \Lambda_S(y)$. Moreover, for every $\zeta \in \gamma_y$ we have

$$\operatorname{dist}\big(\zeta, \sigma(B(y))\big) \ge \delta / 2.$$

Due to (10) and (11) we have

$$\|B(z) - B(y)\|_{\mathcal{L}(V, V')} \le \kappa |z - y|$$

so that

$$\big\|\big(\zeta - B(z)\big)^{-1}\big\| \le C_{\delta} < \infty$$

for all $\zeta \in \gamma_y$ and all $z$ sufficiently close to $y$. By Remark VII.2.9 in [13] there exists $r_0 > 0$ such that for all $z \in \mathbb{C}$ with $|z - y| < r_0$ the spectrum of $B(z)$ is separated into two parts $\sigma_S(z)$ and $\sigma(B(z)) \setminus \sigma_S(z)$ by the curve $\gamma_y$. Moreover, for such values of $z$ the spectral projection $P_S(z)$ is complex-analytic. In fact we may set the above resolvent and perturbation bounds in the definition of $r_0$ and obtain $r_0 \ge \varrho \delta$ with $\varrho > 0$ independent of $y$. Since $y \in [-1, 1]$ was arbitrary we conclude that $P_S$ is complex-analytic in $\Sigma(\varrho \delta)$.

An arbitrary finite $S \subset \mathbb{N}$ may always be partitioned in such a way that each part is a set of consecutive natural numbers. The previous proof applies to each part separately, and thus the spectrum of $B(z)$ is separated for all $z \in \Sigma(\varrho \delta)$ and the total projection, the sum of the projections associated with the parts, is complex-analytic in $\Sigma(\varrho \delta)$. ∎

3.3 Analyticity in a countable number of parameters

We start with a simple lemma that can be deduced from standard perturbation theory for analytic operators, see Chapter VII in [13].

Lemma 1.

Let $\mathbf{z} \in \mathbb{C}^{\infty}$ and $S \subset \mathbb{N}$ be such that the spectrum of $B(\mathbf{z})$ can be separated into two parts $\sigma_S(\mathbf{z})$ and $\sigma(B(\mathbf{z})) \setminus \sigma_S(\mathbf{z})$ with $\operatorname{dist}\big(\sigma_S(\mathbf{z}), \sigma(B(\mathbf{z})) \setminus \sigma_S(\mathbf{z})\big) > 0$. For $m \in \mathbb{N}$ let $\mathbf{e}_m$ denote the $m$th unit vector in $\mathbb{C}^{\infty}$. Then there exists $\varepsilon > 0$ such that the eigenspace $w \mapsto \mathcal{E}_S(\mathbf{z} + w \mathbf{e}_m)$ is complex-analytic for all $w \in \mathbb{C}$ such that $|w| < \varepsilon$.

Suppose now that $(\kappa_m)_{m \ge 1} \in \ell^p(\mathbb{N})$ for some $0 < p < 1$. Then we have the following result.

Theorem 1.

Consider the problem (5) with assumptions (1) and (2). Assume that $(\kappa_m)_{m \ge 1} \in \ell^p(\mathbb{N})$ for some $0 < p < 1$. Given a finite $S \subset \mathbb{N}$ assume that the eigenspace $\mathcal{E}_S$ is isolated with parameter $\delta > 0$ for $\mathbf{y} \in \Gamma$. Then it admits a complex-analytic extension in the region

$$\Sigma(\boldsymbol{\tau}) := \{\mathbf{z} \in \mathbb{C}^{\infty} : \operatorname{dist}(z_m, [-1, 1]) \le \tau_m \text{ for all } m \in \mathbb{N}\}$$

for arbitrary positive radii $\boldsymbol{\tau} = (\tau_m)_{m \ge 1}$ such that $\sum_{m \ge 1} \kappa_m \tau_m \le \varrho \delta$.


Let $\mathbf{z} \in \Sigma(\boldsymbol{\tau})$ and take $\mathbf{y} \in \Gamma$ such that $|z_m - y_m| \le \tau_m$ for all $m \in \mathbb{N}$. Denote $\mathbf{w} := \mathbf{z} - \mathbf{y}$. We now have

$$B(\mathbf{y} + t \mathbf{w}) = B(\mathbf{y}) + t \sum_{m \ge 1} w_m B_m, \quad t \in \mathbb{C},$$

and

$$\sum_{m \ge 1} |w_m| \|B_m\|_{\mathcal{L}(V, V')} \le \sum_{m \ge 1} \kappa_m \tau_m \le \varrho \delta.$$

Proposition 2 now applies for the shifted operator

$$t \mapsto B(\mathbf{y} + t \mathbf{w})$$

and therefore the associated eigenspace can be extended to a function which is complex-analytic for all $t \in \mathbb{C}$ such that $\operatorname{dist}(t, [-1, 1])$ is sufficiently small. In particular the eigenspace is analytic in the vicinity of $t = 1$, i.e., in the vicinity of $\mathbf{z}$. By Lemma 1 the eigenspace is now separately complex-analytic in the vicinity of $\mathbf{z}$. Since $\mathbf{z} \in \Sigma(\boldsymbol{\tau})$ was arbitrary, we see that the eigenspace is separately complex-analytic in $\Sigma(\boldsymbol{\tau})$. Therefore, we may take Hartogs's theorem (Theorem 2.2.8 in [12]) in its extension to infinite dimensions (Definition 2.3.1, Proposition 3.1.2 and Theorem 3.1.5 in [11]) to see that the eigenspace is jointly complex-analytic in $\Sigma(\boldsymbol{\tau})$. ∎

4 Stochastic collocation for computing eigenspaces

As in [1] we introduce an anisotropic sparse grid collocation operator defined with respect to a finite multi-index set. Let $(\mathbb{N}_0^{\infty})_c$ denote the set of all multi-indices with finite support, i.e.,

$$(\mathbb{N}_0^{\infty})_c := \{\alpha \in \mathbb{N}_0^{\infty} : \# \operatorname{supp}(\alpha) < \infty\},$$

where $\operatorname{supp}(\alpha) := \{m \in \mathbb{N} : \alpha_m \ne 0\}$. Given a finite set $\mathcal{A} \subset (\mathbb{N}_0^{\infty})_c$ we define the greatest active dimension $M(\mathcal{A}) := \max \{m \in \mathbb{N} : \alpha_m \ne 0 \text{ for some } \alpha \in \mathcal{A}\}$. For $\alpha, \beta \in (\mathbb{N}_0^{\infty})_c$ we write $\beta \le \alpha$ if $\beta_m \le \alpha_m$ for all $m \in \mathbb{N}$. We call the multi-index set $\mathcal{A}$ monotone if whenever $\beta \in (\mathbb{N}_0^{\infty})_c$ is such that $\beta \le \alpha$ for some $\alpha \in \mathcal{A}$, then $\beta \in \mathcal{A}$.

Let $L_p$ be the univariate Legendre polynomial of degree $p$. Denote by $\chi_0^{(p)}, \ldots, \chi_p^{(p)}$ the zeros of $L_{p+1}$. We define the one-dimensional Lagrange interpolation operators $\mathcal{I}_p$ via

$$(\mathcal{I}_p v)(y) := \sum_{k=0}^{p} v\big(\chi_k^{(p)}\big)\, \ell_k^{(p)}(y),$$

where $\ell_0^{(p)}, \ldots, \ell_p^{(p)}$ are the related Lagrange basis polynomials of degree $p$. Given a finite set $\mathcal{A} \subset (\mathbb{N}_0^{\infty})_c$ we may now define the sparse collocation operator as

$$\mathcal{I}_{\mathcal{A}} := \sum_{\alpha \in \mathcal{A}} \bigotimes_{m \ge 1} \big(\mathcal{I}_{\alpha_m} - \mathcal{I}_{\alpha_m - 1}\big), \tag{12}$$

where $\mathcal{I}_{-1} := 0$.
The operator (12) may be rewritten in a computationally more convenient form

$$\mathcal{I}_{\mathcal{A}} = \sum_{\alpha \in \mathcal{A}} c_{\alpha} \bigotimes_{m \ge 1} \mathcal{I}_{\alpha_m}, \tag{13}$$

where $c_{\alpha} := \sum_{\beta \in \{0, 1\}^{\infty} :\, \alpha + \beta \in \mathcal{A}} (-1)^{|\beta|}$. We see that the complete grid of collocation points is now given by

$$X_{\mathcal{A}} := \bigcup_{\alpha \in \mathcal{A},\, c_{\alpha} \ne 0} \Big\{\big(\chi_{k_1}^{(\alpha_1)}, \chi_{k_2}^{(\alpha_2)}, \ldots\big) : 0 \le k_m \le \alpha_m\Big\}.$$

Observe that for every $\alpha \in \mathcal{A}$ we have $c_{\alpha} = 0$ when $\alpha + \beta \in \mathcal{A}$ for every $\beta \in \{0, 1\}^{\infty}$ supported on $\{1, \ldots, M(\mathcal{A})\}$. For monotone multi-index sets we have

$$\mathcal{I}_{\mathcal{A}}\, p = p \quad \text{for all } p \in \operatorname{span}\{\mathbf{y}^{\alpha} : \alpha \in \mathcal{A}\},$$

and the number of collocation points admits the bound

$$\# X_{\mathcal{A}} \le \sum_{\alpha \in \mathcal{A}} \prod_{m=1}^{M(\mathcal{A})} (\alpha_m + 1),$$

as shown in [1].
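The combination form of the sparse collocation operator translates directly into code. The sketch below is a small two-parameter example with a hypothetical monotone multi-index set; it evaluates the sparse interpolant of a function at a point using Gauss–Legendre nodes and the combination coefficients $c_\alpha$:

```python
# Sketch of the combination form of the sparse collocation operator for a
# hypothetical two-parameter monotone multi-index set.
import numpy as np
from itertools import product
from numpy.polynomial.legendre import leggauss

def gl_nodes(p):
    """The p + 1 zeros of the Legendre polynomial of degree p + 1."""
    return leggauss(p + 1)[0]

def lagrange_basis(nodes, t):
    """Values at t of the Lagrange basis polynomials for the given nodes."""
    L = np.ones(len(nodes))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j:
                L[i] *= (t - xj) / (xi - xj)
    return L

def sparse_interpolant(f, A, y):
    """Evaluate the sparse interpolant of f at the point y via the combination
    formula with c_alpha = sum_{beta in {0,1}^d, alpha + beta in A} (-1)^|beta|."""
    A = {tuple(a) for a in A}
    d = len(y)
    val = 0.0
    for alpha in A:
        c = sum((-1) ** sum(beta)
                for beta in product((0, 1), repeat=d)
                if tuple(a + b for a, b in zip(alpha, beta)) in A)
        if c == 0:
            continue  # this tensor grid does not contribute
        nodes = [gl_nodes(a) for a in alpha]
        basis = [lagrange_basis(n, t) for n, t in zip(nodes, y)]
        # full tensor-product interpolation on the grid of multi-degree alpha
        for idx in product(*(range(a + 1) for a in alpha)):
            w = np.prod([basis[m][idx[m]] for m in range(d)])
            val += c * w * f([nodes[m][idx[m]] for m in range(d)])
    return val

# monotone set: the interpolant reproduces polynomials with multidegree in A
A = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1)]
f = lambda y: y[0] ** 2 + y[0] * y[1] + 3.0
```

For the monotone set above the interpolant reproduces $y_1^2 + y_1 y_2 + 3$ exactly, which is a convenient sanity check of both the coefficients $c_\alpha$ and the node indexing.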

The following convergence estimate is similar to the ones in [1] and [9].

Proposition 3.

Let $W$ be a Hilbert space. Assume that $v \colon \Gamma \to W$ admits a complex-analytic extension in the region

$$\Sigma(\boldsymbol{\tau}) = \{\mathbf{z} \in \mathbb{C}^{\infty} : \operatorname{dist}(z_m, [-1, 1]) \le \tau_m \text{ for all } m \in \mathbb{N}\},$$

where the radii $\boldsymbol{\tau} = (\tau_m)_{m \ge 1}$ are such that $(\tau_m^{-1})_{m \ge 1} \in \ell^p(\mathbb{N})$ for some $0 < p < 1$. Then for each $N \in \mathbb{N}$ there exists a monotone multi-index set $\mathcal{A}_N$ such that

$$\# X_{\mathcal{A}_N} \le N,$$

and as $N \to \infty$ we have

$$\sup_{\mathbf{y} \in \Gamma} \big\| v(\mathbf{y}) - \big(\mathcal{I}_{\mathcal{A}_N} v\big)(\mathbf{y}) \big\|_W = \mathcal{O}\big(N^{1 - 1/p}\big).$$

Suppose now that the eigenspace $\mathcal{E}_S$ of the problem (5) is isolated for some finite $S \subset \mathbb{N}$. In addition to (1) and (2) assume that $(\kappa_m)_{m \ge 1} \in \ell^p(\mathbb{N})$ for some $0 < p < 1$. Then by Theorem 1 and Proposition 3 we have the optimal convergence rate

$$\sup_{\mathbf{y} \in \Gamma} \big\| \hat{u}_k(\mathbf{y}) - \big(\mathcal{I}_{\mathcal{A}_N} \hat{u}_k\big)(\mathbf{y}) \big\|_V = \mathcal{O}\big(N^{1 - 1/p}\big), \quad k = 1, \ldots, \# S,$$

when the stochastic collocation algorithm is used to approximate the canonical basis vectors of $\mathcal{E}_S$.

Remark 4.

We can apply the Gram–Schmidt process at every collocation point in order to obtain an approximately orthonormal basis for the eigenspace $\mathcal{E}_S(\mathbf{y})$.
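A minimal sketch of this post-processing step, assuming the computed basis vectors at one collocation point are stored as the columns of a matrix; the sign convention on the QR factor makes the Gram–Schmidt output unique, which helps keep the orthonormalized basis consistent across nearby collocation points:

```python
# Sketch of the orthonormalization step; the basis matrix W is a hypothetical
# stand-in for the computed basis vectors at one collocation point.
import numpy as np

def orthonormalize(W):
    """Gram-Schmidt realized as a reduced QR factorization of the columns of
    W; fixing the signs of diag(R) makes the result unique."""
    Q, R = np.linalg.qr(W)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0
    return Q * s

# a non-orthonormal basis of a 2-dimensional subspace of R^3
W = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = orthonormalize(W)
```

The returned columns are orthonormal and span the same subspace as the input, so the approximation property of the collocated basis is unaffected.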

  • [1] R. Andreev and C. Schwab, Sparse tensor approximation of parametric eigenvalue problems, in Lecture notes in computational science and engineering, vol. 83, Springer Berlin Heidelberg, 2012, pp. 203–241.
  • [2] P. Benner, A. Onwunta, and M. Stoll, A low-rank inexact Newton–Krylov method for stochastic eigenvalue problems, Computational Methods in Applied Mathematics, 19 (2018), pp. 5–22.
  • [3] M. Bieri, R. Andreev, and C. Schwab, Sparse tensor discretization of elliptic sPDEs, SIAM Journal on Scientific Computing, 31 (2009), pp. 4281–4304.
  • [4] A. Cohen, R. DeVore, and C. Schwab, Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs, Foundations of Computational Mathematics, 10 (2010), pp. 615–646.
  • [5] H. Elman and T. Su, Low-rank solution methods for stochastic eigenvalue problems, SIAM Journal on Scientific Computing, 41 (2019), pp. A2657–A2680.
  • [6] H. Hakula, V. Kaarnioja, and M. Laaksonen, Approximate methods for stochastic eigenvalue problems, Appl. Math. Comput., 267 (2015), pp. 664–681.
  • [7] H. Hakula, V. Kaarnioja, and M. Laaksonen, Cylindrical shell with junctions: Uncertainty quantification of free vibration and frequency response analysis, Shock and Vibration, Volume 2018 (2018).
  • [8] H. Hakula and M. Laaksonen, Hybrid stochastic finite element method for mechanical vibration problems, Shock and Vibration, Volume 2015 (2015).
  • [9] H. Hakula and M. Laaksonen, Asymptotic convergence of spectral inverse iterations for stochastic eigenvalue problems, Numerische Mathematik, 142 (2019), pp. 577–609.
  • [10] H. Hakula and M. Laaksonen, Multiparametric shell eigenvalue problems, Computer Methods in Applied Mechanics and Engineering, 343 (2019), pp. 721–745.
  • [11] M. Hervé, Analyticity in Infinite Dimensional Spaces, vol. 10 of De Gruyter Studies in Mathematics, Walter De Gruyter Inc, 1989.
  • [12] L. Hörmander, An Introduction to Complex Analysis in Several Variables, The University Series in Higher Mathematics, D. Van Nostrand, 1st ed., 1966.
  • [13] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag Berlin Heidelberg, 2nd ed., 1995.
  • [14] H. Meidani and R. Ghanem, Spectral power iterations for the random eigenvalue problem, AIAA Journal, 52 (2014), pp. 912–925.
  • [15] C. Schwab and R. A. Todor, Convergence rates for sparse chaos approximations of elliptic problems with stochastic coefficients, IMA Journal of Numerical Analysis, 27 (2006), pp. 232–261.
  • [16] P. Sirković, Low-rank methods for parameter-dependent eigenvalue problems and matrix equations, PhD thesis, École Polytechnique Fédérale de Lausanne, 2016.
  • [17] B. Sousedík and H. C. Elman, Inverse subspace iteration for spectral stochastic finite element methods, SIAM / ASA Journal on Uncertainty Quantification, 4 (2016), pp. 163–189.
  • [18] C. V. Verhoosel, M. A. Gutiérrez, and S. J. Hulshoff, Iterative solution of the random eigenvalue problem with application to spectral stochastic finite element systems, International Journal for Numerical Methods in Engineering, 68 (2006), pp. 401–424.
  • [19] D. Xiu and J. S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM Journal on Scientific Computing, 27 (2005), pp. 1118–1139.