
Randomized sketching of nonlinear eigenvalue problems

11/22/2022
by   Stefan Güttel, et al.

Rational approximation is a powerful tool to obtain accurate surrogates for nonlinear functions that are easy to evaluate and linearize. The interpolatory adaptive Antoulas–Anderson (AAA) method is one approach to construct such approximants numerically. For large-scale vector- and matrix-valued functions, however, the direct application of the set-valued variant of AAA becomes inefficient. We propose and analyze a new sketching approach for such functions called sketchAAA that, with high probability, leads to much better approximants than previously suggested approaches while retaining efficiency. The sketching approach works in a black-box fashion where only evaluations of the nonlinear function at sampling points are needed. Numerical tests with nonlinear eigenvalue problems illustrate the efficacy of our approach, with speedups above 200 for sampling large-scale black-box functions without sacrificing accuracy.


1 Introduction

This work is concerned with approximating vector-valued and matrix-valued functions, with a particular focus on functions that arise in the context of nonlinear eigenvalue problems (NLEVP): find λ and a nonzero vector v such that

F(λ) v = 0,   (1)

where F: Ω → C^{n×n} is defined on some domain Ω ⊆ C.

Recent algorithmic advances have made it possible to efficiently compute an accurate rational approximant of a scalar function f on a compact (and usually discrete) target set Σ in the complex plane. Some of the available methods are the adaptive Antoulas–Anderson (AAA) algorithm [nakatsukasa2018aaa], the rational Krylov fitting (RKFIT) algorithm [berljafa2017rkfit], vector fitting [gustavsen1999rational], minimal rational interpolation [Pradovera2020], and methods based on Löwner matrices [mayo2007framework]. All of these methods can be used or adapted to approximate multiple scalar functions on the target set simultaneously. In particular, there are several variants of AAA that approximate multiple functions by a family of rational interpolants sharing the same denominator, including the set-valued AAA algorithm [lietaert2018automatic], fastAAA [hochman2017fastaaa], and weighted AAA [NGT20]. See also [EG21] for a discussion and comparison of some of these methods.

AAA-type algorithms can be used with very little user input and have enabled an almost black-box approximation of NLEVPs or transfer functions in model order reduction. However, for s scalar functions, the computation of a degree-d rational interpolant via the set-valued AAA algorithm involves the solution of d least squares problems with dense matrices of size s(|Σ| − j) × j for varying j = 1, …, d, adding up to O(s|Σ|d³) complexity. The greedy search of interpolation nodes in AAA also requires the repeated evaluation of the s rational interpolants at all sampling points in Σ and the storage of the corresponding function values. As a consequence, the main use case of the multiple-function AAA variants to date has been problems that can be written in the split form

F(z) = f_1(z) A_1 + f_2(z) A_2 + ⋯ + f_m(z) A_m,   (2)

where m is small (say, on the order of 10), the f_j are known scalar functions, and the A_j ∈ C^{n×n} are fixed coefficient matrices. While, in principle, it is always possible to write an arbitrary matrix-valued function F in split form using m = n² terms, it would be prohibitive to apply the set-valued AAA approach to large-scale problems in such a naive way.

The work [EG19] suggested an alternative approach where the original (scalar-valued) AAA algorithm is applied to a scalar surrogate f(z) = u^T F(z) v with random probing vectors u and v, resulting in a rational interpolant in barycentric form

r(z) = ( Σ_{j=1}^{d} w_j f(z_j) / (z − z_j) ) / ( Σ_{j=1}^{d} w_j / (z − z_j) )   (3)

with support points z_1, …, z_d (the interpolation nodes) and weights w_1, …, w_d. Using this representation, a rational interpolant R(z) of the original function F is then obtained by replacing in (3) every occurrence of f(z_j) by the evaluation F(z_j):

R(z) = ( Σ_{j=1}^{d} w_j F(z_j) / (z − z_j) ) / ( Σ_{j=1}^{d} w_j / (z − z_j) ).   (4)
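For concreteness, evaluating R at a point z away from the support points takes only a few lines; the following is an illustrative MATLAB sketch (our own code, storing the samples F(z_j) in a cell array), not the authors' implementation:

% Illustrative sketch (our own, not the authors' code): evaluate the
% barycentric rational interpolant R(z) from (4) at a point z that is not a
% support point. Inputs: support points zj (d-by-1), weights wj (d-by-1),
% and samples Fj with Fj{k} = F(zj(k)).
function Rz = baryeval(z, zj, wj, Fj)
    c  = wj ./ (z - zj);          % barycentric coefficients w_j / (z - z_j)
    Rz = zeros(size(Fj{1}));
    for k = 1:numel(zj)
        Rz = Rz + c(k) * Fj{k};   % numerator: sum_k c_k F(z_k)
    end
    Rz = Rz / sum(c);             % divide by the scalar denominator
end
% At a support point, R interpolates by construction: R(zj(k)) = F(zj(k)).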

The intuition is that both f and F will almost surely have the same region of analyticity, hence interpolating F using the same interpolation nodes and poles as for f should result in a good approximant. This surrogate approach indeed alleviates the complexity and memory issues even when F has a large number of terms in its split form (2), and it can also be applied if F is only available as a black-box function returning evaluations F(z). However, a comprehensive numerical comparison in [NGT20] in the context of solving NLEVPs (1) has revealed that this surrogate approach is not always reliable and may lead to poor accuracy. Indeed, the Σ-uniform error for the original problem,

max_{z∈Σ} ‖F(z) − R(z)‖,   (5)

can be significantly larger than the error max_{z∈Σ} |f(z) − r(z)| for the scalar surrogate problem. In order to mitigate this issue for black-box functions F, i.e., those not available in split form (2), a two-phase surrogate AAA algorithm with cyclic Leja–Bagby refinement has been developed in [NGT20]. While this algorithm is indeed robust in the sense that it returns a rational approximant with user-specified accuracy, it is computationally more expensive than the set-valued AAA algorithm and sometimes returns approximants of unnecessarily high degree; see, e.g., [NGT20, Table 5.2].

In this work, we propose and analyze a new sketching approach for matrix-valued functions called sketchAAA that, with high probability, leads to much better approximants than the scalar surrogate AAA approach while remaining equally efficient. While we demonstrate the benefits of the sketching approach in combination with the set-valued AAA algorithm and mainly test it on functions from the updated NLEVP benchmark collection [betcke2013nlevp, higham2019updated], the same technique can be combined with other linear interpolation techniques (including polynomial interpolation at adaptively chosen nodes). Our sketching approach is not limited to nonlinear eigenvalue problems either and can be used for the approximation of any vector-valued function. The key idea is to sketch F using a thin (structured or unstructured) random probing matrix V with ℓ columns, i.e., to compute samples of the form V^T vec(F(z)), and then to apply the set-valued AAA algorithm to the ℓ components of the samples. We provide a probabilistic assessment of the approximation quality of the resulting approximants by building on the improved small-sample bounds of the matrix Frobenius norm in [gratton2018improved].
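To make the key idea concrete, the sampling step can be sketched in a few lines of MATLAB (the toy function F and all names below are our own assumptions, not from the paper):

% A minimal, self-contained illustration of the sampling step:
% s(z) = V'*vec(F(z)) with a thin Gaussian probing matrix V.
n  = 50; ell = 4;
A0 = randn(n); A1 = randn(n); A2 = randn(n);
F  = @(z) A0 + z*A1 + exp(z)*A2;              % toy nonlinear matrix function
Z  = linspace(-1, 1, 400);                    % discrete target set Sigma
V  = randn(n^2, ell);                         % thin Gaussian probing matrix
S  = zeros(ell, numel(Z));
for i = 1:numel(Z)
    S(:, i) = V' * reshape(F(Z(i)), [], 1);   % sketch: s(z) = V^T vec(F(z))
end
% The set-valued AAA algorithm is then applied to the ell rows of S.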

The remainder of this work is organized as follows. In section 2 we briefly review the AAA algorithm [nakatsukasa2018aaa] and the surrogate approach from [EG19], and then introduce our new probing forms using either tensorized or non-tensorized probing vectors. We also provide pseudocode of the resulting sketchAAA algorithm. Section 3 is devoted to the analysis of the approximants produced by sketchAAA. Our main result for the case of non-tensorized probing, Theorem 3.2, provides an exponentially converging (in the number of samples ℓ) probabilistic upper bound on the approximation error of the sketched problem compared to the approximation of the full problem. We also provide a weaker result for tensorized probing in the case ℓ = 1, covering the original surrogate approach in [EG19]. Several numerical tests in section 4 on small to large-scale problems show that our new sketching approach is reliable and efficient. For some of the large-scale black-box problems we report speedup factors above 200 compared to the set-valued AAA algorithm while retaining comparable accuracy.

2 Surrogate functions and the AAA algorithm

The AAA algorithm [nakatsukasa2018aaa] is a relatively simple but usually very effective method for obtaining a good rational approximant of the form (3) for a given scalar function f. The approximation is sought on a finite discretization Σ of the target set; discretizations with thousands of points are not uncommon. The method iteratively selects support points (interpolation nodes) by adding, to a previously computed set of support points, a new point at which the maximum approximation error over Σ is attained. After that, the rational approximant is updated by computing weights w_1, …, w_d as the minimizer of the linearized approximation error

min { ‖L w‖_2 : ‖w‖_2 = 1 }.   (6)

Here, L is a Löwner matrix of size (|Σ| − d) × d defined as

L_{ij} = ( f(ẑ_i) − f(z_j) ) / ( ẑ_i − z_j ),   (7)

where {ẑ_1, …, ẑ_{|Σ|−d}} = Σ \ {z_1, …, z_d}. The minimization problem (6) can be solved exactly by an SVD of L at a cost of O(|Σ|d²) flops. Since the matrix L is only altered in one row and column after each iteration, updating strategies can be used to lower the computational cost of its SVD; see, e.g., [hochman2017fastaaa, lietaert2018automatic]. However, the numerical stability of such updating strategies is not guaranteed.
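This core step fits in a few lines; the following simplified MATLAB sketch (our own toy data, with the degree fixed at 3) is not the authors' implementation:

% Simplified core AAA step (scalar case): given samples fz of f on Z and the
% current support indices J, build the Loewner matrix (7) and take the right
% singular vector of the smallest singular value, which solves (6).
% Uses implicit expansion (MATLAB R2016b or later).
Z  = linspace(-1, 1, 200);
fz = exp(Z) ./ (1.2 - Z);                   % toy scalar function on Sigma
J  = [1 100 200];                           % indices of the support points
I  = setdiff(1:numel(Z), J);                % remaining sampling points
L  = (fz(I).' - fz(J)) ./ (Z(I).' - Z(J));  % Loewner matrix, (|Z|-d)-by-d
[~, ~, W] = svd(L, 'econ');
w  = W(:, end);                             % weights: ||L*w|| minimal, ||w|| = 1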

It is not difficult to extend AAA to the approximation of a vector-valued function F: Σ → C^m. Firstly, the vector-valued version of (3) maps into C^m since the samples F(z_j) are vector-valued. However, the support points and weights remain scalars. The selection of the support points can still be done greedily by, at each iteration, choosing a support point that maximizes ‖F(z) − R(z)‖ over Σ for some norm ‖·‖. In practice, the infinity or Euclidean norm usually works well, but more care is sometimes needed when the components of F are on different scales; see [lietaert2018automatic, NGT20]. Likewise, the weights are computed from a block-Löwner matrix in which each entry in (7) is now a column vector of length m formed with the components of F. This matrix is of size m(|Σ| − d) × d, increasing the cost of its SVD to O(m|Σ|d²) flops. When computing these SVDs for degrees 1, 2, …, d as is required by AAA, the cumulative cost is O(m|Σ|d³). For large m, this becomes prohibitive.

As explained in the introduction, a way to lower the computational cost for a vector-valued function is to work with a scalar surrogate function that hopefully shares the same region of analyticity as F. In [EG19] this function was chosen with a tensorized probing vector:

f(z) = (v ⊗ u)^T vec(F(z)) = u^T F(z) v,  with random vectors u, v ∈ C^n.   (8)

The reason for this construction with a tensor product is that [EG19] focused on nonlinear eigenvalue problems, where F(z) ∈ C^{n×n}. This allows for the efficient evaluation u^T (F(z) v), which becomes particularly advantageous when fast matrix-vector products with F(z) are available. In the case that F is in the split form (2), only one matrix-vector product with each A_j is needed. A similar surrogate can be obtained for general F by using a full (non-tensorized) probing vector:

f(z) = w^T vec(F(z)),  with a random vector w ∈ C^{n²}.   (9)

For both surrogate constructions, we apply AAA to the scalar function (8) or (9) and use the computed support points and weights to define an approximant of F as in (3), replacing the scalar function values f(z_j) by the vector-valued samples F(z_j); cf. (4). Since the surrogate functions are scalar-valued, the computational burden is clearly much lower than applying the set-valued AAA method to the vector-valued function F.
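In code, the two surrogates are one line each; the following toy sketch (the example function and names are ours) highlights that (8) needs only a single matrix-vector product with F(z):

% The two scalar surrogates side by side (toy example).
n  = 50;
A1 = randn(n); A2 = randn(n);
F  = @(z) A1 + exp(z)*A2;                  % toy matrix-valued function
u  = randn(n, 1); v = randn(n, 1); p = randn(n^2, 1);
f_tens = @(z) u' * (F(z) * v);             % (8): u^T F(z) v, one matvec with F(z)
f_full = @(z) p' * reshape(F(z), [], 1);   % (9): p^T vec(F(z))
% AAA is applied to f_tens or f_full; the computed support points z_j and
% weights w_j are then reused in (4) with the stored samples F(z_j).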

While computationally very attractive, the approach of building a scalar surrogate unfortunately does not always result in accurate approximants. To illustrate this, we consider the matrix-valued function F of the buckling_plate example from [higham2019updated]. In Figure 1 the errors for both surrogates (8) and (9) are shown in blue. While AAA fully resolves each surrogate to high absolute accuracy, the absolute errors of the corresponding matrix-valued approximants of F stagnate at a much larger level.

Figure 1: Approximation errors of the rational interpolant for the buckling_plate example as functions of the degree d. The error of the surrogate functions (ℓ = 1 or ℓ = 2 probing vectors, tensorized or not) is shown in the left panel; the right panel shows the error of the full interpolants. While the errors of the surrogates all converge to machine precision (left), the error of the full interpolants stagnates when scalar surrogates (ℓ = 1) are used (right). With ℓ = 2 probing vectors the surrogate and full approximants converge similarly. On the right, the error of the set-valued AAA approximant applied to the nine components of the original function F is indicated with dark grey dots.

An important observation put forward in this paper is that by taking multiple random probing vectors (and thereby a vector-valued surrogate function s with ℓ components), the approximation error obtained with the set-valued AAA method can be improved, sometimes dramatically. In particular, we consider

s(z) = V^T vec(F(z)),  with a random probing matrix V ∈ C^{n²×ℓ},   (10)

and the tensorized version

s_k(z) = u_k^T F(z) v_k,  k = 1, …, ℓ,  with random vectors u_k, v_k ∈ C^n.   (11)

Both probing variants can be sped up if F (and hence s) is available in the split form (2) by precomputing the products of the random vectors with the matrices A_j. The tensorized variant is computationally attractive when matrix-vector products with F(z) can be computed efficiently, since s_k(z) = u_k^T (F(z) v_k); moreover, v_k ⊗ u_k is the vectorization of the rank-one matrix u_k v_k^T, so (11) is (10) with the probing matrix V = [v_1 ⊗ u_1, …, v_ℓ ⊗ u_ℓ]. In both cases we obtain a vector-valued function s with ℓ components to which the set-valued AAA method can readily be applied. The resulting algorithm, sketchAAA, is summarized in Algorithm 1.

Nonlinear function F: Σ → C^{n×n}. Number of probing vectors ℓ. Degree of approximation d.
1. Draw a random probing matrix V ∈ C^{n²×ℓ} (full or tensorized).
2. Define (and possibly precompute on Σ) the surrogate s(z) = V^T vec(F(z)).
3. Approximate s on Σ using the set-valued AAA method, resulting in a rational function in barycentric form with weights w_j and support points z_j.
4. Return the rational approximant R of F in the form (4).
Algorithm 1 AAA with random sketching (sketchAAA)

Sometimes using as few as ℓ = 2 probing vectors leads to very satisfactory results. This is indeed the case for our example in Figure 1, where the approximation errors of both surrogates (10) and (11) are shown in orange. Both approximants converge rapidly to a small absolute error. Remarkably, the approximation error of the surrogates is essentially identical to the error of the set-valued AAA approximant applied to the original function F, which maps to C^{3×3}; see Figure 1 (right).

Since the surrogates are random, the resulting rational approximants are also random. Fortunately, their approximation errors concentrate quickly. This is clearly visible in Figure 2 for the buckling_plate and nep2 examples from the NLEVP collection, which shows that the 50-percentile bands are very narrow. As a result, the random construction of the surrogates yields rational approximants that, with high probability, all have very similar approximation errors. Figure 2 also demonstrates that two probing vectors do not always suffice: for the nep2 problem, ℓ = 4 probing vectors are required for a relative approximation error close to machine precision.

Many more such examples are discussed in section 4 and in almost all cases a relatively small number of probing vectors suffices to obtain accurate approximations. The next section provides some theoretical insights into this.

Figure 2: Bands for the 90- and 50-percentiles of 100 random initializations of the surrogates (non-tensorized) for the buckling_plate and nep2 examples. The dashed line is the minimal error achieved by the set-valued AAA method applied to the original function F.

3 Analysis of random sketching for function approximation

In this section we analyze the effect of random sketching on the approximation error. More precisely, we show that for a given fixed approximation of F, the corresponding approximation of the sketch enjoys, with high probability, similar accuracy. Let us emphasize that this setting does not fully capture Algorithm 1, because the approximation constructed by the algorithm depends on the random probing matrix in a highly nontrivial way. Nevertheless, we feel that the analysis explains the behavior of the algorithm for increasing ℓ and, more concretely, it allows us to cheaply and safely estimate the full error a posteriori via a surrogate (with an independent sketch).

3.1 Preliminaries

Let us consider a vector-valued function f: Σ → C^N on some domain Σ and with N ∈ N; in the matrix-valued setting above, f = vec(F) and N = n². Assuming that the components of f belong to L²(Σ), we can equivalently view f as an element of the tensor product space C^N ⊗ L²(Σ), which induces a linear operator from L²(Σ) to C^N. Note that this is a Hilbert–Schmidt operator of rank at most N. Applying the Schmidt decomposition [Hackbusch2019, Theorem 4.137] implies the following result.

Theorem 3.1.

Let f ∈ C^N ⊗ L²(Σ). Then there exist orthonormal vectors u_1, …, u_r ∈ C^N, orthonormal functions g_1, …, g_r ∈ L²(Σ), and scalars σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > 0 with r ≤ N such that

f = Σ_{k=1}^{r} σ_k u_k g_k.

We note that the singular values σ_1, …, σ_r are uniquely defined by f. By Theorem 3.1, the norm of f on Σ satisfies

‖f‖² = σ_1² + ⋯ + σ_r²,

where the norm combines the Euclidean norm of a vector with the L² norm on Σ. Extending the corresponding notion for matrices, the stable rank of f is defined as ρ := ‖f‖² / σ_1² and satisfies 1 ≤ ρ ≤ r.

Our analysis applies to an abstract approximation operator A defined on some subspace D ⊆ L²(Σ) and applied componentwise to vector-valued functions. We assume that A commutes with linear maps, that is,

A(M f) = M A(f) for every matrix M with N columns.   (12)

This property holds for any approximation of the form

A(f) = Σ_{j=1}^{d} f(z_j) b_j

for fixed points z_j and functions b_j ∈ L²(Σ), with the tacit assumption that functions in D allow for point evaluations. In particular, this includes polynomial and rational interpolation provided that the interpolation points and poles are considered fixed. The relation (12) implies A(V^T f) = V^T A(f) for any probing matrix V ∈ C^{N×ℓ}. With a slight abuse of notation we will simply write A f instead of A(f).

3.2 Full (non-tensorized) probing vectors

The following theorem treats surrogates of the form (10). It constitutes an extension of existing results [gratton2018improved, MR1337645] on small-sample matrix norm estimation.

Theorem 3.2.

With the notation introduced in section 3.1, let ρ denote the stable rank of f − A f (assumed nonzero), and let V be a real (in the real case) or complex (in the complex case) Gaussian random matrix with ℓ columns. Set s = V^T f, and let β = 1 in the real case and β = 2 in the complex case. Then for any τ > 0 we have

(13)

where γ denotes the lower incomplete gamma function and

(14)
Proof.

Applying the Schmidt decomposition from Theorem 3.1 to f − A f and using (12) yields

(15)

By the orthonormality of the vectors u_k and the functions g_k, the resulting squared terms are mutually independent chi-squared random variables with β degrees of freedom (up to scaling). Following well-established arguments [gratton2018improved, Roosta-K], we obtain

which proves (13).

The inequality (14) follows directly from the proof of [gratton2018improved, Theorem 3.1], which establishes the corresponding tail bound and thus implies (14). ∎

To provide some intuition on the implications of Theorem 3.2 for the complex case (β = 2), let us first note that for small arguments the lower incomplete gamma function behaves like γ(a, x) ≈ x^a / a. This shows that the failure probability in (13) is asymptotically proportional to τ^{βℓ}; in turn, increasing ℓ drastically reduces the failure probability provided that τ < 1. This means that if Algorithm 1 returns an approximant that features a small error for a surrogate with ℓ components, then the probability that the approximation error for the original function is more than ten times larger (τ = 1/10) is tiny, and the probability that the error is more than a hundred times larger is smaller by several further orders of magnitude. On the other hand, if there exists a good approximant for f, then (14) shows that it is almost guaranteed that the surrogate function admits a nearly equally good approximant (which is hopefully found by the AAA algorithm): the probability that the error of the surrogate approximant is significantly larger than that of the approximant for the original function is negligibly small.

Remark 3.3.

A large stable rank ρ would lead to a less favourable bound (13), but there is good reason to believe that the stable rank of f − A f remains modest in situations of interest. The algorithms discussed in this work are most meaningful when f admits good rational approximants. More concretely, this occurs when the best approximation error inf { ‖f − r‖ : r rational of degree d } decreases rapidly as d increases. In fact, this error decreases exponentially fast when f is analytic in an open neighborhood of the target set Σ (this is even true when the infimum is taken over the smaller set of polynomials). As a vector-valued rational function of degree d has rank at most d + 1 when viewed as an element of C^N ⊗ L²(Σ), the singular value σ_{d+2} of f is bounded by this best approximation error. This implies a rapid decay of the singular values and hence a small stable rank of f and, likewise, of f − A f.
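As a toy illustration of this decay argument (our own example, not taken from the paper): if the singular values decay geometrically, say

σ_k = 2^{1−k}, k = 1, 2, …,  then  ρ = ‖f‖² / σ_1² = Σ_{k≥1} 4^{1−k} = 1 / (1 − 1/4) = 4/3,

so the stable rank stays close to its minimal value 1.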

In practice, it may be convenient to use a real random matrix V for a complex-valued function f, for example, if the split form (2) has real matrices A_j and complex-valued functions f_j. Theorem 3.2 extends to this situation by applying it separately to the real and imaginary parts of f and using a union bound.

3.3 Tensorized probing vectors

The analysis of the surrogate (11) with tensorized probing vectors is significantly more complicated because tensorized random vectors fail to remain invariant under orthogonal transformations, an important ingredient in the proof of Theorem 3.2. As a consequence, the following partial results cover the case ℓ = 1 only. They are direct extensions of analogous results for matrix norm estimation [BujanovicKressner2021].

Theorem 3.4.

In the setting of Theorem 3.2 with ℓ = 1, consider s = u^T F v for real Gaussian random vectors u, v. Then for any τ > 0 we have

(16)

and

(17)
Proof.

Setting s = u^T F v and applying (15) yields

The last expression has been analyzed in the proof of Theorem 2.2 in [BujanovicKressner2021], where it is shown to satisfy the bound claimed in (16). Similarly, the bound claimed in (17) follows from the proof of Theorem 2.4 in [BujanovicKressner2021]. ∎

While both failure probability bounds of Theorem 3.4 tend to zero as τ → 0, the convergence predicted by (16) is rather slow. It remains an open problem to establish better rates for tensorized probing with ℓ > 1.

4 Applications and numerical experiments

Algorithm 1 was implemented in MATLAB. The set-valued AAA implementation (SV-AAA) we compare to (and which is also needed in Step 3 of Algorithm 1) is a modification of the original MATLAB code from [lietaert2018automatic], available at https://people.cs.kuleuven.be/karl.meerbergen/files/aaa/autoCORK.zip. The SV-AAA code implements an updating strategy for the singular value decomposition of the Löwner matrix defined in (7) to avoid its full recomputation when the degree is increased from d to d + 1. All default options in SV-AAA are preserved, except that the expansion points for AAA are greedily selected based on the maximum norm instead of the Euclidean norm. In addition, the stopping condition is based on the relative maximum norm of the approximation that SV-AAA builds over the whole sampling set Σ. So, for example, if SV-AAA is applied to the scalar functions f_1, …, f_s with a stopping tolerance reltol, then the algorithm terminates when the computed rational approximations r_1, …, r_s satisfy

max_{z∈Σ} |f_i(z) − r_i(z)| ≤ reltol · max_{z∈Σ} |f_i(z)|,   i = 1, …, s.
The experiments were run on an Intel i7-12700 with 64 GB RAM. The software to reproduce our experiments is publicly available at https://gitlab.unige.ch/Bart.Vandereycken/sketchAAA.

4.1 NLEVP benchmark collection

We first test our approach on the non-polynomial problems in the NLEVP benchmark collection [betcke2013nlevp, higham2019updated] considered in [NGT20]. Table 1 summarizes the key characteristics of these problems, including the matrix size and the number m of terms in the split form (2). The target set Σ is a disc or half-disc specified in a meaningful way for each problem separately; see [NGT20, Table 3] for details. We follow the procedure from [NGT20] for generating the sampling points of the target set: 300 interior points are obtained by randomly perturbing a regular point grid covering the disc or half-disc, and another 100 points are placed uniformly on the boundary. This gives a total of 400 points for the set Σ; a construction in this spirit is sketched below.
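For instance, in MATLAB (the grid resolution and perturbation size are our assumptions, not taken from [NGT20]):

% Sketch of the sampling-set construction for a disc of radius R.
R = 1;
[X, Y] = meshgrid(linspace(-R, R, 25));           % regular grid covering the disc
z = X(:) + 1i*Y(:);
z = z(abs(z) <= 0.95*R);                          % keep points inside the disc
z = z + 0.02*R*(randn(size(z)) + 1i*randn(size(z)));  % small random perturbation
interior = z(randperm(numel(z), 300));            % select 300 interior points
boundary = R*exp(2i*pi*(0:99).'/100);             % 100 uniform boundary points
Sigma = [interior; boundary];                     % |Sigma| = 400 sampling points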

# name size m # name size m
1 bent_beam 6 16 13 pdde_symmetric 81 3
2 buckling_plate 3 3 14 photonic_crystal 288 3
3 canyon_particle 55 83 15 pillbox_small 20 4
4 clamped_beam_1d 100 3 16 railtrack2_rep 1410 3
5 distributed_delay1 3 4 17 railtrack_rep 1005 3
6 fiber 2400 3 18 sandwich_beam 168 3
7 hadeler 200 3 19 schrodinger_abc 10 4
8 loaded_string 100 3 20 square_root 20 2
9 nep1 2 2 21 time_delay 3 3
10 nep2 3 10 22 time_delay2 2 3
11 nep3 10 4 23 time_delay3 10 3
12 neuron_dde 2 5 24 gun 9956 4
Table 1: Selected benchmark examples from the NLEVP collection.

4.1.1 Small NLEVPs

We first focus on the small problems from the NLEVP collection, that is, problems with matrix size below 1000 (the larger problems are considered separately in the following section). Tables 2 and 3 summarize the results with a stopping tolerance reltol of 1e-08 or 1e-12, respectively. For each problem we show both the degree and the attained relative Σ-uniform approximation error

max_{z∈Σ} ‖F(z) − R(z)‖ / max_{z∈Σ} ‖F(z)‖

of four algorithmic variants:

SV-AAA (split) refers to the set-valued AAA algorithm applied to the m scalar functions in the split form (2) of each problem.

SV-AAA (entries) refers to the set-valued AAA algorithm applied to all n² entries of the matrix F(z), which is only practical for relatively small problems.

sketchAAA is used with ℓ = 1 or ℓ = 4 full (non-tensorized) probing vectors. The reported degrees and errors are averaged over 10 random realizations of the probing vectors.

We find that in all considered cases the error achieved by sketchAAA with ℓ = 4 probing vectors is very close to the stopping tolerance. This is achieved without ever needing a degree significantly higher than that required by SV-AAA (split) and SV-AAA (entries), an important practical consideration for solving NLEVPs (see section 4.2.1 below). No timings are reported here because all algorithms return their approximations within a few milliseconds.

# SV-AAA (split) SV-AAA (entries) sketchAAA (ℓ=1) sketchAAA (ℓ=4)
degree relerr degree relerr degree relerr degree relerr
1 9 4.5e-10 8 2.9e-09 5.7 1.3e-04 8 1.8e-08
2 25 3.4e-09 25 3.4e-09 22.9 1.3e-05 25 3.5e-09
3 16 7.9e-11 14 1.6e-09 9.7 1.3e-05 12.3 7.2e-08
4 12 8.7e-09 12 6.6e-09 12 6.0e-06 12.2 6.4e-09
5 8 1.8e-10 8 9.5e-11 7 5.5e-05 7.8 2.3e-09
7 4 7.4e-09 4 7.1e-09 3.8 8.6e-07 4.2 2.3e-08
8 3 3.2e-16 3 3.0e-16 3 5.8e-14 3 2.4e-15
9 22 9.4e-09 21 7.7e-09 21 7.7e-09 21 7.7e-09
10 15 1.7e-09 13 7.1e-09 10.7 2.0e-04 13 2.4e-08
11 10 1.4e-09 9 3.8e-09 9 3.7e-06 9 8.4e-09
12 15 9.8e-09 14 5.5e-09 14 2.6e-07 14 6.2e-09
13 9 2.9e-09 9 3.1e-09 8.2 2.5e-05 9 4.0e-09
14 7 5.7e-11 6 3.8e-11 6.3 1.4e-08 5.3 9.1e-09
15 9 7.0e-11 7 3.8e-09 5.9 3.4e-05 7 3.8e-09
18 12 1.1e-14 6 1.6e-09 5.8 6.9e-08 5.7 4.1e-09
19 13 1.2e-09 12 4.3e-09 11 1.0e-05 12 6.5e-09
20 11 2.2e-09 11 2.8e-09 10.8 2.9e-08 10.9 6.2e-09
21 14 6.3e-09 14 1.3e-09 14 1.5e-09 14 1.3e-09
22 14 5.9e-09 14 4.7e-09 14 1.1e-08 14 4.6e-09
23 18 6.1e-09 14 5.0e-09 14 1.0e-08 14 4.4e-09
Table 2: Small problems of the NLEVP collection. Stopping tolerance reltol is 1e-08. Sketching is done with full (non-tensorized) vectors.
# SV-AAA (split) SV-AAA (entries) sketchAAA (ℓ=1) sketchAAA (ℓ=4)
degree relerr degree relerr degree relerr degree relerr
1 12 4.9e-14 11 4.8e-13 7.8 1.7e-06 10.8 1.8e-11
2 30 5.6e-13 30 5.6e-13 26.8 2.2e-07 30 8.3e-13
3 23 1.2e-14 21 2.4e-13 15.3 1.1e-07 19.4 7.2e-12
4 15 4.9e-13 15 5.9e-13 15 3.6e-08 15 5.8e-13
5 10 3.8e-15 9 4.0e-13 9 2.9e-07 9 5.4e-13
7 10 3.0e-13 10 3.3e-13 9 1.0e-08 10 4.9e-13
8 3 3.2e-16 3 3.0e-16 3 5.8e-14 3 2.4e-15
9 27 3.5e-13 27 2.9e-13 27 3.0e-13 27 3.0e-13
10 18 9.5e-14 17 1.5e-13 12.6 2.1e-05 17 1.3e-12
11 13 9.2e-14 12 8.6e-14 11 3.5e-08 12 1.2e-13
12 19 8.0e-14 18 6.6e-14 17 6.5e-09 18 7.7e-14
13 11 7.2e-13 11 6.6e-13 10.6 1.9e-07 11 5.7e-13
14 8 9.4e-16 7 6.7e-16 7.8 8.8e-12 7.3 2.4e-15
15 13 4.6e-15 11 2.0e-13 9.3 6.6e-08 11 1.7e-13
18 16 7.6e-16 11 2.1e-13 10.8 6.0e-10 10.6 5.1e-13
19 15 2.8e-13 15 5.3e-14 13 3.2e-07 14.8 8.9e-13
20 15 3.7e-13 15 5.5e-13 14.9 3.1e-12 15.1 8.8e-13
21 18 8.0e-14 18 4.6e-14 17 7.9e-12 17.8 2.4e-13
22 18 8.0e-14 18 8.6e-14 17 4.3e-10 18 7.7e-14
23 23 1.6e-13 18 2.3e-13 17 2.5e-10 18 2.6e-13
Table 3: Small problems of the NLEVP collection. Stopping tolerance reltol is 1e-12. Sketching with full (non-tensorized) vectors.

4.1.2 Large NLEVPs

This section considers the large problems from the NLEVP collection listed in Table 1, all with problem sizes above 1000. More precisely, we consider problems 6 (size 2400), 16 (size 1410), 17 (size 1005), and 24 (size 9956). For such problem sizes, the application of set-valued AAA to each entry of F is no longer feasible and hence this variant is not reported. In order to simulate a truly black-box sampling of these eigenvalue problems when using full (non-tensorized) probing vectors as in (10), we use the MATLAB function shown in Algorithm 2. This function obtains the samples without forming the large Gaussian probing matrix explicitly. Instead, the sparsity pattern of F(z) is inferred on the fly as the sampling proceeds, and random rows of the probing matrix are only drawn for positions that have been nonzero at some sampling point. In the case of tensorized probing as in (11), we can exploit sparsity more easily by computing F(z) v_k, followed by the computation of u_k^T (F(z) v_k) for the kth pair of probing vectors.

The execution times are now more significant and are reported in Tables 4 and 5, together with the required degree and the achieved relative approximation error. Table 5 shows that tensorized sketches lead to a faster algorithm with similar accuracy compared to the full sketches reported in Table 4. As for the small NLEVP problems in the previous section, we find that sketchAAA with ℓ = 4 probing vectors is reliable and yields an approximation error close to the stopping tolerance reltol.

We note that these four problems are also available in split form (2), and both the non-tensorized and the tensorized probing can be further sped up (sometimes significantly) by precomputing the products of the random vectors with the matrices A_j. This is particularly the case for the gun problem (number 24), for which sketchAAA spends most of its time on the evaluation of F at the support points. Exploiting the split form reduces this time drastically, as can be seen from the rows in Tables 4 and 5 labelled with "24*". The case ℓ = 4 is particularly interesting as, coincidentally, the problem also has m = 4 terms and, in turn, the set-valued approximations for SV-AAA (split) and sketchAAA both involve four functions. For tensorized sketching (Table 5), sketchAAA is faster than SV-AAA (split) while returning an accurate approximation of lower degree. This nicely demonstrates that our approach to exploiting the split form is genuinely different from the set-valued AAA algorithm in [lietaert2018automatic]: sketchAAA takes the contributions of the coefficient matrices A_j into account, while the set-valued AAA algorithm only approximates the scalar functions f_j and is blind to the A_j. The following section illustrates this further.

function [vals, ind] = sparseprobe(F, Z, ell)
vals = zeros(ell,length(Z)); ind = []; V = [];
for i = 1:length(Z)
    Fz = F(Z(i)); indi = find(Fz);             % evaluate F & find nonzeros
    if ~isequal(indi,ind)                      % reduce use of slow setdiff
        sd = setdiff(indi,ind);
        if ~isempty(sd)                        % any new nonzeros?
            ind = [ind; sd];                   % add them to ind
            V = [V, randn(ell,length(sd))];    % and expand V
            [ind, r] = sort(ind); V = V(:,r);  % preserve index order
        end
    end
    vals(:,i) = V*Fz(ind);                     % probe nonzeros
end
Algorithm 2 MATLAB code for probing a sparse F(z) on the target set Z using ell random vectors. Returns the samples in vals and the indices of the nonzeros of F in ind.
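A hypothetical usage example (the toy sparse function below is ours; sparseprobe is the function from Algorithm 2):

% Probe a toy sparse matrix function on 400 sampling points.
n = 1000;
T = spdiags(randn(n, 3), -1:1, n, n);   % sparse tridiagonal matrix
F = @(z) T - z*speye(n);                % toy sparse matrix-valued function
Z = linspace(0, 1, 400);                % sampling points
[vals, ind] = sparseprobe(F, Z, 4);     % ell = 4; the probing matrix is never formed
% vals is 4-by-400 and feeds directly into the set-valued AAA method.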
reltol # SV-AAA (split) sketchAAA (ℓ=1) sketchAAA (ℓ=4)
degree error degree error degree error
time (s.) time (s.) time (s.)
1e-08 6 14 9.0e-11 8.8 2.7e-05 9.8 2.7e-07
0.014 0.039 0.045
16 3 5.5e-16 3 6.4e-14 3.7 1.4e-14
0.006 0.280 0.299
17 3 3.8e-16 3.9 3.0e-13 3.7 2.1e-14
0.003 0.461 0.482
24 11 2.9e-11 7.3 1.5e-05 8.2 1.9e-08
0.016 1.018 1.097
24* 0.019 0.024
1e-12 6 19 1.5e-14 14.3 3.6e-07 14.9 2.5e-11
0.006 0.041 0.045
16 3 5.5e-16 3 6.4e-14 3.7 1.4e-14
0.007 0.281 0.297
17 3 3.8e-16 3.9 3.0e-13 3.7 2.1e-14
0.002 0.458 0.481
24 15 3.5e-15 11 8.0e-08 12.8 2.6e-12
0.020 1.024 1.100
24* 0.022 0.030
Table 4: Large problems of the NLEVP collection. Sketching with full (non-tensorized) vectors. The split form is not exploited by sketchAAA except for problem 24*.
reltol # SV-AAA (split) sketchAAA (ℓ=1) sketchAAA (ℓ=4)
degree error degree error degree error
time (s.) time (s.) time (s.)
1e-08 6 14 9.0e-11 9.4 2.0e-05 9.8 2.9e-07
0.014 0.016 0.027
16 3 5.5e-16 3.1 2.6e-14 3.9 1.5e-15
0.006 0.076 0.106
17 3 3.8e-16 3.6 1.2e-14 3.5 3.1e-15
0.003 0.098 0.135
24 11 2.9e-11 6.9 1.4e-05 8.1 2.0e-08
0.016 0.473 0.679
24* 0.009 0.011
1e-12 6 19 1.5e-14 14.1 3.9e-07 15 2.6e-11
0.007 0.017 0.028
16 3 5.5e-16 3.1 2.6e-14 3.9 1.5e-15
0.007 0.076 0.108
17 3 3.8e-16 3.6 1.2e-14 3.5 3.1e-15
0.002 0.097 0.133
24 15 3.5e-15 10.4 3.5e-07 13 4.1e-13
0.020 0.477 0.673
24* 0.013 0.018
Table 5: Large problems of the NLEVP collection. Sketching with tensorized vectors. The split form is not exploited by sketchAAA except for problem 24*.

4.2 An artificial example on the split form

The difference in the approximation degrees returned by SV-AAA (split) and sketchAAA becomes particularly pronounced when cancellations occur between the different terms of the split form or when the terms are of significantly different scales. To demonstrate this effect by an extreme example, we consider a function F(z) = f_1(z) A + f_2(z) B in which the second term is scaled down by many orders of magnitude, where A and B are random matrices of unit spectral norm and f_1, f_2 are two smooth scalar functions forming the split form. The sampling set Σ contains 100 equidistant points on a real interval. Since SV-AAA scales the functions to have unit ∞-norm on Σ, the results would remain the same if the extreme scaling factor were moved from the scalar functions into the coefficient matrices. In Table 6, we clearly see that SV-AAA (split) overestimates the degree since it puts too much emphasis on resolving the negligible term.
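Concretely, a toy construction in this spirit (our own, not the paper's exact example):

% Toy instance of the scaling effect: the second term is 10 orders of
% magnitude smaller than the first.
n = 50;
A = randn(n); A = A / norm(A);                  % unit spectral norm
B = randn(n); B = B / norm(B);
f1 = @(z) exp(z);  f2 = @(z) 1 ./ (1.2 - z);    % two smooth scalar functions
F  = @(z) f1(z)*A + 1e-10 * f2(z)*B;
% SV-AAA (split) normalizes f1 and f2 and must resolve both to the requested
% tolerance; sketchAAA only sees V'*vec(F(z)), where the second term is
% negligible, and therefore needs a much lower degree.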

reltol SV-AAA (split) SV-AAA (entries) sketchAAA
degree error degree error degree error
1e-08 24 1.1e-09 8 4.1e-09 8 4.1e-09
1e-12 29 5.0e-13 18 2.5e-13 18 2.9e-13
Table 6: Artificial example demonstrating scaling issues arising with the split form used by SV-AAA (split). Sketching with full (non-tensorized) vectors. Our method sketchAAA is immune to such effects.

4.2.1 Impact on numerical solution of NLEVP

As explained in [GT17, Section 6] and further analyzed in [NGT20, Section 2], an accurate uniform rational approximation on the target set is crucial for a robust linearization-based NLEVP solver. Once the rational approximant R of F is obtained, it can be linearized in various ways; see, e.g., [lietaert2018automatic, EG19, NGT20]. Specifically, Theorem 3 in [EG19] derives a (strong) linearization L(z) = A − zB of a rational matrix function in barycentric form (4), as returned by sketchAAA. Eigenvalues in the target set can then be computed from the linearization, e.g., iteratively by applying a rational Krylov subspace method to the pencil (A, B). This in turn yields approximations to the eigenvalues of F. As the size of the linearization and, in turn, the cost of this approach increase linearly with the degree d, there is a clear advantage gained from the fact that sketchAAA often yields a rational approximation of smaller degree compared to SV-AAA.

4.3 Scattering problem

We apply sketchAAA to the Helmholtz equation with absorbing boundary conditions describing a scattering problem on the unit disc; see [Pradoverathesis, Sec. 5.5.4]. The vector-valued function f(z) containing the discretized solution for a wavenumber z is no longer given in split form and depends rationally on z: f(z) solves a linear system whose coefficient matrix is quadratic in z and built from the stiffness matrix K, the damping matrix C, and the mass matrix M, which are real non-symmetric sparse matrices of size 20054 obtained from a finite element discretization. The (complex) entries of f(z) contain the nodal values of the finite element solution. The Euclidean norm of f(z) over the real sampling interval is depicted in Figure 3. Although there are no poles on the real axis, some are quite close to it, resulting in large peaks in ‖f(z)‖. We therefore expect that a rather large degree for the rational approximant of f will be needed.

The computational results of sketchAAA applied to f are reported in Table 7. The set Σ contains 400 equidistant points in the target interval. We observe that a large degree is indeed needed to get high accuracy. While the standard value of ℓ = 4 performs decently, a larger sketch size (ℓ = 16) is needed for the error of the approximant to become comparable to that of the surrogate. This behavior is reflected in our analysis in section 3.2: according to Remark 3.3, slower convergence of rational approximations signals a larger stable rank, which in turn leads to less favorable probabilistic bounds in Theorem 3.2 that have to be compensated by increasing ℓ. However, let us stress that even for ℓ = 24 the rational approximation can be computed very quickly and is still more than 500 times faster than applying SV-AAA without sketching.

The timings in Table 7 do not include the evaluation of f and of the error on the sampling set Σ, which is needed for all methods regardless of sketching. Since a large linear system has to be solved for each z, evaluating f(z) is expensive. One of the benefits of rational approximation is that the approximant can be evaluated much faster: the most accurate approximant in Table 7 can be evaluated in less than 0.002 seconds, whereas evaluating f(z) requires 0.2 seconds.

reltol ℓ time (s.) degree error
1e-08 0 21.70 36.0 2.2e-09
1 0.017 29.8 8.5e-06
4 0.016 34.9 6.4e-08
8 0.020 35.2 3.7e-08
16 0.029 35.9 6.1e-09
24 0.039 35.9 7.2e-09
1e-12 0 28.479 43.0 9.6e-13
1 0.012 34.6 1.1e-07
4 0.019 41.1 6.6e-11
8 0.026 42.8 3.6e-12
16 0.038 43.0 1.8e-12
24 0.049 43.1 1.2e-12
Table 7: Scattering problem from section 4.3: time to execute sketchAAA given the evaluation of f on Σ, together with the degree and the relative error in the Σ-uniform norm of the resulting approximation, averaged over 10 random realizations. The value ℓ = 0 indicates no sketching and corresponds to SV-AAA applied to the entire vector-valued function f.
Figure 3: Scattering problem. Left: the Euclidean norm of f(z) as a function of the wavenumber z. Right: the relative error of the rational approximants for different numbers ℓ of probing vectors. The dark grey lines indicate the requested tolerances, namely 1e-08 and 1e-12.

4.4 Boundary element matrices

Our last example is a nonlinear eigenvalue problem that arises from the boundary integral formulation of an elliptic PDE eigenvalue problem [Steinbach2009]. More specifically, we consider the Laplace eigenvalue problem on the Fichera corner from [Effenberger2012a, Section 4]. Applying the boundary element method (BEM) to this problem results in a matrix-valued function F(z) that is dense and not available in split form. Also, the entries of F(z) are expensive to evaluate. Note, however, that hierarchical matrix techniques could be used to significantly accelerate the assembly, resulting in a representation of F(z) that allows for fast matrix-vector products [SauterSchwab]. Usually, the smallest (real) eigenvalues of F are of interest; we therefore consider a real interval containing the smallest eigenvalue, discretized by 200 equidistant points.

The number of boundary elements determines the size of the matrix F(z). We present two sets of numerical experiments, depending on whether we can store the evaluations of F at all sampling points on our machine with 64 GB RAM.

Storage possible

Table 8 shows the computational results for the largest problem size that still allows storing all necessary evaluations of F. As for the scattering problem, we see that larger sketch sizes are needed, but sketchAAA remains very fast and accurate. For example, with ℓ = 16 and a stopping tolerance of 1e-08, sketchAAA is about 220 times faster than set-valued AAA applied to F and achieves comparable accuracy. For a stopping tolerance of 1e-12, sketchAAA with ℓ = 16 is about 350 times faster than set-valued AAA applied to F.

In all cases of Table 8, we excluded the 3.2 seconds it took to evaluate F(z) for all z ∈ Σ. Both the set-valued AAA and the sketchAAA method require these evaluations.

reltol ℓ time (s.) degree error
1e-08 0 15.147 11.0 1.4e-09
1 0.023 7.4 1.9e-04
4 0.030 10.0 1.3e-07
8 0.045 10.3 2.6e-08
16 0.067 10.8 5.2e-09
1e-12 0 21.737 14.0 2.0e-13
1 0.023 9.1 2.1e-05
4 0.034 12.9 5.7e-11
8 0.046 13.3 7.1e-12
16 0.061 14.0 2.6e-13
Table 8: BEM problem from section 4.4: time to execute sketchAAA given the evaluation of F on Σ, together with the degree and the relative error in the Σ-uniform norm of the resulting sketchAAA approximation, averaged over 10 random realizations. The value ℓ = 0 indicates no sketching and corresponds to SV-AAA applied to F. For ℓ ≥ 1, full (non-tensorized) sketches were used.
Storage not possible

For larger problems, F(z) is evaluated when needed but never stored for all z ∈ Σ. (A considerable part of the cost of assembling the BEM matrix can be amortized when evaluating several F(z) at the same time. We therefore evaluate and store F(z) for 20 values of z at once and perform, where possible, all computations on this batch before moving on to the next.) In Table 9 we list the results for tolerance reltol = 1e-12. The degrees and times for full and tensorized sketches are very similar, and we therefore only show those for tensorized sketches. The errors for both the full and the tensorized sketches are shown, even though they, too, are similar.

We observe that the runtime of the whole algorithm is now considerably higher and, as expected, it grows with the problem size. The degree and the error, on the other hand, remain very similar to those for the smaller problem, and the main conclusion remains: for ℓ = 1 the sketchAAA approximation is not accurate, but already for ℓ = 4 we obtain satisfactory results at the expense of only a slight increase in runtime. In addition, even a large number of sketches is computationally feasible, whereas applying SV-AAA to the original problem is far beyond what is possible on a normal desktop.

size ℓ time (s.) degree error (tens.) error (full)
384 1 26.134 9.4 1.2e-05 2.1e-05
4 30.490 12.9 4.1e-11 5.7e-11
8 31.132 13.4 8.7e-12 6.8e-12
16 31.968 14.0 2.8e-13 2.6e-13
24 32.017 14.0 2.2e-13 2.2e-13
864 1 130.172 9.1 7.6e-06 1.4e-05
4 154.802 13.0 7.6e-11 5.0e-11
8 159.993 13.8 6.6e-12 1.3e-11
16 161.689 14.0 9.0e-13 7.5e-13
24 161.614 14.0 6.9e-13 4.9e-13
1536 1 412.007 9.0 1.4e-05 8.6e-06
4 489.553 12.9 5.5e-10 7.8e-11
8 507.866 13.8 1.2e-11 1.5e-11
16 512.706 14.0 1.4e-12 1.5e-12
24 512.984 14.0 9.3e-13 1.2e-12
2400 1 1010.040 9.1 6.6e-06 1.5e-05
4 1194.473 12.9 2.4e-10 1.3e-10
8 1244.731 13.9 6.7e-12 1.2e-11
16 1250.988 14.0 1.5e-12 1.8e-12
24 1252.066 14.0 1.5e-12 1.4e-12
Table 9: BEM problem from section 4.4: time to run sketchAAA where F is evaluated on the fly, together with the average degree and the relative error in the Σ-uniform norm of the resulting AAA approximation over 10 random realizations. All numbers correspond to tensorized sketches, except those in the column "error (full)", which reports the error obtained with full (non-tensorized) sketches. In all cases the tolerance reltol was 1e-12.

5 Conclusions

We have presented and analyzed a new randomized sketching approach that allows for the fast and efficient rational approximation of large-scale vector- and matrix-valued functions. Compared to the original surrogate AAA approach in [EG19], our method sketchAAA reliably achieves high approximation accuracy by using multiple (tensorized or non-tensorized) probing vectors. We have demonstrated the method's performance on a number of nonlinear functions arising in several applications. Compared to the set-valued AAA method from [lietaert2018automatic], our method works efficiently even when the split form of the function to approximate has a large number of terms, and even when the problem is only accessible via function evaluations. We believe that sketchAAA is the first rational approximation method that combines these advantages.

While our focus was on AAA and NLEVPs, let us highlight once more that our sketching approach is not limited to such settings. In principle, any linear (e.g., polynomial or fixed-denominator rational) approximation scheme applied to a large-scale vector-valued function can be accelerated by this approach. For example, efforts are currently underway to develop a sketched RKFIT method [berljafa2017rkfit] as a replacement for the surrogate-AAA eigensolver in the MATLAB Rational Krylov Toolbox (the latter currently uses only a single sketching vector).

We believe that there are a number of interesting research directions arising from this work. One is a possible extension to the multivariate case: a multivariate p-AAA algorithm has recently been proposed in [rodriguez2020p], but it is not immediately clear whether the sketching idea pursued here can be extended to this algorithm. Another potential improvement of sketchAAA in the case of many sampling points is to replace the SVDs for the least-squares problems (6) by a sketching-based least squares solver such as [rokhlin2008fast], similarly to what has been done in [nakatsukasa2022fast]. Finally, we hope that the analysis provided in this paper might shed some more light on the accuracy of contour integral-based solvers for linear eigenvalue problems Av = λBv. These methods can be viewed as pole finders for the resolvent after random tensorized probing of the form u^T (A − zB)^{-1} v.

Acknowledgments

We thank Davide Pradovera for providing us with the code for the scattering problem considered in section 4.3.

References