
# Randomized sketching of nonlinear eigenvalue problems

Rational approximation is a powerful tool for obtaining accurate surrogates of nonlinear functions that are easy to evaluate and linearize. The interpolatory adaptive Antoulas–Anderson (AAA) method is one approach to construct such approximants numerically. For large-scale vector- and matrix-valued functions, however, the direct application of the set-valued variant of AAA becomes inefficient. We propose and analyze a new sketching approach for such functions, called sketchAAA, that, with high probability, leads to much better approximants than previously suggested approaches while retaining efficiency. The sketching approach works in a black-box fashion where only evaluations of the nonlinear function at sampling points are needed. Numerical tests with nonlinear eigenvalue problems illustrate the efficacy of our approach, with speedups above 200 for sampling large-scale black-box functions without sacrificing accuracy.


## 1 Introduction

This work is concerned with approximating vector-valued and matrix-valued functions, with a particular focus on functions that arise in the context of nonlinear eigenvalue problems (NLEVP)

 $F(z)v = 0, \quad v \neq 0,$ (1)

where F is a matrix-valued function depending nonlinearly on the eigenvalue parameter z.

Recent algorithmic advances have made it possible to efficiently compute an accurate rational approximant of a scalar function on a compact (and usually discrete) target set Σ in the complex plane ℂ. Some of the available methods are the adaptive Antoulas–Anderson (AAA) algorithm [nakatsukasa2018aaa], the rational Krylov fitting (RKFIT) algorithm [berljafa2017rkfit], vector fitting [gustavsen1999rational], minimal rational interpolation [Pradovera2020], and methods based on Löwner matrices [mayo2007framework]. All of these methods can be used or adapted to approximate multiple scalar functions on the target set simultaneously. In particular, there are several variants of AAA that approximate multiple functions by a family of rational interpolants sharing the same denominator, including the set-valued AAA algorithm [lietaert2018automatic], fastAAA [hochman2017fastaaa], and weighted AAA [NGT20]. See also [EG21] for a discussion and comparison of some of these methods.

AAA-type algorithms can be used with very little user input and have enabled an almost black-box approximation of NLEVPs or transfer functions in model order reduction. However, the computation of degree-d rational interpolants via the set-valued AAA algorithm involves the solution of least squares problems with dense matrices of size roughly mN×(d'+1) for every intermediate degree d' = 1, …, d, where m = |Σ| denotes the number of sampling points and N the number of approximated functions, adding up to O(mNd³) complexity. The greedy search of interpolation nodes in AAA also requires the repeated evaluation of the N rational interpolants at all sampling points in Σ and the storage of the corresponding function values. As a consequence, the main use case for the multiple-function AAA variants to date have been problems that can be written in the split form

 $F(z) = \sum_{i=1}^{s} f_i(z) A_i,$ (2)

where the number of terms s is small, the f_i are known scalar functions, and the A_i are fixed coefficient matrices. While, in principle, it is always possible to write an arbitrary matrix-valued function F(z) ∈ ℂ^{n×n} in split form using n² terms, it would be prohibitive to apply the set-valued AAA approach to large-scale problems in such a naive way.

The work [EG19] suggested an alternative approach where the original (scalar-valued) AAA algorithm is applied to a scalar surrogate f(z) = u^T F(z) v with random probing vectors u and v, resulting in a rational interpolant in barycentric form

 $r^{(d)}(z) = \sum_{i=0}^{d} \frac{w_i f(z_i)}{z - z_i} \bigg/ \sum_{i=0}^{d} \frac{w_i}{z - z_i}$ (3)

with support points z_i (the interpolation nodes) and weights w_i. Using this representation, a rational interpolant R^{(d)} of the original function F is then obtained by replacing in (3) every occurrence of f(z_i) by the evaluation F(z_i):

 $R^{(d)}(z) = \sum_{i=0}^{d} \frac{w_i F(z_i)}{z - z_i} \bigg/ \sum_{i=0}^{d} \frac{w_i}{z - z_i}.$ (4)
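To make the barycentric representation concrete, the following minimal NumPy sketch evaluates (3)/(4) at a point (the paper's reference implementation is in MATLAB; the function and variable names here are our own):

```python
import numpy as np

def bary_eval(z, support, fvals, weights):
    """Evaluate a barycentric rational interpolant at the point z.

    support : (d+1,) support points z_i
    fvals   : (d+1, ...) samples f(z_i); trailing axes may hold vector or
              matrix values, as in (4)
    weights : (d+1,) barycentric weights w_i
    """
    diff = z - support
    hit = np.isclose(diff, 0.0)
    if hit.any():                       # z is a support point: interpolate
        return fvals[np.argmax(hit)]
    coeff = weights / diff              # w_i / (z - z_i)
    numer = np.tensordot(coeff, fvals, axes=(0, 0))
    return numer / coeff.sum()

# With the classical polynomial weights for the equispaced nodes 0,1,2,3
# (proportional to (-1)^i * binom(3, i)), the formula reproduces the
# degree-3 polynomial interpolant, hence f(z) = z^2 exactly:
nodes = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([1.0, -3.0, 3.0, -1.0])
r_half = bary_eval(0.5, nodes, nodes**2, weights)   # -> 0.25
```

Note that the interpolation property at the support points holds for any choice of nonzero weights; it is the weights that determine the poles of the rational function.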

The intuition is that f and F will almost surely have the same region of analyticity, hence interpolating F using the same interpolation nodes and poles as for f should result in a good approximant. This surrogate approach indeed alleviates the complexity and memory issues even when F has a large number of terms in its split form (2), and it can also be applied if F is only available as a black-box function returning evaluations F(z). However, a comprehensive numerical comparison in [NGT20] in the context of solving NLEVPs (1) has revealed that this surrogate approach is not always reliable and may lead to poor accuracy. Indeed, the Σ-uniform error for the original problem

 $\|F - R^{(d)}\|_\Sigma := \max_{z \in \Sigma} \|F(z) - R^{(d)}(z)\|_F$ (5)

can be significantly larger than the corresponding error for the scalar surrogate problem. In order to mitigate this issue for black-box functions F, i.e., those not available in split form (2), a two-phase surrogate AAA algorithm with cyclic Leja–Bagby refinement has been developed in [NGT20]. While this algorithm is indeed robust in the sense that it returns a rational approximant with user-specified accuracy, it is computationally more expensive than the set-valued AAA algorithm and sometimes returns approximants of unnecessarily high degree; see, e.g., [NGT20, Table 5.2].

In this work, we propose and analyze a new sketching approach for matrix-valued functions called sketchAAA that, with high probability, leads to much better approximants than the scalar surrogate AAA approach. At the same time, it remains equally efficient. While we demonstrate the benefits of the sketching approach in combination with the set-valued AAA algorithm and mainly test it on functions from the updated NLEVP benchmark collection [betcke2013nlevp, higham2019updated], the same technique can be combined with other linear interpolation techniques (including polynomial interpolation at adaptively chosen nodes). Our sketching approach is not limited to nonlinear eigenvalue problems either and can be used for the approximation of any vector-valued function f. The key idea is to sketch f using a thin (structured or unstructured) random probing matrix V, i.e., to compute samples of the form V^T f(z), and then to apply the set-valued AAA algorithm to the ℓ components of these samples. We provide a probabilistic assessment of the approximation quality of the resulting samples by building on the improved small-sample bounds of the matrix Frobenius norm in [gratton2018improved].

The remainder of this work is organized as follows. In section 2 we briefly review the AAA algorithm [nakatsukasa2018aaa] and the surrogate approach from [EG19], and then introduce our new probing forms using either tensorized or non-tensorized probing vectors. We also provide a pseudocode of the resulting sketchAAA algorithm. Section 3 is devoted to the analysis of the approximants produced by sketchAAA. Our main result for the case of non-tensorized probing, Theorem 3.2, provides an exponentially converging (in the number of samples ℓ) probabilistic upper bound on the approximation error of the sketched problem compared to the approximation of the full problem. We also provide a weaker result for tensorized probing in the case ℓ = 1, covering the original surrogate approach in [EG19]. Several numerical tests in section 4 on small- to large-scale problems show that our new sketching approach is reliable and efficient. For some of the large-scale black-box problems we report speedup factors above 200 compared to the set-valued AAA algorithm while retaining comparable accuracy.

## 2 Surrogate functions and the AAA algorithm

The AAA algorithm [nakatsukasa2018aaa] is a relatively simple but usually very effective method for obtaining a good rational approximant of the form (3) for a given function f. The approximation is sought on a finite discretization Σ of the target set, which may contain a large number of sampling points. The method iteratively selects support points (interpolation nodes) by adding to a previously computed set of support points a new point at which the maximal error |f(z) − r^{(d)}(z)| over Σ is attained. After that, the rational approximant is updated by computing the weights w as the minimizer of the linearized approximation error

 $\min_{w \in \mathbb{C}^{d+1}} \|Lw\|_2 \quad \text{such that} \quad \|w\|_2 = 1.$ (6)

Here, L is a Löwner matrix of size (m−d−1)×(d+1), with m = |Σ|, defined as

 $L_{ij} = \frac{f(\hat z_i) - f(z_j)}{\hat z_i - z_j},$ (7)

where the points ẑ_i comprise the sampling points in Σ that are not support points. The minimization problem (6) can be solved exactly by an SVD at a cost of O(md²) flops. Since the matrix L is only altered in one row and column after each iteration, updating strategies can be used to lower the computational cost of its SVD; see, e.g., [hochman2017fastaaa, lietaert2018automatic]. However, the numerical stability of such updating strategies is not guaranteed.

It is not difficult to extend AAA to the approximation of a vector-valued function f: Σ → ℂ^N. Firstly, the vector-valued version of (3) maps into ℂ^N since the samples f(z_i) are vector-valued. However, the support points and weights remain scalars. The selection of the support points can still be done greedily by, at iteration d, choosing a support point that maximizes ∥f(z) − r^{(d−1)}(z)∥ over Σ for some norm ∥·∥. In practice, the infinity or Euclidean norm usually works well but more care is sometimes needed when the components of f live on different scales; see [lietaert2018automatic, NGT20]. Likewise, the weights are computed from a block-Löwner matrix where each entry in (7) is now a column vector of length N. The matrix L is then of size (m−d−1)N×(d+1), increasing the cost of its SVD to O(mNd²) flops. When computing these SVDs for all degrees 1, …, d as is required by AAA, the cumulative cost is O(mNd³). For large N, this becomes prohibitive.
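The linearized least-squares step (6)–(7) in its block form can be sketched in a few lines of NumPy (our own toy illustration; SV-AAA proper also performs the greedy node selection and SVD updating, which are omitted here):

```python
import numpy as np

def aaa_weights(support, f_support, test_pts, f_test):
    """Weights of (6): smallest right singular vector of the block-Loewner
    matrix (7).

    support, test_pts : support points z_j and remaining sample points zhat_i
    f_support, f_test : corresponding samples, shape (points, N components)
    """
    # entry (i, j, k): (f_k(zhat_i) - f_k(z_j)) / (zhat_i - z_j)
    L = (f_test[:, None, :] - f_support[None, :, :]) \
        / (test_pts[:, None, None] - support[None, :, None])
    L = np.moveaxis(L, 2, 1).reshape(-1, len(support))  # stack components
    _, S, Vh = np.linalg.svd(L, full_matrices=False)
    return Vh[-1].conj(), S[-1]   # weights and linearized residual

# f(z) = 1/(z-5) is itself rational of type (0,1), so with two support
# points the linearized residual vanishes and the interpolant is exact
f = lambda z: 1.0 / (z - 5.0)
zs = np.array([0.0, 1.0])
zh = np.array([2.0, 3.0, 4.0, 6.0, 7.0])
w, s_min = aaa_weights(zs, f(zs)[:, None], zh, f(zh)[:, None])
r3 = (w * f(zs) / (3.0 - zs)).sum() / (w / (3.0 - zs)).sum()  # -> -0.5
```

Since the barycentric form is invariant under scaling of w, the sign and normalization returned by the SVD are irrelevant.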

As explained in the introduction, a way to lower the computational cost for a vector-valued function f is to work with a scalar surrogate function that hopefully shares the same region of analyticity as f. In [EG19] this function was chosen with a tensorized probing vector:

 $g_{\mathrm{tens}}(z) = (v \otimes u)^T f(z) \quad \text{with fixed random vectors } u, v \in \mathbb{C}^n.$ (8)

The reason for this construction with a tensor product is that [EG19] focused on nonlinear eigenvalue problems where f(z) = vec(F(z)) with F(z) ∈ ℂ^{n×n}. This allows for the efficient evaluation g_tens(z) = u^T F(z) v, which becomes particularly advantageous when fast matrix-vector products with F(z) are available. In the case that F is in the split form (2), only a matrix-vector product with each A_i is needed. A similar surrogate can be obtained for a general vector-valued function f: Σ → ℂ^N by using a full (non-tensorized) probing vector:

 $g_{\mathrm{full}}(z) = v^T f(z) \quad \text{with a fixed random vector } v \in \mathbb{C}^N.$ (9)

For both surrogate constructions, we apply AAA to g_tens or g_full and use the computed support points and weights to define R^{(d)} as in (3), replacing the scalar function values by the vectors f(z_i). Since the surrogate functions are scalar-valued, the computational burden is clearly much lower than applying the set-valued AAA method to the vector-valued function f.

While computationally very attractive, the approach of building a scalar surrogate does unfortunately not always result in very accurate approximants. To illustrate this, we consider the matrix-valued function F of the buckling_plate example from [higham2019updated]. In Figure 1 the errors for both surrogates (8) and (9) are shown in blue. While AAA fully resolves each surrogate to a very small absolute error, the absolute errors of the corresponding matrix-valued approximants to F stagnate at a much larger level.

An important observation put forward in this paper is that by taking multiple random probing vectors (and thereby a vector-valued surrogate function g), the approximation error obtained with the set-valued AAA method can be improved, sometimes dramatically. In particular, we consider

 $g_{\ell,\mathrm{full}}(z) = V^T f(z) \quad \text{with a random matrix } V \in \mathbb{C}^{N \times \ell}$ (10)

and the tensorized version

 $g_{\ell,\mathrm{tens}}(z) = [v_1 \otimes u_1, \ldots, v_\ell \otimes u_\ell]^T f(z) \quad \text{with random vectors } u_i, v_i \in \mathbb{C}^n.$ (11)

Both probing variants can be sped up if F (and hence f) is available in the split form (2) by precomputing the products of the random vectors with the matrices A_i. The tensorized variant is computationally attractive when matrix-vector products with F(z) can be computed efficiently since f(z) is the vectorization of F(z) and hence (v_i ⊗ u_i)^T f(z) = u_i^T F(z) v_i. In both cases we obtain a vector-valued surrogate function g with ℓ components to which the set-valued AAA method can readily be applied. The resulting algorithm sketchAAA is summarized in Algorithm 1.
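The two probing variants (10) and (11), and the Kronecker identity that makes the tensorized one cheap, can be illustrated in a few lines of NumPy (our own toy example with a random stand-in for F(z)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, ell = 40, 4
F = rng.standard_normal((n, n))            # stand-in for F(z) at one z

# full probing (10): V^T f(z) with f(z) = vec(F(z)) and V of size n^2 x ell
V = rng.standard_normal((n * n, ell))
g_full = V.T @ F.reshape(-1, order="F")    # vec() stacks columns

# tensorized probing (11): (v_i (x) u_i)^T vec(F(z)) = u_i^T F(z) v_i,
# so only matrix-vector products with F(z) are needed
U = rng.standard_normal((n, ell))
W = rng.standard_normal((n, ell))
g_tens = np.array([U[:, i] @ (F @ W[:, i]) for i in range(ell)])

# the Kronecker identity behind the tensorized variant, checked explicitly
k0 = np.kron(W[:, 0], U[:, 0]) @ F.reshape(-1, order="F")
```

The tensorized variant never forms a length-n² probing vector, which is exactly why it pairs well with sparse or structured F(z).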

Sometimes using as few as two probing vectors leads to very satisfactory results. This is indeed the case for our example in Figure 1, where the approximation errors of both surrogates (10) and (11) are shown in orange. Both approximants converge rapidly to a small absolute error. Remarkably, the approximation error of the surrogates is essentially identical to the error of the set-valued AAA approximant applied to the original function f, which maps into ℂ^N; see Figure 1 (right).

Since the surrogates are random, the resulting rational approximants are also random. Fortunately, their approximation errors concentrate quickly. This is clearly visible in Figure 2 for the buckling_plate and nep2 examples from the NLEVP collection, showing 50-percentile bands that are very narrow. As a result, the random construction of the surrogates yields rational approximants that, with high probability, all have very similar approximation errors. Figure 2 also demonstrates that two probing vectors do not always suffice: for the nep2 problem, more probing vectors are required to attain a relative approximation error close to machine precision.

Many more such examples are discussed in section 4, and in almost all cases a relatively small number of probing vectors suffices to obtain accurate approximations. The next section provides some theoretical insights into this.

## 3 Analysis of random sketching for function approximation

In this section we analyze the effect of random sketching on the approximation error. More precisely, we show that for a given fixed approximation A(f) of f, the corresponding approximation of the sketch V^T f enjoys, with high probability, similar accuracy. Let us emphasize that this setting does not fully capture Algorithm 1 because the approximation constructed by the algorithm depends on the random matrix V in a highly nontrivial way. Nevertheless, we feel that the analysis explains the behavior of the algorithm for increasing ℓ and, more concretely, it allows us to cheaply and safely estimate the full error a posteriori via a surrogate (with an independent sketch).

### 3.1 Preliminaries

Let us consider a vector-valued function h: Ω → K^N (K = ℝ or ℂ) on some domain Ω with square-integrable components. We can equivalently view h as an element of the tensor product space K^N ⊗ L²(Ω), which induces a linear operator from L²(Ω) to K^N. Note that this is a Hilbert–Schmidt operator of rank at most N. Applying the Schmidt decomposition [Hackbusch2019, Theorem 4.137] implies the following result.

###### Theorem 3.1.

Let h: Ω → K^N with ∥h∥ < ∞. There exist orthonormal vectors u_1, …, u_N ∈ K^N, orthonormal functions v_1, …, v_N ∈ L²(Ω), and scalars σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_N ≥ 0 such that

 $h(z) = \sum_{j=1}^{N} \sigma_j u_j v_j(z).$

We note that the singular values σ_j are uniquely defined by h. By Theorem 3.1, the norm of h on Ω satisfies

 $\|h\|^2 := \int_\Omega \|h(z)\|_2^2 \, \mathrm{d}z = \sigma_1^2 + \cdots + \sigma_N^2,$

where ∥·∥₂ denotes the Euclidean norm of a vector. Extending the corresponding notion for matrices, the stable rank of h is defined as ρ(h) := ∥h∥²/σ₁² and satisfies 1 ≤ ρ(h) ≤ N.
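As a quick numerical illustration (our own toy example), the stable rank can be far smaller than the ordinary rank when the singular values decay:

```python
import numpy as np

# stable rank: rho = ||h||^2 / sigma_1^2 = (sum_j sigma_j^2) / sigma_1^2
A = np.diag([10.0, 1.0, 0.1, 0.01])        # rapidly decaying singular values
s = np.linalg.svd(A, compute_uv=False)
rho = (s**2).sum() / s[0]**2               # ~1.01, close to its lower bound 1
rank = np.linalg.matrix_rank(A)            # 4: the usual rank counts all modes
```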

Our analysis applies to an abstract approximation operator A defined on some subspace D of functions mapping Ω to K^N. We assume that A commutes with linear maps, that is,

 $\mathcal{A}(Bf) = B\,\mathcal{A}(f), \quad \forall B \in K^{N \times N}.$ (12)

This property holds for any approximation operator of the form

 $\mathcal{A}(f) = \sum_{i=0}^{d} f(z_i) L_i(\cdot)$

for fixed points z_i ∈ Ω and scalar functions L_i, with the tacit assumption that functions in D allow for point evaluations. In particular, this includes polynomial and rational interpolation provided that the interpolation points and poles are considered fixed. The relation (12) implies V^T A(f) = A(V^T f) for any V ∈ K^{N×ℓ}, where A on the right-hand side denotes the corresponding approximation operator for functions with ℓ components. With a slight abuse of notation we will simply write A for both.
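The commuting property (12) is easy to verify numerically for a concrete interpolatory A; a minimal NumPy check with linear interpolation between two nodes (our own toy example, not from the paper):

```python
import numpy as np

# A interpolates linearly between z0 = 0 and z1 = 1, i.e. it has the form
# A(f) = sum_i f(z_i) L_i with L_0(z) = 1 - z and L_1(z) = z
def A(f):
    return lambda z: f(0.0) * (1.0 - z) + f(1.0) * z

f = lambda z: np.array([np.exp(z), np.cos(z), z**3])
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, -1.0],
              [1.0, 0.0, 1.0]])

z = 0.3
lhs = A(lambda s: B @ f(s))(z)   # A(Bf) evaluated at z
rhs = B @ A(f)(z)                # B A(f) evaluated at z: identical by (12)
```

The same identity with a rectangular B = V^T is what lets the analysis transfer error bounds between f and the sketch V^T f.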

### 3.2 Full (non-tensorized) probing vectors

The following theorem treats surrogates of the form (10). It constitutes an extension of existing results [gratton2018improved, MR1337645] on small-sample matrix norm estimation.

###### Theorem 3.2.

With the notation introduced in section 3.1, let ρ denote the stable rank of h := f − A(f) for f ∈ D, and let V ∈ K^{N×ℓ} be a real (for K = ℝ) or complex (for K = ℂ) Gaussian random matrix, scaled such that cℓ∥V^T u∥₂² is a chi-squared random variable with cℓ degrees of freedom for every fixed unit vector u, where c = 1 for K = ℝ and c = 2 for K = ℂ. Then for any τ > 1 we have

 $\mathrm{Prob}\big( \|f - \mathcal{A}(f)\| \ge \tau \|V^T f - \mathcal{A}(V^T f)\| \big) \le \frac{1}{\Gamma(c\ell/2)}\,\gamma\big(c\ell/2,\, c\ell\rho\tau^{-2}/2\big),$ (13)

where γ denotes the lower incomplete gamma function, and

 $\mathrm{Prob}\big( \|f - \mathcal{A}(f)\| \le \tau^{-1} \|V^T f - \mathcal{A}(V^T f)\| \big) \le \exp\big(-c\ell\rho(\tau-1)^2/2\big).$ (14)
###### Proof.

Applying the Schmidt decomposition from Theorem 3.1 to h = f − A(f) and using (12) yields

 $\|V^T f - \mathcal{A}(V^T f)\|^2 = \|V^T h\|^2 = \sum_{j=1}^{N} \sigma_j^2 \|V^T u_j\|_2^2.$ (15)

By the orthonormality of u_1, …, u_N and the unitary invariance of Gaussian random matrices, it follows that χ_j²(cℓ) := cℓ∥V^T u_j∥₂² are mutually independent chi-squared random variables with cℓ degrees of freedom. Following well-established arguments [gratton2018improved, Roosta-K], we obtain

 $\mathrm{Prob}\big(\|h\| \ge \tau \|V^T h\|\big) = \mathrm{Prob}\big( c\ell \|V^T h\|^2 \le c\ell\tau^{-2} \|h\|^2 \big)$
 $\quad = \mathrm{Prob}\big( \textstyle\sum_{j=1}^{N} \sigma_j^2 \chi_j^2(c\ell) \le c\ell\tau^{-2} \|h\|^2 \big)$
 $\quad \le \mathrm{Prob}\big( \sigma_1^2 \chi_1^2(c\ell) \le c\ell\tau^{-2} \|h\|^2 \big)$
 $\quad = \mathrm{Prob}\big( \chi_1^2(c\ell) \le c\ell\rho\tau^{-2} \big)$
 $\quad = \frac{1}{\Gamma(c\ell/2)}\,\gamma\big(c\ell/2,\, c\ell\rho\tau^{-2}/2\big),$

which proves (13).

The inequality (14) follows directly from the proof of [gratton2018improved, Theorem 3.1], which establishes

 $\mathrm{Prob}\Big( \frac{1}{c\ell} \sum_{j=1}^{N} \sigma_j^2 \chi_j^2(c\ell) \ge \tau^2 \|h\|^2 \Big) \le \exp\big(-c\ell\rho(\tau-1)^2/2\big)$

and thus implies (14). ∎

To provide some intuition on the implications of Theorem 3.2 for the complex case (c = 2), let us first note that

 $\Gamma(k) = (k-1)!, \qquad \gamma(k, \alpha) = \int_0^\alpha t^{k-1} e^{-t} \, \mathrm{d}t \approx \frac{\alpha^k}{k} \ \text{ for } \alpha \approx 0.$

Setting k = cℓ/2 = ℓ, this shows that the failure probability in (13) is asymptotically proportional to (ℓρτ⁻²)^ℓ/ℓ! and, in turn, increasing ℓ will drastically reduce the failure probability provided that τ² is sufficiently larger than ρ. Concretely, if Algorithm 1 returns an approximant that features a small error for a surrogate with ℓ components, then the probability that the approximation error for the original function f is more than ten times larger is bounded by γ(ℓ, ℓρ/100)/Γ(ℓ), which decays rapidly as ℓ grows; the probability that the error is more than a hundred times larger is smaller still. On the other hand, if there exists a good approximant for f, then (14) shows that it is almost guaranteed that the surrogate function admits a nearly equally good approximant (which is hopefully found by the AAA algorithm): the probability that the error of the surrogate approximant is more than τ times larger than that of the approximant for the original function is bounded by exp(−ℓρ(τ−1)²).
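The tail behavior behind (13) is easy to probe empirically. The following Monte Carlo sketch (our own illustration, with a real Gaussian sketch) simulates the event that ∥V^T h∥ underestimates ∥h∥ by a factor τ and shows its probability dropping quickly as ℓ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau, trials = 50, 10.0, 5000
h = rng.standard_normal(N)                 # plays the role of h = f - A(f)

def underestimate_prob(ell):
    # real Gaussian V with N(0, 1/ell) entries, so E||V^T h||^2 = ||h||^2;
    # count how often the sketch is smaller than ||h|| by a factor tau
    V = rng.standard_normal((N, ell, trials)) / np.sqrt(ell)
    sketch_norms = np.linalg.norm(np.einsum("nlt,n->lt", V, h), axis=0)
    return np.mean(sketch_norms <= np.linalg.norm(h) / tau)

p1 = underestimate_prob(1)   # roughly Prob(|N(0,1)| <= 0.1), about 0.08
p4 = underestimate_prob(4)   # orders of magnitude smaller
```

This matches the qualitative message of Theorem 3.2: a single probing vector fails with non-negligible probability, while even a handful of probing vectors makes severe underestimation extremely unlikely.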

###### Remark 3.3.

A large stable rank ρ would lead to a less favourable bound (13), but there is good reason to believe that the stable rank of h remains modest in situations of interest. The algorithms discussed in this work are most meaningful when f admits good rational approximants, that is, when the best approximation error of f by rational functions of degree d decreases rapidly as d increases. In fact, this error decreases exponentially fast when f is analytic in an open neighborhood of the target set (this is even true when the infimum is taken over the smaller set of polynomials). As a rational function of degree d has rank at most d+1 when viewed as an element of the tensor product space, the (d+2)nd singular value of f is bounded by the best rational approximation error of degree d. This implies a rapid decay of the singular values and hence a small stable rank of f and, likewise, of h.

In practice, it may be convenient to use a real random matrix V for a complex-valued function f, for example, if the split form (2) has real coefficient matrices A_i and complex-valued functions f_i. Theorem 3.2 extends to this situation by applying it separately to the real and imaginary parts of f and using a union bound.

### 3.3 Tensorized probing vectors

The analysis of the surrogate (11) with tensorized probing vectors is significantly more complicated because tensorized random vectors fail to remain invariant under orthogonal transformations, an important ingredient in the proof of Theorem 3.2. As a consequence, the following partial results cover the case ℓ = 1 only. They are direct extensions of analogous results for matrix norm estimation [BujanovicKressner2021].

###### Theorem 3.4.

In the setting of Theorem 3.2 with K = ℝ and ℓ = 1, consider the probing vector V = v ⊗ u for real Gaussian random vectors u, v ∈ ℝ^n. Then for any τ > 1 we have

 $\mathrm{Prob}\big( \|f - \mathcal{A}(f)\| \ge \tau\sqrt{\rho}\, \|V^T f - \mathcal{A}(V^T f)\| \big) \le \frac{2}{\pi}\big(2 + \ln(1 + 2\tau)\big)\tau^{-1}$ (16)

and

 $\mathrm{Prob}\big( \|f - \mathcal{A}(f)\| \le \tau^{-1} \|V^T f - \mathcal{A}(V^T f)\| \big) \le \sqrt{2\tau}\, \exp(-\tau + 2).$ (17)
###### Proof.

Setting h = f − A(f) and applying (15) yields

 $\mathrm{Prob}\big( \|h\| \ge \tau\sqrt{\rho}\, \|V^T h\| \big) = \mathrm{Prob}\Big( \sigma_1^2 \ge \tau^2 \sum_{j=1}^{N} \sigma_j^2 \|V^T u_j\|_2^2 \Big) \le \mathrm{Prob}\big( \|V^T u_1\|_2^2 \le \tau^{-2} \big).$

The last expression has been analyzed in the proof of Theorem 2.2 in [BujanovicKressner2021], where it is shown to satisfy the bound claimed in (16). Similarly, it follows from the proof of Theorem 2.4 in [BujanovicKressner2021] that the quantity

 $\mathrm{Prob}\big( \|h\| \le \tau^{-1} \|V^T h\| \big) = \mathrm{Prob}\Big( \tau^2 \sum_{j=1}^{N} \sigma_j^2 \le \sum_{j=1}^{N} \sigma_j^2 \|V^T u_j\|_2^2 \Big)$

satisfies the bound claimed in (17). ∎

While both failure probability bounds of Theorem 3.4 tend to zero as τ → ∞, the convergence predicted by (16) is rather slow. It remains an open problem to establish better rates for tensorized probing with ℓ > 1.

## 4 Applications and numerical experiments

Algorithm 1 was implemented in MATLAB. The set-valued AAA (SV-AAA) implementation we compare to (and which is also needed in Step 3 of Algorithm 1) is a modification of the original MATLAB code from [lietaert2018automatic]. The SV-AAA code implements an updating strategy for the singular value decomposition of the Löwner matrix defined in (7) to avoid its full recomputation when the degree is increased from d−1 to d. All default options in SV-AAA are preserved except that the expansion points for AAA are greedily selected based on the maximum norm instead of the 2-norm. In addition, the stopping condition is based on the relative maximum norm of the approximation that SV-AAA builds over the whole sampling set Σ. So, for example, if SV-AAA is applied to the scalar functions f_i with a stopping tolerance reltol, then the algorithm terminates when the computed rational approximations r_i^{(d)} satisfy

 $\frac{\max_{z \in \Sigma} \max_i |f_i(z) - r_i^{(d)}(z)|}{\max_{z \in \Sigma} \max_i |f_i(z)|} \le \texttt{reltol}.$

The experiments were run on an Intel i7-12700 with 64 GB RAM. The software to reproduce our experiments is publicly available.

### 4.1 NLEVP benchmark collection

We first test our approach on the non-polynomial problems in the NLEVP benchmark collection [betcke2013nlevp, higham2019updated] considered in [NGT20]. Table 1 summarizes the key characteristics of these problems, including the matrix size n and the number s of terms in the split form (2). The target set is a disc or half disc specified in a meaningful way for each problem separately; see [NGT20, Table 3] for details. We follow the procedure from [NGT20] for generating the sampling points of the target set: 300 interior points are obtained by randomly perturbing a regular point grid covering a disc or half disc and another 100 points are placed uniformly on the boundary. This gives a total of 400 points for the set Σ.

#### 4.1.1 Small NLEVPs

We first focus on the small problems from the NLEVP collection, that is, those of matrix size below 1000 (larger problems are considered separately in the following section). Tables 2 and 3 summarize the results for two different stopping tolerances reltol. For each problem we show both the degree d and the attained relative Σ-uniform approximation error

 $\mathrm{relerr} = \frac{\max_{z \in \Sigma} \max_{ij} |F_{ij}(z) - R^{(d)}_{ij}(z)|}{\max_{z \in \Sigma} \max_{ij} |F_{ij}(z)|}$

of four algorithmic variants:

- SV-AAA (split) refers to the set-valued AAA algorithm applied to the scalar functions f_i in the split form (2) of each problem.
- SV-AAA (full) refers to the set-valued AAA algorithm applied to all entries of the matrix F, which is only practical for relatively small problems.
- sketchAAA is used with two different (small) sketch sizes ℓ and full (non-tensorized) probing vectors. The reported degrees and errors are averaged over 10 random realizations of the probing vectors.

We find that in all considered cases, the error achieved by sketchAAA with the larger number of probing vectors is very close to the stopping tolerance. This is achieved without ever needing a degree significantly higher than that required by SV-AAA (split) and SV-AAA (full), an important practical consideration for solving NLEVPs (see section 4.2.1 below). No timings are reported here because all algorithms return their approximations within a few milliseconds.

#### 4.1.2 Large NLEVPs

This section considers the large problems from the NLEVP collection listed in Table 1, all with problem sizes above 1000. More precisely, we consider problems 6 (size 2400), 16 (size 1410), 17 (size 1005), and 24 (size 9956). For such problem sizes, the application of set-valued AAA to each entry of F is no longer feasible and hence this algorithm is not reported. In order to simulate a truly black-box sampling of these eigenvalue problems when using full (non-tensorized) probing vectors as in (10), we use the MATLAB function shown in Algorithm 2. This function obtains the samples without forming the large Gaussian random matrix explicitly. Instead, the sparsity pattern of F(z) is inferred on the fly as the sampling proceeds. In the case of tensorized probing as in (11), we can exploit sparsity more easily by computing the matrix-vector products F(z)v_i, followed by the inner products with the vectors u_i.

The execution times are now more significant and are reported in Tables 4 and 5, together with the required degree d and the achieved relative approximation error. Table 5 shows that tensorized sketches lead to a faster algorithm with similar accuracy compared to the full sketches reported in Table 4. As for the small NLEVP problems in the previous section, we find that sketchAAA with a small number of probing vectors is reliable and yields an approximation error close to the stopping tolerance reltol.

We note that these four problems are also available in split form (2) and both the non-tensorized and tensorized probing can be further sped up (sometimes significantly) by precomputing the products of the random vectors with the matrices A_i. This is particularly the case for the gun problem number 24, for which sketchAAA spends most of its time on the evaluation of F(z) at the support points. Exploiting the split form reduces this time drastically, which can be seen from the rows in Tables 4 and 5 labelled with "24*". This case is particularly interesting as, coincidentally, the problem has s = 4 terms in its split form and, in turn, the set-valued approximations for SV-AAA (split) and sketchAAA with four probing vectors both involve four functions. For tensorized sketching (Table 5), sketchAAA is faster than SV-AAA (split) while returning an accurate approximation of lower degree. This nicely demonstrates that our approach to exploiting the split form is genuinely different from the set-valued AAA algorithm in [lietaert2018automatic]: sketchAAA takes the contributions of the coefficient matrices A_i into account while the set-valued AAA algorithm only approximates the scalar functions f_i and is blind to the A_i. The following section illustrates this further.

### 4.2 An artificial example on the split form

The difference in the approximation degrees returned by SV-AAA (split) and sketchAAA becomes particularly pronounced when cancellations occur between the different terms of the split form or when the coefficient matrices are of significantly different scales. To demonstrate this effect by an extreme example, consider

 $F(z) = 10^{-8}|z| \cdot B + \sin(\pi z) \cdot C,$

where B and C are random matrices of unit spectral norm. For the split form, we simply take the functions f₁(z) = 10⁻⁸|z| and f₂(z) = sin(πz). The sampling set Σ contains 100 equidistant points on the real target interval. Since SV-AAA scales the functions to have unit ∞-norm on Σ, the results would remain the same if we took f₁(z) = |z| and f₂(z) = sin(πz). In Table 6, we clearly see that SV-AAA overestimates the degree since it puts too much emphasis on resolving the nonsmooth but negligibly scaled function f₁.

#### 4.2.1 Impact on numerical solution of NLEVP

As explained in [GT17, Section 6] and further analyzed in [NGT20, Section 2], an accurate uniform rational approximation on the target set is crucial for a robust linearization-based NLEVP solver. Once the rational approximant R^{(d)} ≈ F is obtained, it can be linearized in various ways; see, e.g., [lietaert2018automatic, EG19, NGT20]. Specifically, Theorem 3 in [EG19] derives a (strong) linearization of a rational matrix function in barycentric form (4), as returned by sketchAAA. Eigenvalues of R^{(d)} in the target set can then be computed from the linearization, e.g., iteratively by applying a rational Krylov subspace method to it. This in turn yields approximations to the eigenvalues of F. As the size of the linearization and, in turn, the cost of this approach increase linearly with the degree d, there is clearly an advantage gained from the fact that sketchAAA often yields a rational approximation of smaller degree compared to SV-AAA.

### 4.3 Scattering problem

We apply sketchAAA to the Helmholtz equation with absorbing boundary conditions describing a scattering problem on the unit disc; see [Pradoverathesis, Sec. 5.5.4]. The vector-valued function f containing the discretized solution for a wavenumber z is no longer given in split form and depends rationally on z:

 $f(z) = (K - \mathrm{i}zC - z^2 M)^{-1} b.$

Here, the stiffness matrix K, the damping matrix C, and the mass matrix M are real non-symmetric sparse matrices of size 20054, obtained from a finite element discretization. The (complex) entries of f(z) contain the nodal values of the finite element solution. The Euclidean norm of f(z) for real wavenumbers z in the target interval is depicted in Figure 3. Although there are no poles on the real axis, some are quite close to it, resulting in large peaks in ∥f(z)∥₂. We therefore expect that a rather large degree for the rational approximant of f will be needed.
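For intuition, each black-box sample of such a function amounts to one (sparse) linear solve. A small dense NumPy stand-in with random toy matrices (our own example, not the actual FEM data of size 20054):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
K = 5.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # toy stand-ins for
C = 0.05 * rng.standard_normal((n, n))                    # the stiffness,
M = 0.05 * rng.standard_normal((n, n))                    # damping, mass
b = rng.standard_normal(n)

def f(z):
    # one black-box sample of f(z) = (K - i z C - z^2 M)^{-1} b
    return np.linalg.solve(K - 1j * z * C - z**2 * M, b)

# samples at a few wavenumbers, as consumed by (sketch)AAA
Z = np.linspace(1.0, 2.0, 5)
samples = np.column_stack([f(z) for z in Z])
```

In the actual experiment the matrices are sparse, so each sample would be obtained with a sparse direct solver instead of a dense solve.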

The computational results of sketchAAA applied to f are reported in Table 7. The set Σ contains 400 equidistant points in the target interval. We observe that a large degree is indeed needed to get high accuracy. While a small sketch size performs decently, a larger ℓ is needed so that the error of the approximant R^{(d)} is comparable to that of the surrogate. This behavior is reflected in our analysis in section 3.2: according to Remark 3.3, slower convergence of rational approximations signals a larger stable rank, which in turn leads to less favorable probabilistic bounds in Theorem 3.2 that are compensated by increasing ℓ. However, let us stress that even for the largest reported sketch size the rational approximation can be computed very quickly and it is still significantly faster than applying SV-AAA without sketching.

The timings in Table 7 do not include the evaluation of f and of the error on the sampling set Σ, which is needed for all methods regardless of sketching. Since a large linear system has to be solved for each z, evaluating f is expensive. One of the benefits of rational approximation is that the approximant can be evaluated much faster: the most accurate approximant in Table 7 can be evaluated in less than 0.002 seconds, whereas evaluating f requires 0.2 seconds.

### 4.4 Boundary element matrices

Our last example is a nonlinear eigenvalue problem that arises from the boundary integral formulation of an elliptic PDE eigenvalue problem [Steinbach2009]. More specifically, we consider the Laplace eigenvalue problem on the Fichera corner from [Effenberger2012a, Section 4]. Applying the boundary element method (BEM) to this problem results in a matrix-valued function F that is dense and not available in split form. Also, the entries of F(z) are expensive to evaluate. Note, however, that hierarchical matrix techniques could be used to significantly accelerate the assembly, resulting in a representation of F(z) that allows for fast matrix-vector products [SauterSchwab]. Usually, the smallest (real) eigenvalues of F are of interest. As the location of the smallest eigenvalue is roughly known, we consider z in a real interval containing it, discretized by 200 equidistant points.

The number of boundary elements determines the size n of the matrix F(z). We present two sets of numerical experiments depending on whether we can store the evaluations F(z) for all sampling points z ∈ Σ on our machine with 64 GB RAM.

##### Storage possible

The largest problem size that allows for storing all necessary evaluations of is . The computational results are reported in Table 8. As for the scattering problem, we see that larger sketch sizes are needed, but sketchAAA remains very fast and accurate. For example, with and a stopping tolerance of 1e-08, sketchAAA is about 220 times faster than set-valued AAA applied to and achieves comparable accuracy. For a stopping tolerance of 1e-12, it is about 350 times faster.

In all cases of Table 8, we excluded the 3.2 seconds it took to evaluate for all . Both the set-valued AAA and the sketchAAA method require these evaluations.

##### Storage not possible

For larger problems, is evaluated when needed but never stored for all . (A considerable part of the cost in assembling the BEM matrix can be amortized when evaluating a few at the same time. We therefore evaluate and store 20 values at once and perform, where possible, all computations on this block before moving on to the next.) In Table 9 we list the results for tolerance . The degrees and timings for dense and tensorized sketches are very similar, so we only report them for tensorized sketches; the errors are shown for both variants, even though they too are similar.
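
The blocked on-the-fly evaluation described above can be organized along the following lines. This is a schematic with hypothetical names and sizes (a cheap stand-in replaces the BEM assembly), not the authors' implementation; the essential point is that only the sketches are retained while the full matrices are discarded block by block:

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell, block = 120, 6, 20          # illustrative sizes; blocks of 20 as above
V = rng.standard_normal((n, ell))   # fixed probing vectors

def F(lam):
    """Stand-in for the expensive on-the-fly BEM assembly."""
    return np.eye(n) * (1.0 + lam)

lambdas = np.linspace(4.0, 16.0, 200)
sketches = np.empty((len(lambdas), n, ell))

for start in range(0, len(lambdas), block):
    lams = lambdas[start:start + block]
    mats = [F(l) for l in lams]          # assemble one block at a time
    for k, M in enumerate(mats):
        sketches[start + k] = M @ V      # keep only the sketch ...
    del mats                             # ... and discard the full matrices

print(sketches.shape)  # (200, 120, 6)
```

The peak memory is then governed by one block of full matrices plus the sketches, rather than by all 200 evaluations at once.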

We observe that the runtime of the whole algorithm is considerably higher. As expected, it grows with . The degree and the error, on the other hand, remain very similar to those for the smaller problem, and the main conclusion stands: for the sketchAAA approximation is not accurate, but already for we obtain satisfactory results at the cost of only a slight increase in runtime. In addition, even taking a large number of sketches is computationally feasible, whereas applying SV-AAA to the original problem is far beyond what is possible on a normal desktop.

## 5 Conclusions

We have presented and analyzed a new randomized sketching approach that allows for the fast and efficient rational approximation of large-scale vector- and matrix-valued functions. Compared to the original surrogate-AAA approach in [EG19], our method sketchAAA reliably achieves high approximation accuracy by using multiple (tensorized or non-tensorized) probing vectors. We have demonstrated the method's performance on a number of nonlinear functions arising in several applications. Compared to the set-valued AAA method from [lietaert2018automatic], our method works efficiently when the split form of the function to approximate has a large number of terms, and even when the problem is only accessible via function evaluations. We believe that sketchAAA is the first rational approximation method that combines these advantages.

While our focus was on AAA and NLEVPs, let us highlight once more that our sketching approach is not limited to such settings. In principle, any linear (e.g., polynomial or fixed-denominator rational) approximation scheme applied to a large-scale vector-valued function can be accelerated by this approach. For example, current efforts are underway to develop a sketched RKFIT method [berljafa2017rkfit] as a replacement of the surrogate-AAA eigensolver in the MATLAB Rational Krylov Toolbox (the latter of which currently uses only a single sketching vector).

We believe that there are a number of interesting research directions arising from this work. This includes a possible extension to the multivariate case. A multivariate p-AAA algorithm has recently been proposed in [rodriguez2020p], but it is not immediately clear whether the sketching idea pursued here can be extended to this algorithm. Another potential improvement of sketchAAA in the case of many sampling points is to replace the SVDs for the least-squares problems (6) by another sketching-based least-squares solver such as [rokhlin2008fast], similarly to what has been done in [nakatsukasa2022fast]. Finally, we hope that the analysis provided in this paper might shed some more light on the accuracy of contour integral-based solvers for linear eigenvalue problems. These methods can be viewed as pole finders for the resolvent after random tensorized probing of the form .
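
The pole-finding viewpoint in the last remark can be made concrete in a toy setting. In the following hedged illustration (a diagonal matrix with known eigenvalues, chosen by us for clarity), the probed resolvent is a scalar rational function whose poles are exactly the eigenvalues, which is what a rational approximation scheme would recover:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = np.diag(np.arange(1.0, n + 1))   # eigenvalues 1, 2, ..., 6
u = rng.standard_normal(n)
v = rng.standard_normal(n)

def probed_resolvent(z):
    """Scalar probing u^T (zI - A)^{-1} v of the resolvent."""
    return u @ np.linalg.solve(z * np.eye(n) - A, v)

# For diagonal A the partial-fraction form is
#     sum_i u_i * v_i / (z - lambda_i),
# so the poles of the probed resolvent are the eigenvalues of A.
z = 3.7 + 0.2j
pf = sum(u[i] * v[i] / (z - (i + 1)) for i in range(n))
print(np.isclose(probed_resolvent(z), pf))  # True
```

For a general diagonalizable matrix the residues become products of left and right eigenvector components with the probing vectors, but the pole locations are unchanged.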

## Acknowledgments

We thank Davide Pradovera for providing us with the code for the scattering problem considered in section 4.3.