1 Introduction
This work is concerned with approximating vector-valued and matrix-valued functions, with a particular focus on functions that arise in the context of nonlinear eigenvalue problems (NLEVP)

(1) $F(\lambda)v = 0$, $v \neq 0$,

for a matrix-valued function $F \colon \mathbb{C} \to \mathbb{C}^{n \times n}$.
Recent algorithmic advances have made it possible to efficiently compute an accurate rational approximant of a scalar function on a compact (and usually discrete) target set in the complex plane. Some of the available methods are the adaptive Antoulas–Anderson (AAA) algorithm [nakatsukasa2018aaa], the rational Krylov fitting (RKFIT) algorithm [berljafa2017rkfit], vector fitting [gustavsen1999rational], minimal rational interpolation [Pradovera2020], and methods based on Löwner matrices [mayo2007framework]. All of these methods can be used or adapted to approximate multiple scalar functions on the target set simultaneously. In particular, there are several variants of AAA that approximate multiple functions by a family of rational interpolants sharing the same denominator, including the set-valued AAA algorithm [lietaert2018automatic], fastAAA [hochman2017fastaaa], and weighted AAA [NGT20]. See also [EG21] for a discussion and comparison of some of these methods.

AAA-type algorithms can be used with very little user input and have enabled an almost black-box approximation of NLEVPs or transfer functions in model order reduction. However, the computation of rational interpolants via the set-valued AAA algorithm involves the solution of least-squares problems with dense matrices whose sizes grow with both the number of functions and the degree. The greedy search of interpolation nodes in AAA also requires the repeated evaluation of the rational interpolants at all sampling points in the target set and the storage of the corresponding function values. As a consequence, the main use case for the multiple-function AAA variants to date has been problems that can be written in the split form
(2) $F(z) = f_1(z) A_1 + f_2(z) A_2 + \cdots + f_s(z) A_s$,

where $s$ is small, the $f_j$ are known scalar functions, and the $A_j$ are fixed coefficient matrices. While, in principle, it is always possible to write an arbitrary matrix-valued function in split form using $n^2$ terms (one per entry), it would be prohibitive to apply the set-valued AAA approach to large-scale problems in such a naive way.
The work [EG19] suggested an alternative approach where the original (scalar-valued) AAA algorithm is applied to a scalar surrogate $f(z) = u^{\mathsf T} F(z) v$ with random probing vectors $u$ and $v$, resulting in a rational interpolant in barycentric form

(3) $r(z) = \sum_{k=1}^{d} \dfrac{w_k f(z_k)}{z - z_k} \Big/ \sum_{k=1}^{d} \dfrac{w_k}{z - z_k}$

with support points $z_1, \dots, z_d$ (the interpolation nodes) and weights $w_1, \dots, w_d$. Using this representation, a rational interpolant $R$ of the original function $F$ is then obtained by replacing in (3) every occurrence of $f(z_k)$ by the evaluation $F(z_k)$:

(4) $R(z) = \sum_{k=1}^{d} \dfrac{w_k F(z_k)}{z - z_k} \Big/ \sum_{k=1}^{d} \dfrac{w_k}{z - z_k}.$
The intuition is that $f$ and $F$ will almost surely have the same region of analyticity, hence interpolating $F$ using the same interpolation nodes and poles as for $f$ should result in a good approximant. This surrogate approach indeed alleviates the complexity and memory issues even when $F$ has a large number of terms in its split form (2), and it can also be applied if $F$ is only available as a black-box function returning evaluations $F(z)$. However, a comprehensive numerical comparison in [NGT20] in the context of solving NLEVPs (1) has revealed that this surrogate approach is not always reliable and may lead to poor accuracy. Indeed, the uniform error for the original problem

(5) $\max_{z \in \Sigma} \| F(z) - R(z) \|$

on the target set $\Sigma$ can be significantly larger than the corresponding error for the scalar surrogate problem. In order to mitigate this issue for black-box functions, i.e., those not available in split form (2), a two-phase surrogate AAA algorithm with cyclic Leja–Bagby refinement has been developed in [NGT20]. While this algorithm is indeed robust in the sense that it returns a rational approximant with user-specified accuracy, it is computationally more expensive than the set-valued AAA algorithm and sometimes returns approximants of unnecessarily high degree; see, e.g., [NGT20, Table 5.2].
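To make the barycentric construction underlying the surrogate approach concrete, the following Python sketch evaluates a barycentric rational interpolant for scalar as well as vector-valued samples. (The paper's own code is MATLAB; this illustration, including the function name and the toy weights, is ours.)

```python
import numpy as np

def bary_eval(z, nodes, weights, values):
    """Evaluate the barycentric rational interpolant
    r(z) = sum_k w_k f(z_k)/(z - z_k) / sum_k w_k/(z - z_k), cf. (3).
    `values` holds the samples f(z_k); stacking vectors (or flattened
    matrices F(z_k)) along the first axis gives the variant (4)."""
    c = weights / (z - nodes)              # Cauchy weights w_k / (z - z_k)
    return np.tensordot(c, values, axes=1) / c.sum()

nodes = np.array([0.0, 1.0, 2.0])
weights = np.array([1.0, -2.0, 1.0])       # illustrative weights only
values = np.exp(nodes)                     # scalar samples f(z_k) = e^{z_k}

# Interpolation property: approaching a support point reproduces the sample
err = abs(bary_eval(1.0 + 1e-10, nodes, weights, values) - np.e)
```

The same routine evaluates the matrix-valued variant once the scalar samples are replaced by (flattened) matrix samples, which is exactly the replacement described above.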
In this work, we propose and analyze a new sketching approach for matrix-valued functions called sketchAAA that, with high probability, leads to much better approximants than the scalar surrogate AAA approach. At the same time, it remains equally efficient. While we demonstrate the benefits of the sketching approach in combination with the set-valued AAA algorithm and mainly test it on functions from the updated NLEVP benchmark collection [betcke2013nlevp, higham2019updated], the same technique can be combined with other linear interpolation techniques (including polynomial interpolation at adaptively chosen nodes). Our sketching approach is not limited to nonlinear eigenvalue problems either and can be used for the approximation of any vector-valued function. The key idea is to sketch $F$ using a thin (structured or unstructured) random probing matrix, i.e., to compute a small number $\ell$ of random linear combinations of the entries of $F(z)$, and then to apply the set-valued AAA algorithm to the components of the samples. We provide a probabilistic assessment of the approximation quality of the resulting samples by building on the improved small-sample bounds of the matrix Frobenius norm in [gratton2018improved].
The remainder of this work is organized as follows. In section 2 we briefly review the AAA algorithm [nakatsukasa2018aaa] and the surrogate approach from [EG19], and then introduce our new probing forms using either tensorized or non-tensorized probing vectors. We also provide a pseudocode of the resulting sketchAAA algorithm. Section 3 is devoted to the analysis of the approximants produced by sketchAAA. Our main result for the case of non-tensorized probing, Theorem 3.2, provides an exponentially converging (in the number of probing vectors $\ell$) probabilistic upper bound on the approximation error of the sketched problem compared to the approximation of the full problem. We also provide a weaker result for tensorized probing in the case $\ell = 1$, covering the original surrogate approach in [EG19]. Several numerical tests in section 4 on small- to large-scale problems show that our new sketching approach is reliable and efficient. For some of the large-scale black-box problems we report speedup factors above 200 compared to the set-valued AAA algorithm while retaining comparable accuracy.

2 Surrogate functions and the AAA algorithm
The AAA algorithm [nakatsukasa2018aaa] is a relatively simple but usually very effective method for obtaining a good rational approximant of the form (3) for a given scalar function $f$. The approximation is sought on a finite discretization $\Sigma$ of the target set, and large discretization sizes are not uncommon. The method iteratively selects support points (interpolation nodes) by adding to a previously computed set of support points a new point at which the error of the current approximant is maximal. After that, the rational approximant is updated by computing weights as the minimizer of the linearized approximation error

(6) $\min_{\|w\|_2 = 1} \| L w \|_2.$

Here, $L$ is a Löwner matrix defined entrywise as

(7) $L_{ij} = \dfrac{f(\zeta_i) - f(z_j)}{\zeta_i - z_j},$

where the $\zeta_i$ are the points of $\Sigma$ that are not support points. The minimization problem (6) can be solved exactly by an SVD of $L$. Since the matrix $L$ is only altered in one row and column after each iteration, updating strategies can be used to lower the computational cost of its SVD; see, e.g., [hochman2017fastaaa, lietaert2018automatic]. However, the numerical stability of such updating strategies is not guaranteed.
It is not difficult to extend AAA to the approximation of a vector-valued function $\mathbf f$. Firstly, the vector-valued version of (3) maps into the same space as $\mathbf f$ since the values $\mathbf f(z_k)$ are vector-valued. However, the support points and weights remain scalars. The selection of the support points can still be done greedily by, at each iteration, choosing a support point that maximizes the norm of the current error on $\Sigma$. In practice, the infinity or Euclidean norm usually works well, but more care is sometimes needed when the components of $\mathbf f$ map to different scales; see [lietaert2018automatic, NGT20]. Likewise, the weights are computed from a block-Löwner matrix in which each entry of (7) is now a column vector. This increases the number of rows of the Löwner matrix, and hence the cost of its SVD, by a factor equal to the number of components of $\mathbf f$. When computing these SVDs for all degrees $1, 2, \dots, d$ as is required by AAA, the cumulative cost becomes prohibitive for functions with many components.
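For concreteness, here is a bare-bones Python sketch of the scalar AAA iteration just described: greedy selection of the worst sampling point, the Löwner matrix (7), and an SVD for the weights (6). It omits practical refinements such as Froissart-doublet cleanup and SVD updating, and all names are ours.

```python
import numpy as np

def aaa(f_vals, Z, tol=1e-12, max_degree=20):
    """Bare-bones scalar AAA on samples f_vals = f(Z).
    Returns support points, weights, and the samples there."""
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(f_vals, dtype=complex)
    R = np.full_like(F, F.mean())               # trivial initial approximant
    support, mask = [], np.ones(len(Z), dtype=bool)
    for _ in range(max_degree):
        j = int(np.argmax(np.abs(F - R) * mask))  # greedy: worst point
        support.append(j); mask[j] = False
        zs, fs = Z[support], F[support]
        # Loewner matrix L_ij = (f(zeta_i) - f(z_j)) / (zeta_i - z_j), cf. (7)
        C = 1.0 / (Z[mask, None] - zs[None, :])
        L = F[mask, None] * C - C * fs[None, :]
        # weights: right singular vector of the smallest singular value, cf. (6)
        _, _, Vh = np.linalg.svd(L)
        w = Vh[-1].conj()
        R = F.copy()
        R[mask] = (C @ (w * fs)) / (C @ w)        # barycentric evaluation
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return zs, w, fs
```

A quick use: approximating $e^z$ on 100 points in $[-1,1]$ converges to near machine precision within a handful of support points, matching the rapid convergence AAA is known for on analytic functions.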
As explained in the introduction, a way to lower the computational cost for a vector-valued function is to work with a scalar surrogate function that hopefully shares the same region of analyticity as $F$. In [EG19] this function was chosen with a tensorized probing vector:

(8) $f(z) = u^{\mathsf T} F(z)\, v$ with random vectors $u, v$.

The reason for this construction with a tensor product is that [EG19] focused on nonlinear eigenvalue problems where $F(z)$ is an $n \times n$ matrix. This allows for an efficient evaluation $u^{\mathsf T}(F(z) v)$, which becomes particularly advantageous when fast matrix-vector products with $F(z)$ are available. In the case that $F$ is in the split form (2), only a matrix-vector product with each $A_j$ is needed. A similar surrogate can be obtained for general $F$ by using a full (non-tensorized) probing vector $w$ of length $n^2$:

(9) $f(z) = w^{\mathsf T} \operatorname{vec}(F(z)).$
For both surrogate constructions, we apply AAA to the surrogate (8) or (9) and use the computed support points and weights to define $R$ as in (3), replacing the scalar function values $f(z_k)$ by the evaluations $F(z_k)$. Since the surrogate functions are scalar-valued, the computational burden is clearly much lower than applying the set-valued AAA method to the vector-valued function.
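The identity $u^{\mathsf T} F(z) v = (v \otimes u)^{\mathsf T} \operatorname{vec}(F(z))$ connecting the two probing constructions can be checked numerically. Below is a small Python sketch (ours) with a hypothetical two-term split-form function; the matrices, vectors, and the function `F` are toy data, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A1, A2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
F = lambda z: A1 + np.exp(z) * A2        # toy function in split form (2)

u, v = rng.standard_normal(n), rng.standard_normal(n)
w = rng.standard_normal(n * n)

# Tensorized probing: only matrix-vector products with F(z) are needed
f_tensor = lambda z: u @ (F(z) @ v)
# Full probing: inner product with vec(F(z)) (column-major vectorization)
f_full = lambda z: w @ F(z).ravel(order="F")

# The tensorized surrogate equals full probing with the Kronecker vector v (x) u
assert np.isclose(f_tensor(0.3), np.kron(v, u) @ F(0.3).ravel(order="F"))
```

The assertion reflects that $v \otimes u$ is the vectorization of the rank-one matrix $u v^{\mathsf T}$, so tensorized probing is the special case of full probing with a structured probing vector.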
While computationally very attractive, the approach of building a scalar surrogate does unfortunately not always result in very accurate approximants. To illustrate this, we consider the matrix-valued function $F$ of the buckling_plate example from [higham2019updated]. In Figure 1 the errors for both surrogates (8) and (9) are shown in blue. While AAA fully resolves each surrogate to a very small absolute error, the absolute errors of the corresponding matrix-valued approximants to $F$ stagnate at a much higher level.
An important observation put forward in this paper is that by taking multiple random probing vectors (and thereby a vector-valued surrogate function $\mathbf f$), the approximation error obtained with the set-valued AAA method can be improved, sometimes dramatically. In particular, we consider

(10) $\mathbf f(z) = V^{\mathsf T} \operatorname{vec}(F(z))$ with a random matrix $V \in \mathbb{R}^{n^2 \times \ell}$,

and the tensorized version

(11) $\mathbf f(z) = V^{\mathsf T} \operatorname{vec}(F(z))$ with $V = [\, v_1 \otimes u_1, \dots, v_\ell \otimes u_\ell \,]$ for random vectors $u_k, v_k \in \mathbb{R}^{n}$.
Both probing variants can be sped up if $F$ (and hence $\mathbf f$) is available in the split form (2) by precomputing the products of the random vectors with the matrices $A_j$. The tensorized variant is computationally attractive when matrix-vector products with $F(z)$ can be computed efficiently, since $v_k \otimes u_k$ is the vectorization of the rank-one matrix $u_k v_k^{\mathsf T}$ and hence $(v_k \otimes u_k)^{\mathsf T} \operatorname{vec}(F(z)) = u_k^{\mathsf T} F(z) v_k$. In both cases we obtain a vector-valued function $\mathbf f$ with $\ell$ components to which the set-valued AAA method can readily be applied. The resulting algorithm sketchAAA is summarized in Algorithm 1.
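The split-form speedup can be sketched as follows in Python (toy data and names are ours): precomputing the products of the coefficient matrices with the probing matrix turns every subsequent sample into a cheap linear combination of small precomputed blocks.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, ell = 500, 3, 4
A = [rng.standard_normal((n, n)) for _ in range(s)]   # coefficient matrices A_j
funs = [np.cos, np.sin, np.exp]                       # scalar functions f_j
V = rng.standard_normal((n, ell))                     # thin probing matrix

# Naive sketch: assemble F(z) from the split form (2), then multiply by V
def sketch_naive(z):
    Fz = sum(f(z) * Aj for f, Aj in zip(funs, A))
    return Fz @ V

# Split-form sketch: precompute A_j V once; afterwards every sample
# costs O(s*n*ell) instead of O(s*n^2) work
AV = [Aj @ V for Aj in A]
def sketch_split(z):
    return sum(f(z) * AjV for f, AjV in zip(funs, AV))

assert np.allclose(sketch_naive(0.7), sketch_split(0.7))
```

Both routines produce identical samples; only the cost per sample differs once the precomputation is amortized over many sampling points.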
Sometimes using as few as $\ell = 2$ probing vectors leads to very satisfactory results. This is indeed the case for our example in Figure 1, where the approximation errors of both surrogates (10) and (11) are shown in orange. Both approximants converge rapidly to a small absolute error. Remarkably, the approximation error of the surrogates is essentially identical to the error of the set-valued AAA approximant applied to the original function $F$; see Figure 1 (right).
Since the surrogates are random, the resulting rational approximants are also random. Fortunately, their approximation errors concentrate quickly. This is clearly visible in Figure 2 for the buckling_plate and nep2 examples from the NLEVP collection, which shows narrow 50-percentile intervals. As a result, the random construction of the surrogates yields rational approximants that, with high probability, all have very similar approximation errors. Figure 2 also demonstrates that two probing vectors do not always suffice. For the nep2 problem, more probing vectors are required for a relative approximation error close to machine precision.
Many more such examples are discussed in section 4 and in almost all cases a relatively small number of probing vectors suffices to obtain accurate approximations. The next section provides some theoretical insights into this.
3 Analysis of random sketching for function approximation
In this section we analyze the effect of random sketching on the approximation error. More precisely, we show that for a given fixed approximation of the function, the corresponding approximation of the sketch enjoys, with high probability, similar accuracy. Let us emphasize that this setting does not fully capture Algorithm 1, because the approximation constructed by the algorithm depends on the random probing matrix in a highly nontrivial way. Nevertheless, we feel that the analysis explains the behavior of the algorithm for increasing $\ell$ and, more concretely, it allows us to cheaply and safely estimate the full error a posteriori via a surrogate (with an independent sketch).
3.1 Preliminaries
Let us consider a vector-valued function $f$ on some domain. Viewing $f$ as an element of a tensor product space induces an associated linear operator, which is a Hilbert–Schmidt operator of finite rank. Applying the Schmidt decomposition [Hackbusch2019, Theorem 4.137] implies the following result.
Theorem 3.1.
We note that the singular values $\sigma_1 \ge \sigma_2 \ge \cdots$ are uniquely defined by $f$. By Theorem 3.1, the norm of $f$ satisfies $\|f\|^2 = \sum_i \sigma_i^2$, where the underlying vector norm is the Euclidean norm. Extending the corresponding notion for matrices, the stable rank of $f$ is defined as $\rho(f) = \|f\|^2 / \sigma_1^2$ and satisfies $1 \le \rho(f) \le \operatorname{rank}(f)$.
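For matrices, the stable rank is the squared Frobenius norm divided by the squared spectral norm, and it is small whenever the singular values decay quickly. A short Python check (ours; the test matrix is synthetic):

```python
import numpy as np

def stable_rank(A):
    """Stable rank ||A||_F^2 / ||A||_2^2: always between 1 and rank(A)."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s**2).sum() / s[0]**2

rng = np.random.default_rng(2)
# Construct a matrix with rapidly decaying singular values sigma_k = 2^{-k}
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
W, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = U @ np.diag(2.0 ** -np.arange(50.0)) @ W.T
# Geometric decay gives stable rank sum 4^{-k} = 1/(1 - 1/4) = 4/3
print(stable_rank(A))
```

Despite having full rank 50, this matrix has stable rank about $4/3$, illustrating why rapid singular value decay keeps the stable rank modest.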
Our analysis applies to an abstract approximation operator $\mathcal P$ defined on some subspace of functions. We assume that $\mathcal P$ commutes with linear maps, that is,

(12) $\mathcal P(L \circ f) = L \circ \mathcal P(f)$ for every linear map $L$.

This property holds for any approximation of the form

$\mathcal P(f)(z) = \sum_k \ell_k(z)\, f(z_k)$

for fixed points $z_k$ and scalar functions $\ell_k$, with the tacit assumption that functions in the subspace allow for point evaluations. In particular, this includes polynomial and rational interpolation provided that the interpolation points and poles are considered fixed. With a slight abuse of notation, we apply $\mathcal P$ to scalar-, vector-, and matrix-valued functions alike.
3.2 Full (nontensorized) probing vectors
The following theorem treats surrogates of the form (10). It constitutes an extension of existing results [gratton2018improved, MR1337645] on small-sample matrix norm estimation.
Theorem 3.2.
With the notation introduced in section 3.1, let denote the stable rank of for and let be a real (for ) or complex (for ) Gaussian random matrix. Set with for and for . Then for any we have
(13) 
where denotes the lower incomplete gamma function and
(14) 
Proof.
Applying the Schmidt decomposition from Theorem 3.1 and using (12) yields
(15) 
By the orthonormality of the Schmidt vectors, it follows that the resulting terms are mutually independent chi-squared random variables. Following well-established arguments [gratton2018improved, RoostaK], we obtain
which proves (13).
To provide some intuition on the implications of Theorem 3.2 in the complex case, let us first note that
Setting , this shows that the failure probability in (13) is asymptotically proportional to and, in turn, increasing will drastically reduce the failure probability provided that . Specifically, for , , we obtain a failure probability of at most for and for . This means that if Algorithm 1 returns an approximant that features a small error for a surrogate with components, then the probability that the approximation error for the original function is more than ten times larger is below . The probability that the error is more than a hundred times larger is below . On the other hand, if there exists a good approximant for then (14) shows that it is almost guaranteed that the surrogate function admits a nearly equally good approximant (which is hopefully found by the AAA algorithm). For the setting above, the probability that the error of the surrogate approximant is more than times larger than that of the approximant for the original function is less than .
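The exponential improvement with growing $\ell$ can also be observed empirically. The following Python experiment (ours; it illustrates the chi-squared tail behavior from the proof rather than the exact bound (13)) estimates how often a Gaussian sketch severely underestimates the norm of a fixed vector:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 500, 2000
v = rng.standard_normal(n)
v /= np.linalg.norm(v)                       # fixed unit vector

def underestimate_prob(ell, factor=10.0):
    """Empirical probability that the Gaussian sketch ||G^T v|| / sqrt(ell)
    underestimates ||v|| = 1 by more than `factor` (a chi-squared tail)."""
    G = rng.standard_normal((n, ell, trials))
    est = np.linalg.norm(np.einsum('nlt,n->lt', G, v), axis=0) / np.sqrt(ell)
    return np.mean(est < 1.0 / factor)

# The failure probability drops rapidly as the sketch size ell grows
print([underestimate_prob(ell) for ell in (1, 2, 4)])
```

With a single probing vector a factor-ten underestimate happens with probability near $8\%$; with two vectors it is already near $1\%$, and with four it is rare, mirroring the rapid decay of the failure probability discussed above.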
Remark 3.3.
A large stable rank would lead to a less favourable bound (13), but there is good reason to believe that the stable rank remains modest in situations of interest. The algorithms discussed in this work are most meaningful when the function admits good rational approximants, that is, when the best rational approximation error decreases rapidly as the degree increases. In fact, this error decreases exponentially fast when the function is analytic in an open neighborhood of the target set (this is even true when the infimum is taken over the smaller set of polynomials). As the singular values are bounded in terms of these approximation errors, this implies a rapid decay of the singular values and hence a small stable rank of $f$ and, likewise, of $\mathcal P(f)$.
3.3 Tensorized probing vectors
The analysis of the surrogate (11) with tensorized probing vectors is significantly more complicated because the distribution of tensorized random vectors is not invariant under orthogonal transformations, an important ingredient in the proof of Theorem 3.2. As a consequence, the following partial results cover the case $\ell = 1$ only. They are direct extensions of analogous results for matrix norm estimation [BujanovicKressner2021].
Theorem 3.4.
In the setting of Theorem 3.2 with $\ell = 1$, consider the tensorized probing vector $v \otimes u$ for real Gaussian random vectors $u$, $v$. Then for any we have
(16) 
and
(17) 
Proof.
Applying (15) yields
The last expression has been analyzed in the proof of Theorem 2.2 in [BujanovicKressner2021], where it is shown to be bounded by the right-hand side of (16). Similarly, it follows from the proof of Theorem 2.4 in [BujanovicKressner2021] that the quantity

is bounded by the right-hand side of (17). ∎
4 Applications and numerical experiments
Algorithm 1 was implemented in MATLAB. The set-valued AAA (SVAAA) implementation we compare to (and which is also needed in Step 3 of Algorithm 1) is a modification of the original MATLAB code from [lietaert2018automatic], available at https://people.cs.kuleuven.be/karl.meerbergen/files/aaa/autoCORK.zip. The SVAAA code implements an updating strategy for the singular value decomposition of the Löwner matrix defined in (7) to avoid its full recomputation when the degree is increased from $d$ to $d+1$. All default options in SVAAA are preserved except that the expansion points for AAA are greedily selected based on the maximum norm instead of the default norm. In addition, the stopping condition is based on the relative maximum norm of the approximation that SVAAA builds over the whole sampling set $\Sigma$. So, for example, if SVAAA is applied to the scalar functions with a stopping tolerance reltol, then the algorithm terminates when the computed rational approximations all satisfy this relative tolerance on $\Sigma$.

The experiments were run on an Intel i7-12700 with 64 GB RAM. The software to reproduce our experiments is publicly available at https://gitlab.unige.ch/Bart.Vandereycken/sketchAAA.
4.1 NLEVP benchmark collection
We first test our approach on the nonpolynomial problems in the NLEVP benchmark collection [betcke2013nlevp, higham2019updated] considered in [NGT20]. Table 1 summarizes the key characteristics of these problems, including the matrix size and the number of terms $s$ in the split form (2). The target set is a disc or half-disc specified in a meaningful way for each problem separately; see [NGT20, Table 3] for details. We follow the procedure from [NGT20] for generating the sampling points of the target set: 300 interior points are obtained by randomly perturbing a regular point grid covering the disc or half-disc, and another 100 points are placed uniformly on the boundary. This gives a total of 400 points for the set $\Sigma$.
#   name                 size     s     #   name               size     s
1   bent_beam               6    16    13   pdde_symmetric       81     3
2   buckling_plate          3     3    14   photonic_crystal    288     3
3   canyon_particle        55    83    15   pillbox_small        20     4
4   clamped_beam_1d       100     3    16   railtrack2_rep     1410     3
5   distributed_delay1      3     4    17   railtrack_rep      1005     3
6   fiber                2400     3    18   sandwich_beam       168     3
7   hadeler               200     3    19   schrodinger_abc      10     4
8   loaded_string         100     3    20   square_root          20     2
9   nep1                    2     2    21   time_delay            3     3
10  nep2                    3    10    22   time_delay2           2     3
11  nep3                   10     4    23   time_delay3          10     3
12  neuron_dde              2     5    24   gun                9956     4
4.1.1 Small NLEVPs
We first focus on the small problems from the NLEVP collection, that is, those with small matrix size (larger problems are considered separately in the following section). Tables 2 and 3 summarize the results with a stopping tolerance reltol of 1e-08 or 1e-12, respectively. For each problem we show both the degree and the attained relative uniform approximation error
of four algorithmic variants:
- SVAAA (split form) refers to the set-valued AAA algorithm applied to the scalar functions in the split form (2) of each problem.
- SVAAA (full) refers to the set-valued AAA algorithm applied to all entries of the matrix $F(z)$, which is only practical for relatively small problems.
- sketchAAA is used with a smaller and a larger number $\ell$ of full (non-tensorized) probing vectors. The reported degrees and errors are averaged over 10 random realizations of the probing vectors.
We find that in all considered cases, the error achieved by sketchAAA with the larger number of probing vectors is very close to the stopping tolerance. This is achieved without ever needing a degree significantly higher than that required by the two SVAAA variants, an important practical consideration for solving NLEVPs (see section 4.2.1 below). No timings are reported here because all algorithms return their approximations within a few milliseconds.
     SVAAA            SVAAA            sketchAAA        sketchAAA
 #   degree  relerr   degree  relerr   degree  relerr   degree  relerr
1  9  4.5e10  8  2.9e09  5.7  1.3e04  8  1.8e08 
2  25  3.4e09  25  3.4e09  22.9  1.3e05  25  3.5e09 
3  16  7.9e11  14  1.6e09  9.7  1.3e05  12.3  7.2e08 
4  12  8.7e09  12  6.6e09  12  6.0e06  12.2  6.4e09 
5  8  1.8e10  8  9.5e11  7  5.5e05  7.8  2.3e09 
7  4  7.4e09  4  7.1e09  3.8  8.6e07  4.2  2.3e08 
8  3  3.2e16  3  3.0e16  3  5.8e14  3  2.4e15 
9  22  9.4e09  21  7.7e09  21  7.7e09  21  7.7e09 
10  15  1.7e09  13  7.1e09  10.7  2.0e04  13  2.4e08 
11  10  1.4e09  9  3.8e09  9  3.7e06  9  8.4e09 
12  15  9.8e09  14  5.5e09  14  2.6e07  14  6.2e09 
13  9  2.9e09  9  3.1e09  8.2  2.5e05  9  4.0e09 
14  7  5.7e11  6  3.8e11  6.3  1.4e08  5.3  9.1e09 
15  9  7.0e11  7  3.8e09  5.9  3.4e05  7  3.8e09 
18  12  1.1e14  6  1.6e09  5.8  6.9e08  5.7  4.1e09 
19  13  1.2e09  12  4.3e09  11  1.0e05  12  6.5e09 
20  11  2.2e09  11  2.8e09  10.8  2.9e08  10.9  6.2e09 
21  14  6.3e09  14  1.3e09  14  1.5e09  14  1.3e09 
22  14  5.9e09  14  4.7e09  14  1.1e08  14  4.6e09 
23  18  6.1e09  14  5.0e09  14  1.0e08  14  4.4e09 
     SVAAA            SVAAA            sketchAAA        sketchAAA
 #   degree  relerr   degree  relerr   degree  relerr   degree  relerr
1  12  4.9e14  11  4.8e13  7.8  1.7e06  10.8  1.8e11 
2  30  5.6e13  30  5.6e13  26.8  2.2e07  30  8.3e13 
3  23  1.2e14  21  2.4e13  15.3  1.1e07  19.4  7.2e12 
4  15  4.9e13  15  5.9e13  15  3.6e08  15  5.8e13 
5  10  3.8e15  9  4.0e13  9  2.9e07  9  5.4e13 
7  10  3.0e13  10  3.3e13  9  1.0e08  10  4.9e13 
8  3  3.2e16  3  3.0e16  3  5.8e14  3  2.4e15 
9  27  3.5e13  27  2.9e13  27  3.0e13  27  3.0e13 
10  18  9.5e14  17  1.5e13  12.6  2.1e05  17  1.3e12 
11  13  9.2e14  12  8.6e14  11  3.5e08  12  1.2e13 
12  19  8.0e14  18  6.6e14  17  6.5e09  18  7.7e14 
13  11  7.2e13  11  6.6e13  10.6  1.9e07  11  5.7e13 
14  8  9.4e16  7  6.7e16  7.8  8.8e12  7.3  2.4e15 
15  13  4.6e15  11  2.0e13  9.3  6.6e08  11  1.7e13 
18  16  7.6e16  11  2.1e13  10.8  6.0e10  10.6  5.1e13 
19  15  2.8e13  15  5.3e14  13  3.2e07  14.8  8.9e13 
20  15  3.7e13  15  5.5e13  14.9  3.1e12  15.1  8.8e13 
21  18  8.0e14  18  4.6e14  17  7.9e12  17.8  2.4e13 
22  18  8.0e14  18  8.6e14  17  4.3e10  18  7.7e14 
23  23  1.6e13  18  2.3e13  17  2.5e10  18  2.6e13 
4.1.2 Large NLEVPs
This section considers the large problems from the NLEVP collection listed in Table 1, all with problem sizes above 1000. More precisely, we consider problems 6 (size 2400), 16 (size 1410), 17 (size 1005), and 24 (size 9956). For such problem sizes, the application of set-valued AAA to each component of $F$ is no longer feasible and hence this algorithm is not reported. In order to simulate a truly black-box sampling of these eigenvalue problems when using full (non-tensorized) probing vectors as in (10), we use the MATLAB function shown in Algorithm 2. This function obtains the samples without forming the large Gaussian random matrix explicitly. Instead, the sparsity pattern of $F(z)$ is inferred on the fly as the sampling proceeds. In the case of tensorized probing as in (11), we can exploit sparsity more easily by first computing the products $F(z) v_k$, followed by the inner products with the vectors $u_k$.
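A Python analogue of the idea behind Algorithm 2 can be sketched as follows (our own simplified version, not the paper's MATLAB code; the sparse samples are represented as plain dictionaries). The Gaussian probing entries are generated deterministically per matrix index, so only the entries matching the nonzeros of a given sample are ever materialized, and every sample is implicitly probed with the same vectors.

```python
import numpy as np

def sparse_probe(entries, ell, seed=0):
    """Apply ell implicit Gaussian probing vectors of length n^2 to a
    sparse sample of F(z), given as a dict {(i, j): value}.  Each probing
    entry is regenerated deterministically from its index (i, j), so the
    full n^2 x ell probing matrix is never formed."""
    out = np.zeros(ell)
    for (i, j), val in entries.items():
        g = np.random.default_rng((seed, i, j))   # per-entry deterministic stream
        out += val * g.standard_normal(ell)
    return out

# Linearity check: probing A + B equals probing A plus probing B,
# confirming that all samples see the same implicit probing vectors
A = {(0, 0): 1.0, (2, 3): -2.0}
B = {(0, 0): 0.5, (1, 1): 4.0}
C = {k: A.get(k, 0.0) + B.get(k, 0.0) for k in set(A) | set(B)}
assert np.allclose(sparse_probe(C, 5), sparse_probe(A, 5) + sparse_probe(B, 5))
```

The per-entry random streams make the probing consistent across all sampling points without any bookkeeping of which indices have been seen before.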
The execution times are now more significant and are reported in Tables 4 and 5, together with the required degree and the achieved relative approximation error. Table 5 shows that tensorized sketches lead to a faster algorithm with similar accuracy compared to the full sketches reported in Table 4. As for the small NLEVP problems in the previous section, we find that sketchAAA with a moderate number of probing vectors is reliable and yields an approximation error close to the stopping tolerance reltol.
We note that these four problems are also available in split form (2), and both the non-tensorized and tensorized probing can be further sped up (sometimes significantly) by precomputing the products of the random vectors with the matrices $A_j$. This is particularly the case for the gun problem (number 24), for which sketchAAA spends most of its time on the evaluation of $F$ at the support points. Exploiting the split form reduces this time drastically, which can be seen from the rows in Tables 4 and 5 labelled with “24*”. The case $\ell = 4$ is particularly interesting as, coincidentally, the gun problem also has $s = 4$ terms and, in turn, the set-valued approximations for SVAAA and sketchAAA both involve four functions. For tensorized sketching (Table 5), sketchAAA is faster than SVAAA while returning an accurate approximation of lower degree. This nicely demonstrates that our approach to exploiting the split form is genuinely different from the set-valued AAA algorithm in [lietaert2018automatic]: sketchAAA takes the contributions of the coefficient matrices into account, while the set-valued AAA algorithm only approximates the scalar functions and is blind to the coefficient matrices $A_j$. The following section illustrates this further.
reltol   #   SVAAA            sketchAAA        sketchAAA
             degree  error    degree  error    degree  error
             time (s.)        time (s.)        time (s.)
1e08  6  14  9.0e11  8.8  2.7e05  9.8  2.7e07 
0.014  0.039  0.045  
16  3  5.5e16  3  6.4e14  3.7  1.4e14  
0.006  0.280  0.299  
17  3  3.8e16  3.9  3.0e13  3.7  2.1e14  
0.003  0.461  0.482  
24  11  2.9e11  7.3  1.5e05  8.2  1.9e08  
0.016  1.018  1.097  
24*  — ” —  0.019  0.024  
1e12  6  19  1.5e14  14.3  3.6e07  14.9  2.5e11 
0.006  0.041  0.045  
16  3  5.5e16  3  6.4e14  3.7  1.4e14  
0.007  0.281  0.297  
17  3  3.8e16  3.9  3.0e13  3.7  2.1e14  
0.002  0.458  0.481  
24  15  3.5e15  11  8.0e08  12.8  2.6e12  
0.020  1.024  1.100  
24*  — ” —  0.022  0.030 
reltol   #   SVAAA            sketchAAA        sketchAAA
             degree  error    degree  error    degree  error
             time (s.)        time (s.)        time (s.)
1e08  6  14  9.0e11  9.4  2.0e05  9.8  2.9e07 
0.014  0.016  0.027  
16  3  5.5e16  3.1  2.6e14  3.9  1.5e15  
0.006  0.076  0.106  
17  3  3.8e16  3.6  1.2e14  3.5  3.1e15  
0.003  0.098  0.135  
24  11  2.9e11  6.9  1.4e05  8.1  2.0e08  
0.016  0.473  0.679  
24*  — ” —  0.009  0.011  
1e12  6  19  1.5e14  14.1  3.9e07  15  2.6e11 
0.007  0.017  0.028  
16  3  5.5e16  3.1  2.6e14  3.9  1.5e15  
0.007  0.076  0.108  
17  3  3.8e16  3.6  1.2e14  3.5  3.1e15  
0.002  0.097  0.133  
24  15  3.5e15  10.4  3.5e07  13  4.1e13  
0.020  0.477  0.673  
24*  — ” —  0.013  0.018 
4.2 An artificial example on the split form
The difference in the approximation degrees returned by SVAAA and sketchAAA becomes particularly pronounced when cancellations occur between the different terms of the split form or when the coefficients are of significantly different scales. To demonstrate this effect by an extreme example, consider a split form with two terms

where $A_1$ and $A_2$ are random matrices of unit spectral norm. The sampling set contains 100 equidistant points. Since SVAAA scales the scalar functions to have unit norm on the sampling set, the results would remain the same if the functions were rescaled. In Table 6, we clearly see that SVAAA overestimates the degree since it puts too much emphasis on resolving one of the scalar functions.
reltol   SVAAA            SVAAA            sketchAAA
         degree  error    degree  error    degree  error
1e08  24  1.1e09  8  4.1e09  8  4.1e09 
1e12  29  5.0e13  18  2.5e13  18  2.9e13 
4.2.1 Impact on numerical solution of NLEVP
As explained in [GT17, Section 6] and further analyzed in [NGT20, Section 2], an accurate uniform rational approximation on the target set is crucial for a robust linearization-based NLEVP solver. Once the rational approximant is obtained, it can be linearized in various ways; see, e.g., [lietaert2018automatic, EG19, NGT20]. Specifically, Theorem 3 in [EG19] derives a (strong) linearization of a rational matrix function in barycentric form (4), as returned by sketchAAA. Eigenvalues in the target set can then be computed from the linearization, e.g., iteratively by applying a rational Krylov subspace method. This in turn yields approximations to the eigenvalues of $F$. As the size of the linearization and, in turn, the cost of this approach increase linearly with the degree $d$, there is clearly an advantage gained from the fact that sketchAAA often yields a rational approximation of smaller degree compared to SVAAA.
4.3 Scattering problem
We apply sketchAAA to the Helmholtz equation with absorbing boundary conditions describing a scattering problem on the unit disc; see [Pradoverathesis, Sec. 5.5.4]. The vector-valued function $u(\omega)$ containing the solution for a wavenumber $\omega$ is no longer given in split form and depends rationally on $\omega$: it solves a linear system involving the stiffness matrix $K$, the damping matrix $C$, and the mass matrix $M$, which are real nonsymmetric sparse matrices of size 20054 obtained from a finite element discretization. The (complex) entries of $u(\omega)$ contain the nodal values of the finite element solution. The Euclidean norm of $u(\omega)$ over the wavenumbers of interest is depicted in Figure 3. Although there are no poles on the real axis, some are quite close to it, resulting in large peaks in the norm. We therefore expect that a rather large degree for the rational approximant of $u$ will be needed.
The computational results of sketchAAA applied to $u$ are reported in Table 7. The set $\Sigma$ contains 400 equidistant points in the wavenumber interval of interest. We observe that a large degree is indeed needed to get high accuracy. While a small sketch size already performs decently, a larger sketch size is needed so that the error of the approximant is comparable to that of the surrogate. This behavior is reflected in our analysis in section 3.2: according to Remark 3.3, slower convergence of rational approximations signals a larger stable rank, which in turn leads to less favourable probabilistic bounds in Theorem 3.2 that have to be compensated by increasing $\ell$. However, let us stress that even for the largest sketch size in Table 7 the rational approximation can be computed very quickly; it is still several hundred times faster than applying SVAAA without sketching.
The timings in Table 7 do not include the evaluation of $u$ and of the error on the sampling set $\Sigma$, which is needed for all methods regardless of sketching. Since a large linear system has to be solved for each sampling point, evaluating $u$ is expensive. One of the benefits of rational approximation is that the approximant can be evaluated much faster: the most accurate approximant in Table 7 can be evaluated in less than 0.002 seconds, whereas evaluating $u$ requires 0.2 seconds.
reltol / ℓ   time (s.)   degree   error
(the first row of each block reports SVAAA without sketching; subsequent rows report sketchAAA for increasing sketch size ℓ)
1e08  21.70  36.0  2.2e09  
1  0.017  29.8  8.5e06  
4  0.016  34.9  6.4e08  
8  0.020  35.2  3.7e08  
16  0.029  35.9  6.1e09  
24  0.039  35.9  7.2e09  
1e12  28.479  43.0  9.6e13  
1  0.012  34.6  1.1e07  
4  0.019  41.1  6.6e11  
8  0.026  42.8  3.6e12  
16  0.038  43.0  1.8e12  
24  0.049  43.1  1.2e12 
4.4 Boundary element matrices
Our last example is a nonlinear eigenvalue problem that arises from the boundary integral formulation of an elliptic PDE eigenvalue problem [Steinbach2009]. More specifically, we consider the Laplace eigenvalue problem on the Fichera corner from [Effenberger2012a, Section 4]. Applying the boundary element method (BEM) to this problem results in a matrix-valued function $F$ that is dense and not available in split form. Also, the entries of $F$ are expensive to evaluate. Note, however, that hierarchical matrix techniques could be used to significantly accelerate assembly, resulting in a representation of $F$ that allows for fast matrix-vector products [SauterSchwab]. Usually, the smallest (real) eigenvalues of $F$ are of interest. We therefore consider a real interval containing the smallest eigenvalue as the target set, which is discretized by 200 equidistant points.
The number of boundary elements determines the size of the matrix-valued function. We present two sets of numerical experiments, depending on whether or not the function evaluations at all sampling points can be stored on our machine with 64 GB RAM.
Storage possible
Table 8 reports the computational results for the largest problem size that still allows storing all necessary function evaluations. As for the scattering problem, we see that larger sketch sizes are needed, but sketchAAA remains very fast and accurate. For example, with sketch size 16 and a stopping tolerance of 1e-08, sketchAAA is about 220 times faster than set-valued AAA and achieves comparable accuracy. For a stopping tolerance of 1e-12, sketchAAA is about 350 times faster than set-valued AAA.
In all cases of Table 8, we excluded the 3.2 seconds it took to evaluate the function at all sampling points. Both the set-valued AAA and the sketchAAA method require these evaluations.
reltol   method       sketch size   time (s)   degree   error
1e-08    SV-AAA            --       15.147      11.0    1.4e-09
         sketchAAA          1        0.023       7.4    1.9e-04
         sketchAAA          4        0.030      10.0    1.3e-07
         sketchAAA          8        0.045      10.3    2.6e-08
         sketchAAA         16        0.067      10.8    5.2e-09
1e-12    SV-AAA            --       21.737      14.0    2.0e-13
         sketchAAA          1        0.023       9.1    2.1e-05
         sketchAAA          4        0.034      12.9    5.7e-11
         sketchAAA          8        0.046      13.3    7.1e-12
         sketchAAA         16        0.061      14.0    2.6e-13
Storage not possible
For larger problems, the function is evaluated when needed but never stored for all sampling points. (A considerable part of the cost of assembling the BEM matrix can be amortized by evaluating a few function values at the same time. We therefore evaluate and store the function in batches of 20 values at once and perform, where possible, all computations on a batch before moving on to the next one.) In Table 9 we list the results for tolerance 1e-12. The degree and time for dense and tensor sketches are very similar, and we therefore only show the results for tensor sketches. The errors for both the full and tensorized sketches are reported, even though they, too, are similar.
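The batched, storage-free strategy described above can be sketched as follows (all names are hypothetical; `F_eval` stands in for the expensive BEM assembly, and tensorized probes of the form u_k^T F(z) v_k are used):

```python
import numpy as np

def sketched_samples(F_eval, Z, n, s, batch=20, seed=0):
    """Compute tensorized probes u_k^T F(z) v_k for all z in Z, streamed
    in batches so that no more than `batch` evaluations of the n-by-n
    function F are held in memory at any time.

    F_eval(z) -> (n, n) ndarray is a hypothetical stand-in for the
    expensive BEM assembly.  Returns an (s, len(Z)) array of sketches.
    """
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, s))        # left probing vectors u_k
    V = rng.standard_normal((n, s))        # right probing vectors v_k
    out = np.empty((s, len(Z)), dtype=complex)
    for start in range(0, len(Z), batch):
        # evaluate a batch of sampling points to amortize assembly cost
        block = [F_eval(z) for z in Z[start:start + batch]]
        for j, Fz in enumerate(block):
            # k-th sketched entry: sum_{i,j} U[i,k] * Fz[i,j] * V[j,k]
            out[:, start + j] = np.einsum('ik,ij,jk->k', U, Fz, V)
        # `block` is dropped here, freeing the matrices before the next batch
    return out
```

Only the small (s, len(Z)) array of sketched values survives the loop, which is what makes the approach feasible when storing all function evaluations is impossible.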
We observe that the runtime of the whole algorithm is considerably higher and, as expected, grows with the problem size. The degree and the error, on the other hand, remain very similar to those for the smaller problem, and the main conclusion remains: with a single sketching vector the sketchAAA approximation is not accurate, but already with four vectors we obtain satisfactory results at the expense of only a slight increase in runtime. In addition, even taking a large number of sketches is computationally feasible, whereas applying set-valued AAA to the original problem is far beyond what is possible on a normal desktop.
size   sketch size   time (s)    degree   error (full)   error (tensor)
384         1          26.134      9.4    1.2e-05        2.1e-05
            4          30.490     12.9    4.1e-11        5.7e-11
            8          31.132     13.4    8.7e-12        6.8e-12
           16          31.968     14.0    2.8e-13        2.6e-13
           24          32.017     14.0    2.2e-13        2.2e-13
864         1         130.172      9.1    7.6e-06        1.4e-05
            4         154.802     13.0    7.6e-11        5.0e-11
            8         159.993     13.8    6.6e-12        1.3e-11
           16         161.689     14.0    9.0e-13        7.5e-13
           24         161.614     14.0    6.9e-13        4.9e-13
1536        1         412.007      9.0    1.4e-05        8.6e-06
            4         489.553     12.9    5.5e-10        7.8e-11
            8         507.866     13.8    1.2e-11        1.5e-11
           16         512.706     14.0    1.4e-12        1.5e-12
           24         512.984     14.0    9.3e-13        1.2e-12
2400        1        1010.040      9.1    6.6e-06        1.5e-05
            4        1194.473     12.9    2.4e-10        1.3e-10
            8        1244.731     13.9    6.7e-12        1.2e-11
           16        1250.988     14.0    1.5e-12        1.8e-12
           24        1252.066     14.0    1.5e-12        1.4e-12
5 Conclusions
We have presented and analyzed a new randomized sketching approach that allows for the fast and efficient rational approximation of large-scale vector- and matrix-valued functions. Compared to the original surrogate-AAA approach in [EG19], our method sketchAAA reliably achieves high approximation accuracy by using multiple (tensorized or non-tensorized) probing vectors. We have demonstrated the method's performance on a number of nonlinear functions arising in several applications. Compared to the set-valued AAA method from [lietaert2018automatic], our method works efficiently when the split form of the function to approximate has a large number of terms, and even when the problem is only accessible via function evaluations. We believe that sketchAAA is the first rational approximation method that combines these advantages.
While our focus was on AAA and NLEVPs, let us highlight once more that our sketching approach is not limited to such settings. In principle, any linear (e.g., polynomial or fixed-denominator rational) approximation scheme applied to a large-scale vector-valued function can be accelerated by this approach. For example, efforts are currently underway to develop a sketched RKFIT method [berljafa2017rkfit] as a replacement for the surrogate-AAA eigensolver in the MATLAB Rational Krylov Toolbox (the latter of which currently uses only a single sketching vector).
We believe that there are a number of interesting research directions arising from this work. One is a possible extension to the multivariate case: a multivariate pAAA algorithm has recently been proposed in [rodriguez2020p], but it is not immediately clear whether the sketching idea pursued here extends to that algorithm. Another potential improvement of sketchAAA in the case of many sampling points is to replace the SVDs for the least-squares problems (6) by a sketching-based least-squares solver such as [rokhlin2008fast], similarly to what has been done in [nakatsukasa2022fast]. Finally, we hope that the analysis provided in this paper sheds some more light on the accuracy of contour integral-based solvers for linear eigenvalue problems. These methods can be viewed as pole finders for the resolvent after random tensorized probing.
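As a small self-contained illustration of this viewpoint (all data below is toy data of our own choosing), the scalar probe u^T (A - zI)^{-1} v of a resolvent has its poles exactly at the eigenvalues of A, as its partial-fraction expansion shows:

```python
import numpy as np

# random diagonalizable matrix and random tensorized probe vectors
rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
u = rng.standard_normal(n)
v = rng.standard_normal(n)

lam, X = np.linalg.eig(A)        # eigenvalues and right eigenvectors
Yt = np.linalg.inv(X)            # rows are (scaled) left eigenvectors

z = 0.3 + 0.7j                   # any sample point away from the spectrum
direct = u @ np.linalg.solve(A - z * np.eye(n), v)

# partial-fraction form: g(z) = sum_i (u^T x_i)(y_i^T v) / (lam_i - z),
# so the poles of the scalar probe are exactly the eigenvalues of A
residues = (u @ X) * (Yt @ v)
pf = np.sum(residues / (lam - z))

print(abs(direct - pf))          # agreement up to rounding
```

A rational approximation of this probe therefore recovers the eigenvalues near the sampling region as its poles, which is the mechanism behind viewing contour-integral eigensolvers as pole finders.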
Acknowledgments
We thank Davide Pradovera for providing us with the code for the scattering problem considered in section 4.3.