
# Rational approximation of the absolute value function from measurements: a numerical study of recent methods

In this work, we propose an extensive numerical study on approximating the absolute value function. The methods presented in this paper compute approximants in the form of rational functions and have been proposed relatively recently, e.g., the Loewner framework and the AAA algorithm. Moreover, these methods are based on data, i.e., measurements of the original function (hence their data-driven nature). We compare numerical results for these two methods with those of a recently-proposed best rational approximation method (the minimax algorithm) and with various classical bounds from the previous century. Finally, we propose an iterative extension of the Loewner framework that can be used to increase the approximation quality of the rational approximant.


## 1 Introduction

Approximation theory is a well-established field of mathematics that splits into a wide variety of subfields and has a multitude of applications in many areas of the applied sciences. Some examples of the latter include computational fluid dynamics, solution and control of partial differential equations, data compression, electronic structure calculations, and systems and control theory (model order reduction of dynamical systems).

In this work we are concerned with rational approximation, i.e., approximation of given functions by means of rational functions. More precisely, the original function to be approximated is the absolute value function $|x|$ (also known as the modulus function). Moreover, the methods under consideration do not necessarily require access to an exact closed form of the original function, but only to evaluations of it on a particular domain. Hence, these methods are data-driven.

Since rational functions can be written as a ratio of two polynomials (in the numerator and in the denominator), they prove to be more versatile than simple polynomials. Consequently, rational functions can approximate functions with singularities and with oscillatory behavior better than polynomials can. Furthermore, rational functions are extensively used in systems and control theory since the input-output behavior of linear dynamical systems is described in the frequency domain by the so-called transfer function (which is a rational function). Finally, the location of the poles of transfer functions determines important system properties such as asymptotic stability, transient behavior or damping.

For more details on the many aspects of approximation theory, we refer the reader to the classical books [14, 30], and to the more recent books [36, 21], which also put emphasis on practical issues such as software implementations of the methods and applications, e.g., to data analysis.

In this work we study two recent data-driven rational approximation methods: the Loewner framework introduced in [24] and the AAA algorithm introduced in [26]. The main contribution of this paper is to present an extensive numerical study of the performance of these two methods for approximating the absolute value function. We put an emphasis on the Loewner framework and show that this method yields, in some cases, very promising results (e.g., very low maximum approximation errors). Moreover, we compare the numerical results with those of best rational approximants (computed by means of a recent numerical tool proposed in [16]). A multitude of bounds proposed in the last century are also included in the study. Finally, another main contribution of this work is to propose a novel Loewner iterative procedure that is able to further improve the approximation quality and, in some cases, to come near the error provided by best rational approximants.

This paper is organized as follows: after the brief introduction given in Section 1, we continue with a historical account of approximating the absolute value function, provided in Section 2. Afterwards, in Section 3, we introduce the main rational approximation methods (the Loewner framework and the AAA and minimax algorithms) and explain how the maximum approximation error is computed. Then, in Section 4, we provide a thorough and extensive numerical study of the methods. In Section 5, the new Loewner iterative procedure is introduced (together with some numerical tests), while Section 6 concludes the paper.

## 2 Approximation of the absolute value function - a brief historical account

The absolute value function $|x|$ is continuous, but non-differentiable at $x=0$. Approximating this function by polynomials played a significant role in the early development of approximation theory. In 1908, the Belgian mathematician de la Vallée-Poussin raised the question of finding the best approximation of the absolute value function on the interval $[-1,1]$. This problem attracted the attention of several leading mathematicians of that period.

Approximating the function $|x|$ by means of polynomials or rational functions was studied intensively starting with the beginning of the 20th century. In what follows, we present a short historical account of some of the main breakthrough results on quantifying the approximation error, either by means of polynomials or by rational functions. We follow the information provided in the survey papers [31, 39] and in Chapter 4 of the book [17].

Denote by $\mathcal{P}_n$ the set of polynomials with real coefficients of degree at most $n$, i.e.,

$$\mathcal{P}_n=\big\{p(x)\ \big|\ p(x)\in\mathbb{R}_n[x]\big\},$$ (1)

and by $E_n(|x|;[-1,1])$ the error of the best polynomial approximation from $\mathcal{P}_n$ of the function $|x|$ on the interval $[-1,1]$. More precisely, write

$$E_n(|x|;[-1,1]):=\inf\big\{\,\||x|-p(x)\|_{L_\infty[-1,1]}\ \big|\ p\in\mathcal{P}_n\big\}.$$ (2)

For brevity, we sometimes write $E_n$ to refer to the quantity introduced in (2).

One of the first significant results regarding the error of approximating the absolute value function by means of polynomials was presented in [12]. There, it was shown that there exists a real positive constant $\beta$ (the so-called Bernstein constant) so that the following holds:

$$\beta=\lim_{n\to\infty}2n\,E_{2n}(|x|;[-1,1]).$$ (3)

Moreover, it was conjectured in [38] that $2n\,E_{2n}(|x|;[-1,1])$, viewed as a function of $n$, admits the following series expansion as $n\to\infty$:

$$2n\,E_{2n}(|x|;[-1,1])\approx\beta-\frac{K_1}{n^2}+\frac{K_2}{n^4}-\frac{K_3}{n^6}+\cdots,\qquad K_i>0.$$ (4)

Other contributions were made throughout the years, such as the ones in [11, 23]. The bottom line is that the error of the best polynomial approximation of the absolute value function is indeed of order $1/n$ (as $n$ increases, the error decreases only proportionally to $1/n$).

Consequently, the result presented by D. J. Newman in [27] was indeed surprising. More precisely, he proved that the error of the best uniform approximation of the function $|x|$ on the interval $[-1,1]$ by rational functions behaves like $e^{-\sqrt{n}}$ (up to constants; see (7) below). This meant that the speed of rational approximation is much faster than that of polynomial approximation.

Next, we introduce the set of all real rational functions of type $(m,n)$:

$$\mathcal{R}_{m,n}=\left\{\frac{p(x)}{q(x)}\ \middle|\ p(x)\in\mathbb{R}_m[x],\ q(x)\in\mathbb{R}_n[x]\right\}.$$ (5)

Introduce the minimum approximation error in the following way:

$$E_{m,n}(|x|;[-1,1]):=\inf\big\{\,\||x|-r(x)\|_{L_\infty[-1,1]}\ \big|\ r\in\mathcal{R}_{m,n}\big\}.$$ (6)

The result proven in [27] can be formulated as follows: for all sufficiently large $n$, the following holds:

$$\frac{1}{2}\,e^{-9\sqrt{n}}\ \leqslant\ E_{n,n}(|x|;[-1,1])\ \leqslant\ 3\,e^{-\sqrt{n}}.$$ (7)

For more details on this result, and on some extensions of it to approximating other important functions, we refer the reader to [37, 18] and to chapter 4 from the book [29].

Moreover, it was shown in [13] that the following inequality holds for all $n$:

$$E_{n,n}(|x|;[-1,1])\ \geqslant\ e^{-\pi\sqrt{n+1}},$$ (8)

and in [40] that there exist positive constants $M_1$ and $M_2$ so that the following holds for all values of $n$:

$$M_1\ \leqslant\ e^{\pi\sqrt{n}}\,E_{n,n}(|x|;[-1,1])\ \leqslant\ M_2.$$ (9)

Finally, years later, in the contributions [34, 35], the following result was proven:

$$\lim_{n\to\infty}e^{\pi\sqrt{n}}\,E_{n,n}(|x|;[-1,1])=8.$$ (10)

The existence of the limit in (10) (with exact value 8) had been conjectured in [39] on the basis of extensive numerical calculations. The limit can be considered as the analogue of Bernstein's constant in (3), this time for the best rational approximation of $|x|$ on $[-1,1]$. To the best of our knowledge, the formula in (10) represents the sharpest attainable result.

### 2.1 Newman’s rational approximant

In his groundbreaking paper from 1964 [27], Newman not only provided bounds for the best rational approximation of $|x|$, but also showed how to compute a rational approximant explicitly.

Let $n$ be a positive integer and let $0<\alpha<1$. Introduce the polynomial $p(x)$ and the rational function $f(x)$ as

$$p(x)=(x+\alpha)(x+\alpha^2)\cdots(x+\alpha^{n-1}),\qquad f(x)=\frac{p(-x)}{p(x)}.$$ (11)

In [27], the following bound was proven (for positive $x$ in the relevant range):

$$|f(x)|=\left|\frac{p(-x)}{p(x)}\right|\leq e^{\frac{2(\alpha-\alpha^{n})}{\ln(\alpha)}}.$$ (12)

To prove this, one uses the fact that $\frac{1-t}{1+t}\leq e^{-2t}$ for $t\geq0$, and that the resulting exponents can be summed as a geometric series.

Now, choosing $\alpha=e^{-1/\sqrt{n}}$, it follows that $\ln(\alpha)=-\frac{1}{\sqrt{n}}$. Also note that $\alpha^n=e^{-\sqrt{n}}$.

Hence, from (12) it follows that $|f(x)|\leq e^{-2\sqrt{n}\,(\alpha-\alpha^n)}$, which is exponentially small in $\sqrt{n}$. Finally, the order-$n$ Newman approximant is written as

$$R_{\mathrm{New}}(x)=x\,\frac{p(x)-p(-x)}{p(x)+p(-x)}=\frac{x\left[\prod_{k=1}^{n-1}(x+\alpha^k)-\prod_{k=1}^{n-1}(-x+\alpha^k)\right]}{\prod_{k=1}^{n-1}(x+\alpha^k)+\prod_{k=1}^{n-1}(-x+\alpha^k)}.$$ (13)

The resulting error on the positive subinterval, written $\gamma(x)=x-R_{\mathrm{New}}(x)$, satisfies the following chain of inequalities (for $x$ with $0\leq f(x)<1$):

$$\gamma(x)=x-R_{\mathrm{New}}(x)=\frac{2x\,\frac{p(-x)}{p(x)}}{1+\frac{p(-x)}{p(x)}}=\frac{2x\,f(x)}{1+f(x)}\ \leqslant\ \frac{2x\,f(x)}{1-f(x)}\ \leqslant\ \frac{2f(x)}{1-f(x)},$$ (14)

and using the bound on $f(x)$ from (12), it follows that

$$\gamma(x)<3\,e^{-\sqrt{n}}.$$ (15)

With some additional reasoning, one can extend the above upper bound, as shown in [27], to the whole interval $[0,1]$ and, for symmetry reasons, to $[-1,0]$ as well.
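To make the construction concrete, the approximant (13) and the bound (15) can be checked numerically; below is a short sketch (the function name and the test grid are ours, not part of [27]):

```python
import numpy as np

def newman(x, n):
    # Newman's approximant (13) of |x| on [-1, 1], with alpha = exp(-1/sqrt(n))
    # and p(x) = (x + alpha)(x + alpha^2) ... (x + alpha^(n-1)) as in (11).
    x = np.asarray(x, dtype=float)
    alpha = np.exp(-1.0 / np.sqrt(n))
    powers = alpha ** np.arange(1, n)            # alpha, alpha^2, ..., alpha^(n-1)
    p = lambda t: np.prod(t[..., None] + powers, axis=-1)
    return x * (p(x) - p(-x)) / (p(x) + p(-x))

# Sanity check of the bound (15) on a dense grid
n = 25
xs = np.linspace(-1.0, 1.0, 20001)
max_err = np.max(np.abs(np.abs(xs) - newman(xs, n)))
assert max_err <= 3.0 * np.exp(-np.sqrt(n))      # Newman's bound holds
```

The observed maximum error decays roughly like $e^{-\sqrt{n}}$, in line with the theory above.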

###### Remark 2.1.

The Newman approximant in (13) is indeed interpolatory. It can thus be obtained using the Loewner framework in [24], with the associated interpolation pairs

$$\{(0,0),\ (-\alpha,\alpha),\ (-\alpha^2,\alpha^2),\ \ldots,\ (-\alpha^{n-1},\alpha^{n-1})\}.$$ (16)

## 3 The main approximation methods

### 3.1 Linear dynamical systems

A linear dynamical system is characterized by a set of differential equations provided below:

$$\Sigma:\ \ E\dot{x}(t)=Ax(t)+Bu(t),\qquad y(t)=Cx(t),$$ (17)

where $E,A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times1}$, and $C\in\mathbb{R}^{1\times n}$. The dimension of the system $\Sigma$ is denoted by $n$ and represents the length of the internal variable $x(t)$. The control input is denoted by $u(t)$, while the observed output is $y(t)$. The matrix $E$ is typically considered to be invertible and hence can be assimilated into the matrices $A$ and $B$ as follows: $A\leftarrow E^{-1}A$, $B\leftarrow E^{-1}B$. By applying the Laplace transform directly to the differential equation in (17), one can write

$$sEX(s)=AX(s)+BU(s)\ \Rightarrow\ X(s)=(sE-A)^{-1}BU(s)\ \Rightarrow\ Y(s)=C(sE-A)^{-1}BU(s).$$ (18)

Hence, the transfer function is the ratio of the output $Y(s)$ to the input $U(s)$:

$$H(s)=\frac{Y(s)}{U(s)}=C(sE-A)^{-1}B.$$ (19)

Model reduction generically refers to a class of methodologies that aim at, in the case of linear systems, constructing reduced-order linear systems of the form:

$$\hat{\Sigma}:\ \ \hat{E}\dot{\hat{x}}(t)=\hat{A}\hat{x}(t)+\hat{B}u(t),\qquad \hat{y}(t)=\hat{C}\hat{x}(t)+\hat{D}u(t),$$ (20)

where $\hat{E},\hat{A}\in\mathbb{R}^{r\times r}$, $\hat{B}\in\mathbb{R}^{r\times1}$, $\hat{C}\in\mathbb{R}^{1\times r}$, and $\hat{D}\in\mathbb{R}$. Such systems in (20) are used to replace those in (17) for numerically expensive tasks such as real-time simulation, optimal control, etc. The dimension $r$ of the reduced-order model is typically much lower than the dimension $n$ of the system $\Sigma$.

For linear dynamical systems such as the one in (17), the general idea is to start from data sets that contain frequency response measurements. The goal, as will be made clear in the next section, is to find rational functions that (approximately) interpolate these measurements. More precisely, from a system-theoretical viewpoint, we seek a reduced-order linear dynamical system that models these measurements.

For more details on state-of-the-art MOR methods and an extensive historical account of the existing literature, we refer the reader to [6]. Extensions to nonlinear systems can be found in [7], while extensions to parametric systems are presented in [10]. For a recent account of the developments of MOR (algorithms and theory), see [9]. The case of interpolation-based MOR methods is extensively and exhaustively treated in [2].

### 3.2 The Loewner framework

The Loewner framework (LF) is a data-driven model identification and reduction tool introduced in [24]. It provides a solution through rational approximation by means of interpolation. For more details, see the extensive tutorial paper [3], the Ph.D. dissertation in [20] that addresses connections of the LF to Lagrange rational interpolation, the Ph.D. dissertation in [19] that addresses extensions of the LF to some special classes of nonlinear dynamical systems, and the recent book chapter [22].

The set-up of the LF is as follows: we are given sampling points, together with function evaluations at these points. The full set of measurement pairs is denoted by $\mathcal{D}$.

Assume that the number of measurements $N$ is an even positive integer and let $k=N/2$. The given data is first partitioned into the following two disjoint subsets:

$$\mathcal{D}=\mathcal{L}\cup\mathcal{R},\ \ \text{where}\ \ \begin{cases}\text{right data: }\mathcal{R}=\{(\lambda_i;\ w_i)\mid i=1,\dots,k\},\\[2pt]\text{left data: }\ \ \mathcal{L}=\{(\mu_j;\ v_j)\mid j=1,\dots,k\}.\end{cases}$$ (21)

The elements of the subsets $\mathcal{R}$ and $\mathcal{L}$ are as follows (for $1\leq i,j\leq k$):

1. $\lambda_i$ and $\mu_j$ are the right and, respectively, the left sample points.

2. $w_i$ and $v_j$ are the right and, respectively, the left sample values.

The problem can be formulated as follows: find a rational function $R(x)$ of appropriate order, such that the following interpolation conditions are (approximately) fulfilled:

$$R(\lambda_i)=w_i,\qquad R(\mu_j)=v_j.$$ (22)
###### Remark 3.1.

Note that in a typical data-driven MOR setup, it is considered that the sample values represent evaluations of the transfer function (as defined in (19)) corresponding to an underlying linear dynamical model as in (17). More precisely, one would assume that $w_i=H(\lambda_i)$ and $v_j=H(\mu_j)$ for all $1\leq i,j\leq k$.

The next step is to arrange the data (21) into matrix format. The Loewner matrix and the shifted Loewner matrix are defined as follows

$$\mathbb{L}=\begin{bmatrix}\dfrac{v_1-w_1}{\mu_1-\lambda_1}&\cdots&\dfrac{v_1-w_k}{\mu_1-\lambda_k}\\ \vdots&\ddots&\vdots\\ \dfrac{v_k-w_1}{\mu_k-\lambda_1}&\cdots&\dfrac{v_k-w_k}{\mu_k-\lambda_k}\end{bmatrix},\qquad \mathbb{L}_s=\begin{bmatrix}\dfrac{\mu_1v_1-\lambda_1w_1}{\mu_1-\lambda_1}&\cdots&\dfrac{\mu_1v_1-\lambda_kw_k}{\mu_1-\lambda_k}\\ \vdots&\ddots&\vdots\\ \dfrac{\mu_kv_k-\lambda_1w_1}{\mu_k-\lambda_1}&\cdots&\dfrac{\mu_kv_k-\lambda_kw_k}{\mu_k-\lambda_k}\end{bmatrix},$$ (23)

while the data vectors are introduced as

$$V=\begin{bmatrix}v_1&v_2&\cdots&v_k\end{bmatrix}^T,\qquad W=\begin{bmatrix}w_1&w_2&\cdots&w_k\end{bmatrix}.$$ (24)

Additionally, introduce the diagonal matrices containing the sample points as

$$M=\mathrm{diag}(\mu_1,\dots,\mu_k),\qquad \Lambda=\mathrm{diag}(\lambda_1,\dots,\lambda_k),$$ (25)

and the vectors of ones $L=\begin{bmatrix}1&\cdots&1\end{bmatrix}^T\in\mathbb{R}^{k\times1}$ and $R=\begin{bmatrix}1&\cdots&1\end{bmatrix}\in\mathbb{R}^{1\times k}$.

The following relations between the Loewner and the shifted Loewner matrices were proven to hold (see, for example, [24]):

$$\mathbb{L}_s=\mathbb{L}\Lambda+VR,\qquad \mathbb{L}_s=M\mathbb{L}+LW.$$ (26)

It was also shown in [24] that the Loewner matrices satisfy the following Sylvester equations:

$$M\mathbb{L}-\mathbb{L}\Lambda=VR-LW,\qquad M\mathbb{L}_s-\mathbb{L}_s\Lambda=MVR-LW\Lambda,$$ (27)

where $L\in\mathbb{R}^{k\times1}$ and $R\in\mathbb{R}^{1\times k}$ are the vectors of ones introduced above.

###### Lemma 1.

If the matrix pencil $(\mathbb{L}_s,\mathbb{L})$ is regular, then one can directly recover the system matrices and the interpolating function. The Loewner model is composed of

$$E=-\mathbb{L},\quad A=-\mathbb{L}_s,\quad B=V,\quad C=W\ \ \Rightarrow\ \ H(s)=W(\mathbb{L}_s-s\mathbb{L})^{-1}V.$$ (28)

Introduce $\mathcal{C}$ as the generalized controllability matrix, written in terms of the right sampling points and the matrices $(E,A,B)$. Additionally, let $\mathcal{O}$ be the generalized observability matrix, written in terms of the left sampling points and the matrices $(E,A,C)$:

$$\mathcal{O}=\begin{bmatrix}C(\mu_1E-A)^{-1}\\ \vdots\\ C(\mu_kE-A)^{-1}\end{bmatrix},\qquad \mathcal{C}=\begin{bmatrix}(\lambda_1E-A)^{-1}B&\cdots&(\lambda_kE-A)^{-1}B\end{bmatrix}.$$ (29)

Then, it follows that the data matrices can be factorized as

$$\mathbb{L}=-\mathcal{O}E\mathcal{C},\qquad \mathbb{L}_s=-\mathcal{O}A\mathcal{C},\qquad V=\mathcal{O}B,\qquad W=C\mathcal{C}.$$ (30)

In practical applications, the pencil $(\mathbb{L}_s,\mathbb{L})$ is often singular. In these cases, one performs a rank-revealing singular value decomposition (SVD) of the Loewner matrix $\mathbb{L}$. By setting a truncation order $r<k$, write

$$\mathbb{L}=XSY^*\approx X_rS_rY_r^*,\quad\text{with}\ \ X_r,Y_r\in\mathbb{C}^{k\times r},\ S_r\in\mathbb{C}^{r\times r}.$$ (31)
###### Lemma 2.

The system matrices corresponding to a compressed/projected Loewner model, for which the transfer function approximately matches the conditions in (22), can be computed as

$$\hat{E}_{\mathrm{Loew}}=-X_r^*\mathbb{L}Y_r,\quad \hat{A}_{\mathrm{Loew}}=-X_r^*\mathbb{L}_sY_r,\quad \hat{B}_{\mathrm{Loew}}=X_r^*V,\quad \hat{C}_{\mathrm{Loew}}=WY_r.$$ (32)

The rational function corresponding to the reduced-order Loewner model can be computed in the following way

$$R_{\mathrm{Loew}}(x)=\hat{C}_{\mathrm{Loew}}\big(x\hat{E}_{\mathrm{Loew}}-\hat{A}_{\mathrm{Loew}}\big)^{-1}\hat{B}_{\mathrm{Loew}}=WY_r\big(X_r^*(\mathbb{L}_s-x\mathbb{L})Y_r\big)^{-1}X_r^*V.$$ (33)
###### Remark 3.2.

Provided that the reduced-order matrix $\hat{E}_{\mathrm{Loew}}$ in (32) is invertible, one can incorporate it into the other system matrices. In this way, we put together an equivalent model in "standard" representation:

$$\tilde{A}_{\mathrm{Loew}}=\hat{E}_{\mathrm{Loew}}^{-1}\hat{A}_{\mathrm{Loew}}=(X_r^*\mathbb{L}Y_r)^{-1}(X_r^*\mathbb{L}_sY_r),\qquad \tilde{B}_{\mathrm{Loew}}=\hat{E}_{\mathrm{Loew}}^{-1}\hat{B}_{\mathrm{Loew}}=-(X_r^*\mathbb{L}Y_r)^{-1}(X_r^*V),\qquad \tilde{C}_{\mathrm{Loew}}=\hat{C}_{\mathrm{Loew}}=WY_r.$$
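The pipeline of this section — building the Loewner matrices (23), truncating via the SVD as in (31)-(32), and evaluating the rational function (33) — fits in a few lines of numpy. The following is a minimal sketch for real-valued data (the function names are ours; it omits the safeguards a production implementation would use):

```python
import numpy as np

def loewner_fit(mu, v, lam, w, r):
    # Build the Loewner matrices (23) from left data (mu, v) and right
    # data (lam, w), truncate via the SVD, and return the rational
    # approximant (33) as a callable.
    Lw = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) \
         / (mu[:, None] - lam[None, :])
    X, _, Yh = np.linalg.svd(Lw)
    Xr, Yr = X[:, :r], Yh[:r, :].conj().T
    E, A = -Xr.conj().T @ Lw @ Yr, -Xr.conj().T @ Ls @ Yr
    B, C = Xr.conj().T @ v, w @ Yr
    return lambda x: float((C @ np.linalg.solve(x * E - A, B)).real)

# Demo: approximate |x| from 40 samples on [-1, 1] (0 excluded),
# using an alternating left/right partition of the data
pts = np.delete(np.linspace(-1.0, 1.0, 41), 20)
vals = np.abs(pts)
R = loewner_fit(pts[0::2], vals[0::2], pts[1::2], vals[1::2], r=12)
```

The alternating split above is one of the three partitioning options studied in Section 4; changing the slicing changes the partition.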

#### 3.2.1 Extension to odd number of measurements

The Loewner framework does not necessarily require the data set in (21) to contain an even number of measurements. Moreover, the Loewner framework was recently extended in [5] to cope with transfer functions of rectangular systems.

Assume now that the number of measurements is odd, i.e., equal to $2k+1$; we hence have one extra measurement pair, denoted by $(\lambda_{k+1};\ w_{k+1})$, which is to be included in the right data set in (21). The updated Loewner matrices $\breve{\mathbb{L}}$ and $\breve{\mathbb{L}}_s$ can be written in terms of the original Loewner matrices in (23) as follows:

$$\breve{\mathbb{L}}=\begin{bmatrix}\mathbb{L}&z_1\end{bmatrix}\in\mathbb{C}^{k\times(k+1)},\quad z_1\in\mathbb{C}^{k},\quad z_1(j)=\frac{v_j-w_{k+1}}{\mu_j-\lambda_{k+1}},\qquad \breve{\mathbb{L}}_s=\begin{bmatrix}\mathbb{L}_s&z_2\end{bmatrix}\in\mathbb{C}^{k\times(k+1)},\quad z_2\in\mathbb{C}^{k},\quad z_2(j)=\frac{\mu_jv_j-\lambda_{k+1}w_{k+1}}{\mu_j-\lambda_{k+1}}.$$ (34)

Additionally, the updated data vectors are given as

$$\breve{W}=\begin{bmatrix}W&w_{k+1}\end{bmatrix}\in\mathbb{C}^{1\times(k+1)},\qquad \breve{V}=V.$$ (35)
###### Remark 3.3.

Note that the procedure discussed in this section will further be used in Section 4, which includes the numerical results. There, the origin pair $(0,0)$ will be added to the set of points chosen inside the interval $[-b,-a]\cup[a,b]$, where $0<a<b\leq1$. The reason for adding it is to decrease the overall approximation error.
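As a small illustration, the bordering in (34) can be coded directly: given existing square Loewner matrices, one extra right pair is appended as a new column (a sketch; the function name is ours):

```python
import numpy as np

def append_right_pair(Lw, Ls, mu, v, lam_new, w_new):
    # Append one extra right measurement pair (lam_new, w_new), following (34).
    # mu, v hold the k left points/values; the result is a pair of
    # rectangular k x (k+1) Loewner matrices.
    z1 = (v - w_new) / (mu - lam_new)
    z2 = (mu * v - lam_new * w_new) / (mu - lam_new)
    return np.column_stack([Lw, z1]), np.column_stack([Ls, z2])
```

Appending the pair $(0,0)$ in this way is one possible realization of the extension used in Section 4.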

#### 3.2.2 Extension to identical left and right data sets

The Loewner framework does not necessarily impose disjoint left and right data subsets in (21). As explained in the original paper [24], one can modify the classical Loewner framework by including information about the derivative of the underlying "to-be-approximated" function $F$.

Here we show how to treat the case of identical subsets, i.e., $\mu_i=\lambda_i$ and $v_i=w_i$ for all $i$. We adapt the definition of the Loewner matrices in (23), as was also proposed in [8] (for the TF-IRKA iterative procedure):

$$\mathbb{L}_{ij}=\begin{cases}\dfrac{v_i-w_j}{\lambda_i-\lambda_j},&i\neq j,\\[4pt]F'(\lambda_i),&i=j,\end{cases}\qquad (\mathbb{L}_s)_{ij}=\begin{cases}\dfrac{\lambda_iv_i-\lambda_jw_j}{\lambda_i-\lambda_j},&i\neq j,\\[4pt]F(\lambda_i)+\lambda_iF'(\lambda_i),&i=j.\end{cases}$$ (36)

###### Remark 3.4.

Note that the procedure discussed in this section will be used in Section 4, which includes the numerical results. There, the special case of data partitioning that will be referred to as "Loewner same" will be based on this procedure.

### 3.3 The AAA algorithm

The AAA (Adaptive Antoulas Anderson) algorithm proposed in [26] represents an adaptive extension of the method originally introduced by A.C. Antoulas and B.D.O. Anderson in [4].

The AAA algorithm is a data-driven method that requires as input only evaluations of a univariate function on a set of points denoted by $\mathcal{T}$. Extensions of the algorithm to multivariate functions have been recently proposed in [33].

Assume as given $N$ sampling points $\tau_i$ together with function evaluations at these points, denoted by $f_i=F(\tau_i)$, i.e., the data set $\{(\tau_i;\ f_i)\mid i=1,\dots,N\}$.

The AAA algorithm is based on an iteration. Assume we are at step $\ell$. The order-$\ell$ rational approximant that is constructed through the algorithm in [26] is first written in the following barycentric format:

$$R_{\mathrm{AAA}}(x)=\frac{\displaystyle\sum_{k=0}^{\ell}\frac{\alpha_kw_k}{x-\nu_k}}{\displaystyle\sum_{k=0}^{\ell}\frac{\alpha_k}{x-\nu_k}},$$ (37)

where $\nu_k$ are the interpolation (support) points selected from the set $\mathcal{T}$, $w_k=F(\nu_k)$ are the corresponding function evaluations, and $\alpha_k$ are the weights. The method enforces interpolation at the support points, i.e., $R_{\mathrm{AAA}}(\nu_k)=w_k$.

One needs to determine suitable weights $\alpha_k$ by formulating the approximation problem as follows:

$$\min_{\alpha_0,\dots,\alpha_\ell}\sum_{i=1}^{N}\big(R_{\mathrm{AAA}}(\tau_i)-F(\tau_i)\big)^2\ \Leftrightarrow\ \min_{\alpha_0,\dots,\alpha_\ell}\sum_{i=1}^{N-\ell}\Bigg(\frac{\sum_{k=0}^{\ell}\frac{\alpha_kw_k}{\tau_i-\nu_k}}{\sum_{k=0}^{\ell}\frac{\alpha_k}{\tau_i-\nu_k}}-f_i\Bigg)^2\ \Leftrightarrow\ \min_{\alpha_0,\dots,\alpha_\ell}\sum_{i=1}^{N-\ell}\Bigg(\frac{\sum_{k=0}^{\ell}\frac{\alpha_k(w_k-f_i)}{\tau_i-\nu_k}}{\sum_{k=0}^{\ell}\frac{\alpha_k}{\tau_i-\nu_k}}\Bigg)^2,$$ (38)

where the reduced sums run over the points $\tau_i\in\mathcal{T}$ that are not support points (the interpolated points contribute zero error).

Instead of solving the more general problem above (which is nonlinear in the variables $\alpha_k$), one solves a relaxed problem obtained by linearization. This procedure leads to a least-squares minimization problem, i.e.,

$$\min_{\alpha_0,\dots,\alpha_\ell}\ \sum_{i=1}^{N-\ell}\Bigg(\sum_{k=0}^{\ell}\frac{\alpha_k(f_i-w_k)}{\tau_i-\nu_k}\Bigg)^2.$$ (39)

Introduce the Loewner matrix $\mathbb{L}\in\mathbb{R}^{(N-\ell)\times(\ell+1)}$ with entries $\mathbb{L}_{ik}=\frac{f_i-w_k}{\tau_i-\nu_k}$. Additionally, let $\alpha=\begin{bmatrix}\alpha_0&\cdots&\alpha_\ell\end{bmatrix}^T$ with $\|\alpha\|_2=1$. The minimization problem is written as

$$\min_{\|\alpha\|_2=1}\|\mathbb{L}\alpha\|_2^2.$$ (40)

The solution is given by the last right singular vector of the matrix $\mathbb{L}$ (the one corresponding to the smallest singular value). The next interpolation point is selected by means of a greedy approach, i.e., at the location in $\mathcal{T}$ where the current error is maximal.

Assume that, at the end of the iteration, the rational approximant is of order $r$. Hence, one can also explicitly write this function in realization form, as was done in the Loewner framework for the function in (33), i.e.,

$$R_{\mathrm{AAA}}(x)=\hat{C}_{\mathrm{AAA}}\big(x\hat{E}_{\mathrm{AAA}}-\hat{A}_{\mathrm{AAA}}\big)^{-1}\hat{B}_{\mathrm{AAA}},$$ (41)

where the matrices are of (conforming) dimension $(r+2)$ and are given explicitly in terms of the selected support points $\nu_k$, the function evaluations $w_k$, and the weights $\alpha_k$:

$$\hat{A}_{\mathrm{AAA}}=\mathrm{diag}\big([\nu_0,\nu_1,\dots,\nu_r,1]\big)-\hat{B}_{\mathrm{AAA}}e_{r+2}^T,\qquad \hat{B}_{\mathrm{AAA}}=\begin{bmatrix}\alpha_0&\cdots&\alpha_r&1\end{bmatrix}^T,\qquad \hat{E}_{\mathrm{AAA}}=\mathrm{diag}\big([1,1,\dots,1,0]\big),\qquad \hat{C}_{\mathrm{AAA}}=\begin{bmatrix}w_0&\cdots&w_r&0\end{bmatrix},\qquad e_{r+2}=\begin{bmatrix}1&1&\cdots&1\end{bmatrix}^T.$$ (42)

Note that for the numerical examples presented in Section 4, we will be using the AAA implementation available in the Chebfun toolbox [15].

### 3.4 The minimax algorithm

Computing the best polynomial approximation is a classical problem that was solved many decades ago by Evgeny Yakovlevich Remez, who proposed an iterative algorithm (see [32]). A robust implementation of the linear version of the Remez algorithm was recently proposed in [28] (the code is available in the Chebfun toolbox [15]). Switching to rational approximation, note that there exist algorithms which address best rational approximation: the rational Remez algorithm and the differential correction (DC) algorithm. In [16], a new robust/optimized algorithm is proposed, based on the adaptive construction of rational approximants in barycentric representation.

The problem addressed in the contribution [16] is to approximate functions by type-$(m,n)$ rational approximants with real coefficients from the set $\mathcal{R}_{m,n}$.

Given the function $f$ and the integers $m$ and $n$, the problem is formulated as follows: find $\bar{r}\in\mathcal{R}_{m,n}$ so that the infinity norm $\|f-\bar{r}\|_\infty$ is minimized. It is known that such a minimizer exists and is unique (see Chapter 24 in [36]).

Denote by $\bar{r}$ the best (minimax) approximation of the function $f$ (the solution to the above problem). It follows that there exists a sequence of ordered nodes where the error function $f-\bar{r}$ takes its global extreme value over the interval (with alternating signs). This sequence is denoted by $x_0<x_1<\cdots<x_{m+n+1}$, hence

$$f(x_k)-\bar{r}(x_k)=(-1)^{k+1}\,\|f-\bar{r}\|_\infty,\qquad k\in\{0,1,\dots,m+n+1\}.$$ (43)

Computing rational minimax approximations is a challenging task especially when there are singularities on or near the interval of approximation. In the recent contribution [16], the authors propose different combinations of robust algorithms that are based on rational barycentric representations. One key feature is that the support points are chosen in an adaptive fashion as the approximant is computed (as for the AAA algorithm in [26]).

In the numerical examples presented in this contribution, we will be using the Chebfun minimax implementation available in the toolbox [15]. As discussed in [16], the method is based on an AAA-Lawson scheme of iteratively re-weighted least squares, followed by a classical Remez algorithm to enforce generically quadratic convergence.

In Fig. 1, we depict the magnitude of the approximation error for the best order (28,28) approximant (computed by means of the minimax algorithm in [16]).

Next, in Fig. 2, we depict the maximum approximation error for the best order (n,n) approximant computed by means of the minimax algorithm in [16], for a range of orders $n$. Additionally, the bounds proposed by Newman, Bulanov, and Stahl are shown in the same figure.

### 3.5 Computation of the maximum error

In this section we provide a brief description of the methodology used to compute the maximum error of a rational function $R(x)$ when approximating $|x|$ on $[-1,1]$. Such a function could be computed either with the Loewner framework introduced in Section 3.2, with the AAA algorithm in Section 3.3, or with the minimax algorithm in Section 3.4. For the latter, the maximum error is computed as the absolute value of the rational minimax function evaluated at $x=0$.

Now, for the rational functions computed with the Loewner and AAA approaches, we consider the matrices $(\hat{E},\hat{A},\hat{B},\hat{C})$ to form a realization of the function $R(x)$. For the Loewner framework, these matrices can be computed as in (32), while for the AAA algorithm, the matrices in (42) are used instead.

Given a rational function $R(x)$, we compute the maximum approximation error $\epsilon_R=\max_{x\in[-1,1]}\big|\,|x|-R(x)\,\big|$. Introduce $\epsilon_R^-$ and $\epsilon_R^+$ as the largest errors attained at interior extrema in $(-1,0)$ and $(0,1)$, respectively. Finally, for $x\in\{-1,0,1\}$, let $\epsilon_R^{x}=\big|\,|x|-R(x)\,\big|$. Consequently, it follows that

$$\epsilon_R=\max\big(\epsilon_R^-,\ \epsilon_R^+,\ \epsilon_R^{-1},\ \epsilon_R^{1},\ \epsilon_R^{0}\big).$$ (44)

The challenging part is computing $\epsilon_R^-$ and $\epsilon_R^+$, since the other three quantities are simple evaluations of the error at $x\in\{-1,0,1\}$. The main idea for computing $\epsilon_R^-$ and $\epsilon_R^+$ is as follows: determine all extrema of the error function $x\mapsto x-R(x)$ by solving a generalized eigenvalue problem, and select only the extrema that lie on the real axis between $-1$ and $1$.

We now go through the procedure for finding $\epsilon_R^+$ (finding $\epsilon_R^-$ is derived in a similar manner). Finding all extrema of the function $x-R(x)$ on $(0,1)$ is equivalent to finding the zeros of its derivative, the rational function $1-R'(x)$, on $(0,1)$. This can be formulated as a generalized eigenvalue problem for a pencil built from the realization matrices as follows:

$$\bar{E}=\begin{bmatrix}\hat{E}&0\\0&\hat{E}\end{bmatrix},\quad \bar{A}=\begin{bmatrix}\hat{A}&\hat{E}\\0&\hat{A}\end{bmatrix},\quad \bar{B}=\begin{bmatrix}0\\\hat{B}\end{bmatrix},\quad \bar{C}=\begin{bmatrix}\hat{C}&0\end{bmatrix},\quad \bar{D}=1,\quad \bar{X}=\begin{bmatrix}\bar{A}&\bar{B}\\\bar{C}&\bar{D}\end{bmatrix}.$$ (45)

Indeed, the descriptor system $(\bar{E},\bar{A},\bar{B},\bar{C},\bar{D})$ realizes $1-R'(x)$, and its zeros are the finite eigenvalues of the pencil formed by $\bar{X}$ and $\mathrm{diag}(\bar{E},0)$.

Finally, from the computed eigenvalues, select the ones lying in $(0,1)$, and pick the one at which the evaluation of $|x-R(x)|$ is maximal. This is precisely the value we were looking for, i.e., $\epsilon_R^+$. In order to compute $\epsilon_R^-$, we repeat the above procedure, choosing the interval $(-1,0)$ instead.
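As a cross-check of the eigenvalue-based procedure, the extrema can also be located by bracketing sign changes of the derivative on a fine grid and then bisecting. The following simple stand-in (ours, not the paper's method) is often sufficient in practice:

```python
import numpy as np

def max_error_on(R, a, b, n_grid=4096, refine=60):
    # Locate local extrema of g(x) = |x| - R(x) on [a, b] by bracketing sign
    # changes of a central-difference derivative, then bisecting each bracket.
    # Endpoint values are included, mirroring the role of the endpoint
    # evaluations in (44).
    xs = np.linspace(a, b, n_grid)
    h = (b - a) / (10.0 * n_grid)
    g = lambda x: np.abs(x) - R(x)
    dg = lambda x: (g(x + h) - g(x - h)) / (2.0 * h)
    best = max(abs(g(a)), abs(g(b)))
    d = dg(xs)
    for i in np.flatnonzero(np.sign(d[:-1]) != np.sign(d[1:])):
        lo, hi = xs[i], xs[i + 1]
        for _ in range(refine):                  # bisection on the derivative
            mid = 0.5 * (lo + hi)
            if np.sign(dg(mid)) == np.sign(dg(lo)):
                lo = mid
            else:
                hi = mid
        best = max(best, abs(g(0.5 * (lo + hi))))
    return best
```

For example, with $R(x)=x^2$ the routine returns $0.25$, the maximum error of $x^2$ against $|x|$ on $[-1,1]$, attained at $x=\pm\tfrac12$.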

## 4 Numerical experiments

In this section we report on a variety of numerical experiments performed for approximating the absolute value function on the interval $[-1,1]$. Both the Loewner framework and the AAA algorithm are data-driven methods, and hence they require measurements. We consider several different types of sampling points, and also different data partitioning techniques for the Loewner framework. Additionally, we provide comparisons to the classical rational approximation bounds derived in the previous century, which were also mentioned in Section 2.

The sampling points are selected to follow five different distributions:

- Linearly-spaced ("linspace") points;
- Logarithmically-spaced ("logspace") points;
- Chebyshev points;
- Zolotarev points;
- Newman points. (46)

In Fig. 3, we depict four categories of sampling points (linspace, logspace, Chebyshev, and Zolotarev) in two sub-figures. In the first one (left), four curves are plotted: the x axis represents the sampling interval, while the y axis shows the number of points (in total, 1024 points were chosen). In the second one (right), we depict 16 points for each of the four categories with colored symbols (the colors of the curves coincide with those of the symbols and will remain consistent throughout the remainder of the manuscript).

Note that the partitioning of data into left and right data sets (see (21)) is the first step in the Loewner framework and a very crucial one indeed. In what follows, we use three data partition techniques for the Loewner framework:

- split (disjoint intervals, e.g., negative and positive);
- alternating (every second point goes left and every second point goes right);
- same (the left and right points are the same). (47)

In Fig. 4, we depict the three partitioning options for an example of 16 linearly spaced points in the interval $[-1,1]$.

Let $0<a<b\leq1$ and choose $2k$ interpolation points inside $[-b,-a]\cup[a,b]$. More precisely, half of the points are chosen inside the interval $[a,b]$, and the other half inside $[-b,-a]$.

In what follows, we apply the Loewner framework to the five types of sample points mentioned in (46). For each of the different point selections, we apply the three different partitioning schemes proposed in (47).

Note that the dimension of the Loewner matrices is $k\times k$, and the Loewner rational approximants are of order $(r-1,r)$, while the AAA approximant is of order $(r,r)$. We will choose the reduction order $r=28$ for the following experiments.

### 4.1 Linearly-spaced points

For the first case, we consider interpolation points that are linearly spaced within the interval $[a,b]$, for given $0<a<b\leq1$. Let $n$ be an integer and define the positive "linspace" points as

$$p_1^{\mathrm{lin}}=a,\quad p_2^{\mathrm{lin}}=a+\frac{b-a}{n-1},\quad\dots,\quad p_n^{\mathrm{lin}}=a+\frac{(n-1)(b-a)}{n-1}=b.$$ (48)

Note that these points span the subinterval $[a,b]$ of the interval $[0,1]$. Additionally, for symmetry reasons, the negative points are also considered, i.e., $-p_1^{\mathrm{lin}},\dots,-p_n^{\mathrm{lin}}$.
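For concreteness, the symmetric sample set of this subsection can be generated as follows (a tiny helper; the name is ours):

```python
import numpy as np

def linspace_points(a, b, n):
    # Positive "linspace" points of (48) plus their mirrored negatives,
    # returning the 2n symmetric sample points (the pair (0, 0) is appended
    # separately, as discussed in Section 3.2.1).
    p = np.linspace(a, b, n)          # p_1 = a, ..., p_n = b
    return np.concatenate([-p[::-1], p])
```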

In Fig. 5, we show the singular value decay of the Loewner matrices computed by means of the three proposed partition techniques in (47) (left sub-figure). Moreover, the original absolute value function, together with the three rational approximants computed with the Loewner framework (of order (27,28)) as well as the order (28,28) approximant computed by means of AAA, is depicted in the right sub-figure of Fig. 5.

In Fig. 6, we depict the approximation error produced by the four rational functions used to approximate $|x|$. In the right sub-figure, the magnitude of the errors is shown, while in the left sub-figure, all four error curves are depicted on the same plot. We observe that, in all four cases, there is a visible peak at $x=0$.

For the next experiment, we include the pair $(0,0)$ in the set of measurement pairs. We apply again the Loewner framework for the three different partitioning options and compute the corresponding rational approximants (together with the one computed using AAA). We depict the magnitudes of all four error curves in Fig. 7.

Finally, by varying the reduction order $r$ from 6 to 40, we compute approximants by means of the four methods (note that the pair $(0,0)$ is again added here). We collect the maximum approximation errors for each method, computed by means of the procedure presented in Section 3.5. Additionally, we compute the Newman, Bulanov, and Stahl bounds presented in Section 2 (for each value of $r$). The results are depicted in Fig. 8.

Clearly, adding the pair $(0,0)$ to the original set of measurement pairs improves the accuracy of the approximation. For example, for the "Loewner same" method, the approximation error decreased noticeably.

Throughout the rest of the numerical experiments presented in this section, it will be assumed that 0 is included in the set of sample points.

### 4.2 Logspace points

In this section we consider interpolation points that are logarithmically spaced within the interval $[a,b]$, with $0<a<b\leq1$. Let $n$ be an integer and define the positive "logspace" points