Relative-error stability of numerical algorithms

09/22/2021
by   Carlos Beltrán, et al.
Universidad de Cantabria

We formalize the definition of a stable algorithm that is (i) adapted to the use of multiple and variable precision arithmetic, (ii) sufficiently close to the actual practice of computing to be useful, and (iii) sufficiently robust from a mathematical point of view as to allow for the rigorous proof of theorems. This allows us to state some widely satisfied hypotheses, depending only on two functions f and g, under which the composition of a stable algorithm for f and a stable algorithm for g is a stable algorithm for the composition f ∘ g.


1 Introduction

This paper stems from a misleadingly simple question: what does it mean that an algorithm for computing a mathematical function is stable?

Let us start with a concise—but necessarily incomplete—overview of the modern history of this question. Stored-program computers were developed conceptually from the 1930s [35] and physically from the 1940s. As computers became increasingly common tools for engineers, mathematicians, and scientists, the reliability of their approximate computations became a topic of great interest. Especially in numerical linear algebra, the two related concepts of stability of an algorithm and condition of a problem have been simultaneously used to investigate this topic. Both ideas arguably originated in the 1940s with the pioneering work of von Neumann and Goldstine [37] and then Turing [36] on linear systems of equations; Turing is credited with introducing the term “condition number.” The study of stability was further developed in the 1950s and 1960s by Wilkinson, in the form of backward error analysis; his books [38, 39] contain many of his findings. Later decades saw early statements of a formal definition of stability: in this sense, de Jong [15] and Bunch [11] paved the way for our study. In the 1990s, Higham published the first edition of his monograph [21], which is still regarded as a cornerstone reference on the stability of algorithms in linear algebra. Trefethen and Bau also address the topic in several chapters of their equally celebrated book [34]. A classic reference for the theory of condition is the 1966 paper by Rice [30], where a general mathematical framework was built up. A more geometric approach to condition numbers is summarized in the monograph [12] by Bürgisser and Cucker. This viewpoint arose in the context of the intricate link between the complexity of algorithms and the condition of the problems they solve. For earlier developments in this area, we refer to Blum, Cucker, Shub, and Smale [8]. Finally, we note that stochastic approaches to stability and condition have also been proposed: Stewart’s stochastic perturbation theory [32], smoothed analysis [31], smoothed condition numbers [2], and mixed stochastic analysis [24].

To illustrate some of these ideas in more detail, consider the problem studied by Turing [36]: given a square, nonsingular matrix $A$ and a vector $b$, we want to solve $Ax = b$ for $x$. Suppose that the coefficients of $A$ come from, say, experimental measurements so that their precision is limited. Assuming exactness of subsequent computations, we can compare the solution of $Ax = b$ obtained using the “measured” value of $A$ to the ideal solution associated with the “true” value of $A$. How does the accuracy of the solution $x$ depend on the errors made in estimating $A$? The question soon evolved into two distinct, yet related, concepts, discussed next.

Condition: The intrinsic hardness of problems

Here, one seeks to measure the first-order sensitivity of the solution with respect to variations in the input parameters. This is captured by the condition number of the problem. For example, the relative condition number of matrix inversion is given by Turing’s famous expression $\kappa(A) = \|A\|\,\|A^{-1}\|$. It answers the question posed above by quantifying by how much a relative error in the coefficient matrix $A$ can amplify the relative error in the solution $x$. If the solution to $Ax = b$ is computed with fixed precision, the above condition number must be compared with the precision. The larger their product, the fewer digits of the approximate solution one can trust, irrespective of the computational method. Although it often features in analyses of approximated computations, we stress that the condition number is a property of a problem that is independent of the choice or even the existence of an algorithm.
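For illustration, the following small NumPy sketch (purely illustrative; the matrices and the perturbation size are arbitrary choices) perturbs the coefficient matrix of a well-conditioned and of an ill-conditioned linear system by the same relative amount and reports how much the relative error in the solution is amplified, roughly in proportion to $\kappa(A)$.

import numpy as np

rng = np.random.default_rng(0)

def error_amplification(A, b, rel_pert=1e-10):
    """Solve Ax = b with A and with a relatively perturbed copy of A."""
    x = np.linalg.solve(A, b)
    E = rng.standard_normal(A.shape)
    A_pert = A + rel_pert * np.linalg.norm(A) * E / np.linalg.norm(E)
    x_pert = np.linalg.solve(A_pert, b)
    return np.linalg.cond(A), np.linalg.norm(x_pert - x) / np.linalg.norm(x)

n = 8
b = np.ones(n)
well = np.eye(n) + 0.1 * rng.standard_normal((n, n))      # moderate condition number
Q1 = np.linalg.qr(rng.standard_normal((n, n)))[0]
Q2 = np.linalg.qr(rng.standard_normal((n, n)))[0]
ill = Q1 @ np.diag(10.0 ** -np.arange(n)) @ Q2             # condition number ~ 1e7
for A in (well, ill):
    kappa, err = error_amplification(A, b)
    print(f"cond(A) = {kappa:9.2e}   relative error in x = {err:9.2e}")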

Stability: The reliability of an algorithm

Here, one looks for a measure of how much an algorithm can be trusted when executed in finite-precision arithmetic: even if some problem instance is well-behaved in the sense that its condition number is small, it can happen that an algorithm does not yield a satisfactory answer. This is captured by the more qualitative concept of stability. An algorithm is unstable if it can produce answers whose error is substantially in excess of what one expects from the problem’s condition number; on the other hand, it is stable if this is guaranteed not to happen. A classic example of an unstable algorithm for the overdetermined least-squares problem $\min_x \|Ax - b\|_2$, where $A$ is a left-invertible rectangular matrix, consists of transforming it into the algebraically equivalent linear system of normal equations $A^T A x = A^T b$ and solving it by Gaussian elimination with full pivoting [7]. While both of these algorithms are stable, their composition nevertheless yields a disappointingly unstable algorithm for solving the original problem. The culprit is the condition number of the Gram matrix $A^T A$, which is the square of $\kappa(A)$ [21]. Fortunately, stable algorithms to solve $\min_x \|Ax - b\|_2$ exist, e.g., one can compute a reduced singular value decomposition $A = U \Sigma V^T$, solve the diagonal system $\Sigma y = U^T b$, and recover $x = V y$.
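The gap between these two approaches is easy to observe numerically. The sketch below is a minimal illustration (the dimensions, singular values, and random seed are arbitrary): it builds an ill-conditioned least-squares problem with a known solution and compares the normal-equations route with an SVD-based solve.

import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 8
# Ill-conditioned A with singular values 1, 1e-1, ..., 1e-7.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(10.0 ** -np.arange(n)) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true                                   # consistent system: exact solution is x_true

# Route 1: normal equations A^T A x = A^T b (unstable composition of two stable steps).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Route 2: reduced SVD of A (stable).
Ur, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((Ur.T @ b) / s)

for name, x in (("normal equations", x_normal), ("SVD", x_svd)):
    print(f"{name:16s} relative error = "
          f"{np.linalg.norm(x - x_true) / np.linalg.norm(x_true):8.2e}")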

While the formal definition of the condition number is well established [12, 30], the notion of numerical stability is subtler and contextually flexible. Modern-classic references such as [21, 34] consider three different definitions of increasing strength:

  1. A forward stable algorithm (also called weakly stable in [11]) gives an answer that is close to the actual exact answer for the given input. In the above example, a forward stable algorithm for $\min_x \|Ax - b\|_2$ finds $\hat{x} \approx x$, where $x$ is the exact solution. The approximate equality is allowed to hide a linear dependence on the condition number $\kappa(A)$.

  2. A mixed forward-backward stable algorithm (sometimes called numerically stable [15, 21]) gives an almost exact answer to a nearby problem. In the example above, a mixed forward-backward stable algorithm would find $\hat{x} \approx \tilde{x}$, where $\tilde{x}$ is the exact solution to $\min_x \|\tilde{A}x - \tilde{b}\|_2$ for some $\tilde{A} \approx A$ and $\tilde{b} \approx b$. Now the approximate equality is allowed to depend polynomially on the size of the matrix but, unlike forward stability, it is independent of $\kappa(A)$.

  3. A backward stable algorithm (also called strongly stable in [11]) provides the exact answer to a nearby problem. In the example above, a backward stable algorithm would find $\hat{x}$ such that $\hat{x}$ is the exact solution to $\min_x \|\tilde{A}x - \tilde{b}\|_2$ for some $\tilde{A} \approx A$ and $\tilde{b} \approx b$. Here, the approximate equality is independent of $\kappa(A)$.

These definitions supported the analysis of numerical algorithms for decades. Yet, when they were introduced, mathematical software was typically implemented in one fixed precision floating-point arithmetic, such as either double or single precision. This paradigm is now changing [1, 4, 6, 13, 17, 18, 22, 23], as we currently witness an expanding use of both high-precision (for enhanced accuracy) and low-precision (for enhanced speed) arithmetic. Some recent algorithms even combine multiple precisions: the idea is to work with a lower precision for steps where rounding errors are not a concern and with a higher precision for steps where more care is necessary; see [1] for a survey of numerical algorithms in this setting. Others are implemented in variable precision: this flexibility is key to be able to adapt both to the accuracy required by the user and to the condition number of the problem instance.
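As a small illustration of the variable-precision setting (using the mpmath library; the function and the precisions chosen here are arbitrary), the snippet below evaluates exp(x) - 1 for a tiny x, an expression that suffers catastrophic cancellation, at several working precisions; increasing the precision recovers the lost digits.

from mpmath import mp, mpf, exp, expm1

def naive_expm1(x_str, digits):
    """Evaluate exp(x) - 1 naively at the given working precision (decimal digits)."""
    mp.dps = digits
    x = mpf(x_str)
    return exp(x) - 1                      # catastrophic cancellation for tiny x

mp.dps = 80
reference = expm1(mpf("1e-12"))            # accurate reference value

for digits in (16, 32, 64):
    approx = naive_expm1("1e-12", digits)
    mp.dps = 80
    rel_err = abs(approx - reference) / reference
    print(f"{digits:3d} digits: relative error {float(rel_err):.1e}")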

The aforementioned evolution in algorithm design calls for an update of the concept of stability. Moreover, the flexibility in existing definitions of stability prevents us from rigorously proving results that are useful in practice. The main goal of this paper is to fill these gaps. That is, we will

  1. formalize the numerical stability of a numerical algorithm that approximates a mathematical function (treated in Sections 2, 3, and 4);

  2. exploit this formalization to prove a composition theorem that gives a sufficient condition under which the composition of two stable algorithms yields another stable numerical algorithm (treated in Section 5); and

  3. illustrate how these definitions and techniques can be exploited to prove stability of elementary functions, prove instability of fast matrix multiplication, and indicate the limits of the composition theorem (treated in Sections 6, 7, and 8).

Our approach builds on the classic notions of condition number and stability. The strongest notions of stability are too rigid for our goals. For example, as we discuss later, the composition of two backward stable algorithms is not always backward stable. We therefore adopt the weakest of the classic notions of stability, that is, forward stability. Although forward stability is sometimes overlooked as too weak, we argue that there are several reasons to give it more credit:

  1. It allows us to produce formal results concerning the stability of compositions of algorithms. We view this as a cornerstone of a sufficiently powerful mathematical theory of stability.

  2. The final users of numerical algorithms are often not numerical analysts, but rather scientists and engineers. Being fully based on forward rather than backward errors, forward stability is arguably easier to interpret for such non-experts. Forward stability analysis additionally exposes the hardness of the problem as measured by the condition number, while a backward error analysis may inadvertently give a non-expert the impression that the numerically computed outputs can be trusted regardless of the problem’s condition.

  3. As mentioned above, increasingly often algorithms are being implemented in either a multiple or variable precision setting [4, 6, 13, 17, 18, 22, 23]. Many of the variable precision algorithms precisely follow the philosophy of guaranteeing a user-specified upper bound on the forward error. A forward stable algorithm is useful for this goal even if it is not backward stable.

  4. When the goal is to prove instability, forward stability is actually the strongest concept: by contraposition, showing that an algorithm is not even forward stable also rules out mixed and backward stability.

The outline and main results of this paper are as follows. Section 2 recalls Rice’s condition number (Definition 1) and investigates it in Pryce’s coordinatewise relative error metric (Definition 2). The main result of this section is the characterization of the corresponding condition number in Proposition 1. The key insight of this paper is that properties involving this condition number can establish the stability of compositions of stable algorithms. For such a statement to be precise, however, we need to carefully consider what we mean by “an algorithm for a problem”. Therefore, Section 3 clarifies the model of numerical computations that we adopted. It is based on Cucker, Malajovich, and Shub’s approaches to approximate computations in the Blum–Shub–Smale (BSS) model, which we recall in Appendix B. Thereafter, in Section 4, the classic definitions of backward, mixed, and forward stability are formalized in this model in Definitions 4, 5, and 6. The main innovations of this paper are presented in Section 5. First, we introduce the concept of amenable problems in Definition 7. This captures a wide class of problems that are well-behaved in their domain of definition and in the growth of their condition number. Next, a compatibility condition is introduced in Definition 8, which ensures that a computational problem is decomposed into subproblems in such a way that it can lead to numerical stability. The main result, Theorem 1, on the forward stability of a composition of forward stable algorithms is proven under the assumption of amenable and compatible problems. Note that we use problem-specific properties (condition) to establish algorithm-specific properties (stability). The helpful consequence is that every composition of forward stable algorithms is forward stable if the amenability and compatibility of the corresponding problems holds. Finally, we investigate the string of implications from backward to mixed to forward stability, under the hypothesis of amenability, in Theorems 2 and 3. All of the main results are employed in Section 6 in demonstrating several elementary amenable problems and forward stable algorithms, in the coordinatewise relative error metric, to solve them. Conversely, Section 7 demonstrates that Strassen’s algorithm for matrix multiplication is not forward stable in the coordinatewise relative error metric, even though it satisfies Brent’s well-known forward error bound. The surprising example of the sine function, a non-amenable function on $\mathbb{R}$ that cannot be realized as a composition of well-behaved (i.e., compatible and amenable) functions, is featured in Section 8. The conclusions and outlook of our study are presented in Section 9.

2 The condition number in coordinatewise relative error

In this paper, we restrict ourselves to computational problems that can be modeled by a function $f$ from a subset of one real vector space to another. We start by recalling Rice’s definition of condition number [30] in this context, assuming that we have fixed (possibly different) distance functions on the domain and codomain of $f$.

Recall that given a subset $S \subseteq \mathbb{R}^n$, a map $\mathrm{dist}_S : S \times S \to [0, \infty]$ is a distance function if (i) it is symmetric, (ii) it satisfies the triangle inequality $\mathrm{dist}_S(x, z) \le \mathrm{dist}_S(x, y) + \mathrm{dist}_S(y, z)$ for all $x, y, z \in S$, and (iii) $\mathrm{dist}_S(x, y) = 0$ if and only if $x = y$. If there is no ambiguity on the nature of the set $S$, we omit the subscript and simply write $\mathrm{dist}$.

Definition 1 (Condition number)

Given a function $f : \mathcal{D} \to \mathbb{R}^n$, where $\mathcal{D} \subseteq \mathbb{R}^m$, and two distance functions $\mathrm{dist}_{\mathcal{D}}$ and $\mathrm{dist}_{\mathbb{R}^n}$ defined on $\mathcal{D}$ and $\mathbb{R}^n$ respectively, the condition number of $f$ at $x \in \mathcal{D}$ is

$$ \kappa[f](x) \,=\, \limsup_{\mathrm{dist}_{\mathcal{D}}(x, y) \to 0} \frac{\mathrm{dist}_{\mathbb{R}^n}\bigl(f(x), f(y)\bigr)}{\mathrm{dist}_{\mathcal{D}}(x, y)}. $$

If $x$ is an isolated point for the topology induced by the metric on $\mathcal{D}$, then we set by convention $\kappa[f](x) = 0$. We also define

$$ \kappa[f] \,=\, \sup_{x \in \mathcal{D}} \kappa[f](x). $$

The $\limsup$ in Definition 1 is always defined since we accept the value $+\infty$.

The condition number is a geometric invariant that depends on the function $f$ but not on the algorithm we use to compute it. There is also an explicit dependence of the condition number on the choice of the distance functions $\mathrm{dist}_{\mathcal{D}}$ and $\mathrm{dist}_{\mathbb{R}^n}$. In addition, condition numbers satisfy the following sensible composition inequality.

Lemma 1

If $f \circ g$ is a composition of functions, then

$$ \kappa[f \circ g](x) \;\le\; \kappa[f]\bigl(g(x)\bigr)\;\kappa[g](x). \tag{1} $$

Proof

See Section A.1.
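As a minimal illustration of (1), consider, for instance, $g(x) = x + 1$ and $f(y) = y^2$; using the univariate relative condition number of equation (4) below,

$$ \kappa[g](x) = \frac{|x \cdot 1|}{|x + 1|}, \qquad \kappa[f](y) = \frac{|y \cdot 2y|}{|y^2|} = 2, \qquad \kappa[f \circ g](x) = \frac{\bigl| x \cdot 2(x+1) \bigr|}{|x+1|^2} = \frac{2\,|x|}{|x + 1|}, $$

so (1) holds with equality in this particular case, and both sides blow up as $x \to -1$, where computing $(x+1)^2$ is ill conditioned in relative error.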

The choice of the distances in Definition 1 depends on the context. When the real arithmetic operations of a numerical algorithm are replaced by floating-point operations, the standard model of floating-point arithmetic [21] guarantees a small relative forward error in each individual operation. Therefore, we focus on a metric that measures relative errors.

Let us first clarify what we mean by relative error. Usually, in numerical analysis, the relative error of an approximation $\tilde{x}$ to an exact value $x \ne 0$ is defined as $|\tilde{x} - x| / |x|$. Unfortunately, this does not define a distance on $\mathbb{R}$ because symmetry and the triangle inequality can fail. Instead, we adopt the coordinatewise relative error metric that was already introduced by Pryce [29] in 1984, building on Olver’s Rp error analysis [27, 28]. Herein, the distance in $\mathbb{R}^n$ between $x$ and $y$ is defined as the length of the shortest curve joining $x$ and $y$, where the length is ultimately determined by the choice of a Riemannian metric that captures the concept of relative errors. This coordinatewise relative error metric and the surprising topology it induces on $\mathbb{R}^n$ are stated next.

Definition 2 (Coordinatewise relative error metric [29])

The topology associated with the coordinatewise relative error in $\mathbb{R}^n$ is defined by the $3^n$ connected components $C_s$ of $\mathbb{R}^n$, corresponding to the sign pattern $s$ (an element of $\{-1, 0, 1\}^n$) of each vector. The connected component $C_{(0, \ldots, 0)} = \{0\}$ consists of just one point. On each of the other components $C_s$, we define a Riemannian metric as follows. Let $v, w$ be a pair of tangent vectors lying in the subspace spanned by the standard basis vectors $\{e_i : i \in I_s\}$ and based at the point $x \in C_s$. We associate with them the inner product

$$ \langle v, w \rangle_x \,=\, \sum_{i \in I_s} \frac{v_i\, w_i}{x_i^2}, $$

where $I_s = \{\, i : s_i \ne 0 \,\}$ contains the nonzero indices of $s$, which defines the tangent space to $C_s$ at $x$. The induced norm of a tangent vector $v$ is $\|v\|_x = \sqrt{\langle v, v \rangle_x}$. The length of a curve $\gamma : [a, b] \to C_s$ is then defined by

$$ L(\gamma) \,=\, \int_a^b \sqrt{\sum_{i \in I_s} \left( \frac{\gamma_i'(t)}{\gamma_i(t)} \right)^{\!2}}\; \mathrm{d}t, $$

where $\gamma'(t)$ is the derivative of $\gamma$ at $t$, and $\gamma_i$ is the $i$th component function of $\gamma$. The distance between any pair of points $x, y \in \mathbb{R}^n$ is $0$ if $x = y$, $\infty$ if $x \ne y$ and $x$, $y$ lie on different connected components, and

$$ \mathrm{dist}(x, y) \,=\, \inf_{\gamma} L(\gamma) \tag{2} $$

otherwise. The infimum is taken over all curves $\gamma$ contained in the connected component with extremes $x$ and $y$. This distance function satisfies the axioms of distance, including the triangle inequality, for all $x, y \in \mathbb{R}^n$.

From now on, whenever we refer to $\mathbb{R}^n$ we assume that it is endowed with the topology and the metric structure of Definition 2.

The infimum of (2) is actually a minimum; that is, there exists a curve with extremes $x$ and $y$ such that its length is equal to $\mathrm{dist}(x, y)$. This curve is called a minimizing geodesic between these two points. The existence follows from the Hopf–Rinow Theorem, see for example [19, Theorem 1.10]. In $\mathbb{R}$, the relative error distance between two points $x$ and $y$ with the same sign is either $0$ (when $x = y = 0$) or

$$ \mathrm{dist}(x, y) \,=\, \left| \log \frac{y}{x} \right| \,=\, \bigl|\, \log|y| - \log|x| \,\bigr|. $$

If $x$ and $y$ do not have the same sign, then $\mathrm{dist}(x, y) = \infty$. The coordinatewise relative error metric in $(\mathbb{R}\setminus\{0\})^n$ is the product metric of $\mathbb{R}\setminus\{0\}$ when the latter is endowed with the relative error metric. Hence, the squared distance between two points $x$ and $y$ in the same connected component is $\mathrm{dist}(x, y)^2 = \sum_{i=1}^{n} \bigl( \log \frac{y_i}{x_i} \bigr)^2$. In particular, we get after some manipulations that if $\mathrm{dist}(x, y) \le \epsilon$ for some $\epsilon \le 1$, then $|y_i - x_i| \le 2\epsilon\, |x_i|$ for all $i$. The following fact will also be helpful.
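A direct implementation of this distance makes the sign-pattern and product-metric structure explicit. The helper below is our own small illustration and simply transcribes the formulas above.

import math

def drel(x, y):
    """Coordinatewise relative error distance between two real vectors.

    Returns inf if x and y have different sign patterns (different connected
    components) and sqrt(sum_i log(y_i/x_i)^2) over nonzero coordinates otherwise.
    """
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    total = 0.0
    for xi, yi in zip(x, y):
        sx = (xi > 0) - (xi < 0)
        sy = (yi > 0) - (yi < 0)
        if sx != sy:
            return math.inf              # different connected components
        if sx == 0:
            continue                     # both coordinates are zero: no contribution
        total += math.log(yi / xi) ** 2
    return math.sqrt(total)

# A componentwise relative perturbation of size about 1e-3:
print(drel([1.0, -2.0, 3.0], [1.001, -2.002, 2.997]))   # roughly sqrt(3) * 1e-3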

Lemma 2

Let $f : \mathcal{D} \to \mathbb{R}^n$ be as in Definition 1. Let $x, y \in \mathcal{D}$. If there is a minimizing geodesic joining $x$ and $y$, contained in $\mathcal{D}$, and such that $\kappa[f](z) \le K$ for all $z$ on the geodesic, then

$$ \mathrm{dist}\bigl(f(x), f(y)\bigr) \,\le\, K\, \mathrm{dist}(x, y). $$

Proof

See Section A.2.

With the metric structure clarified, we further characterize the condition number from Definition 1. Recall that for a matrix $A \in \mathbb{R}^{m \times n}$ with singular value decomposition $A = U \Sigma V^T$, the Moore–Penrose pseudoinverse of $A$ can be defined as $A^{\dagger} = V \Sigma^{\dagger} U^T$, where $\Sigma^{\dagger}$ is obtained by transposing $\Sigma$ and changing the non-zero elements $\sigma_i$ of $\Sigma$ to $\sigma_i^{-1}$. The pseudoinverse of the zero matrix $0_{m \times n}$ is $0_{n \times m}$.

Proposition 1 (Condition number in coordinatewise relative error)

Let $f : \mathcal{D} \to \mathbb{R}^n$ be a differentiable map with $\mathcal{D} \subseteq \mathbb{R}^m$ open, and let the connected components $C_s$ be as in Definition 2. Let $\kappa[f]$ be the condition number from Definition 1 with respect to the coordinatewise relative error metric on both the domain and codomain. Then, the following holds:

  1. if there exists a neighborhood of $x$, open in the topology of Definition 2, whose image under $f$ is contained in one connected component of $\mathbb{R}^n$, then

    $$ \kappa[f](x) \,=\, \bigl\| \operatorname{diag}(f(x))^{\dagger}\; Df(x)\; \operatorname{diag}(x) \bigr\|_2, \tag{3} $$

    where $\|\cdot\|_2$ is the spectral $2$-norm, $Df(x)$ is the derivative of $f$ at $x$, and $\operatorname{diag}(v)$ is the diagonal matrix whose entries are the elements of a vector $v$. In particular, if $f$ takes an open neighborhood of $x$ to a constant, then $\kappa[f](x) = 0$.

  2. in all other cases $\kappa[f](x) = \infty$.

Proof

See Section A.3.
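To make (3) concrete, the sketch below (our illustration; the map $f$ and the point $x$ are arbitrary, and all components of $f(x)$ are nonzero so the pseudoinverse is an inverse) evaluates the matrix in (3) and cross-checks it against a finite-difference Jacobian of the same map written in log coordinates, where it coincides with $\operatorname{diag}(f(x))^{-1}\, Df(x)\, \operatorname{diag}(x)$.

import numpy as np

def f(x):
    return np.array([x[0] * x[1], x[0] + x[1]])

def Df(x):
    return np.array([[x[1], x[0]], [1.0, 1.0]])

x = np.array([1.0, 2.0])

# Condition number in coordinatewise relative error, formula (3):
M = np.diag(1.0 / f(x)) @ Df(x) @ np.diag(x)
kappa_formula = np.linalg.norm(M, 2)

# Cross-check: the Jacobian of t -> log f(exp(t)) at t = log x equals M.
def log_f_exp(t):
    return np.log(f(np.exp(t)))

t0, h = np.log(x), 1e-6
J = np.column_stack([(log_f_exp(t0 + h * e) - log_f_exp(t0 - h * e)) / (2 * h)
                     for e in np.eye(2)])
print(kappa_formula, np.linalg.norm(J, 2))   # the two values should agree closely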

Remark

Item (ii) in Proposition 1 corresponds to $f_j(x) = 0$ for some $j$, and yet $f_j(y) \ne 0$ for some $y$ arbitrarily close to $x$. In all other cases, recalling also that $0$ is an isolated point in our chosen topology, item (i) applies, and then the presence of the Moore–Penrose pseudoinverse in (3) implies that only the non-zero components of $x$ and $f(x)$ contribute to $\kappa[f](x)$.

Two important special cases of equation (3) arise for $m = 1$ and $n = 1$, simplifying to, respectively,

$$ \kappa[f](x) \,=\, |x|\, \bigl\| \operatorname{diag}(f(x))^{\dagger}\, f'(x) \bigr\|_2 \qquad\text{and}\qquad \kappa[f](x) \,=\, \frac{\bigl\| \nabla f(x)\, \operatorname{diag}(x) \bigr\|_2}{|f(x)|}. $$

For univariate scalar-valued functions, i.e., $m = n = 1$, both reduce to the familiar expression of the relative condition number

$$ \kappa[f](x) \,=\, \frac{|x\, f'(x)|}{|f(x)|}. \tag{4} $$
Remark

It follows from Proposition 1 that, if $f_j(x) = 0$ for some $j$, then either $\kappa[f](x) = \infty$ or $f_j$ is identically $0$ in a neighborhood of $x$. In particular, this clarifies the special values of the renowned formula (4):

  • If $x = 0$, then $\kappa[f](x) = 0$, regardless of the values of $f(x)$ and $f'(x)$.

  • If $f(x) = 0$ and $f$ is identically $0$ in an open neighborhood of $x$, then $\kappa[f](x) = 0$.

  • If $f(x) = 0$, but $f$ is not identically $0$ in an open neighborhood of $x$, then $\kappa[f](x) = \infty$, even if $f'(x) = 0$.
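For instance, formula (4) can be evaluated directly. The snippet below (an illustrative check; it anticipates the discussion of the sine function in Section 8) shows how the relative condition number of sin blows up near $x = \pi$, even though nothing dramatic happens to the derivative there.

import math

def rel_cond(f, df, x):
    """Relative condition number |x f'(x)| / |f(x)| of equation (4)."""
    return abs(x * df(x)) / abs(f(x))

for x in (0.5, 1.5, 3.0, 3.1415, 3.14159265):
    print(f"x = {x:<12} kappa[sin](x) = {rel_cond(math.sin, math.cos, x):.3e}")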

3 A formal model of numerical algorithms

Informally, an algorithm is a sequence of instructions that can be programmed in any programming language, like C, Julia, Matlab, or Python. The formal modern–classic definition of an algorithm that we adopt in this paper is the BSS model, proposed by Blum, Shub, and Smale [9] and developed by the same authors and Cucker in [8]; it is recalled in Appendix B.

3.1 Scalable functions

The BSS model takes into account that many algorithms, especially those in numerical linear algebra, are scalable: they can be applied to problems of arbitrary dimension. In practice, mathematical algorithms like computing matrix factorizations, e.g., QR, LU, and SVD, of $m \times n$ matrices are usually implemented for all $m$ and $n$, rather than having a separate implementation for each choice of the dimensions.

We thus reconsider the concept of a function by letting it have input and output of varying dimensions: we call this a scalable function. By $\sqcup$ we denote the disjoint union of sets.

Definition 3 (A scalable function)

A scalable function is of the form

$$ f \,:\, \bigsqcup_{i} \mathcal{D}_i \,\to\, \bigsqcup_{i} \mathbb{R}^{n_i}, \qquad \mathcal{D}_i \subseteq \mathbb{R}^{m_i}, $$

where $(m_i)_i$ and $(n_i)_i$ are sequences, either both infinite or both finite and with the same length.

When applied to an element of $\mathcal{D}_i$, the function can be denoted by $f_i$, but for brevity we simply write $f$. From now on we denote $d_i = \max\{m_i, n_i\}$, that is, the maximum of the dimensions of the input and the output for every $i$, since in practice an element of $\mathcal{D}_i$ is represented by a vector in $\mathbb{R}^{m_i}$.

3.2 Standard floating-point arithmetic

Next, we recall the formal properties usually assumed in floating-point arithmetic [21]. A floating-point number system with base $\beta \ge 2$ is a subset $\mathbb{F}$ of the real numbers of the form

$$ y \,=\, \pm\, m \cdot \beta^{e - t}, $$

where $m \in \{0, 1, \ldots, \beta^{t} - 1\}$ is the mantissa, $t \ge 1$ is the precision, and the exponent $e \in \mathbb{Z}$, $e_{\min} \le e \le e_{\max}$, is bounded. Moreover, the mantissa $m$ is either $0$ or satisfies $m \ge \beta^{t - 1}$, ensuring a unique representation. In other words, $y \in \mathbb{F}$ if $y = 0$ or $y$ is of the form $\pm\, \beta^{e} \bigl( d_1/\beta + d_2/\beta^2 + \cdots + d_t/\beta^t \bigr)$ for some digits $d_i$ in the range $0 \le d_i \le \beta - 1$ and $d_1 \ne 0$. The unit roundoff is $u = \tfrac{1}{2} \beta^{1 - t}$.

To achieve an axiomatization that is tame enough to prove theorems, the first mathematical concession that we must make is to relax the boundedness of the exponent $e$. In other words, for the theory below we assume that $e_{\min} = -\infty$ and $e_{\max} = +\infty$. With this assumption the floating-point system is defined by either the unit roundoff $u$ or the precision $t$. Therefore we will denote it by $\mathbb{F}_u$.

We denote by $\mathrm{fl} : \mathbb{R} \to \mathbb{F}_u$ the roundoff map that takes $x \in \mathbb{R}$ to the number in $\mathbb{F}_u$ that is closest to $x$ in absolute value. Ties can be broken by the usual schemes, such as rounding to odd, to even, or away from $0$. Our results do not depend on this choice. From now on we assume that such roundoff maps exist for all $u$.

With these definitions and assumptions, the floating-point number system satisfies the well-known axioms:

  • For all $x \in \mathbb{F}_u$, $\mathrm{fl}(x) = x$.

  • For all $x \in \mathbb{R}$, $\mathrm{fl}(x) = x(1 + \delta)$ for some real $|\delta| \le u$.

  • For all $x \in \mathbb{F}_u$, $-x \in \mathbb{F}_u$.

  • For all $x \in \mathbb{F}_u$ and all $k \in \mathbb{Z}$, $\beta^k x \in \mathbb{F}_u$.

  • If $x, y \in \mathbb{R}$ and $x \le y$, then $\mathrm{fl}(x) \le \mathrm{fl}(y)$.

  • If $x \in \mathbb{R}$ and $k \in \mathbb{Z}$, then $\mathrm{fl}(\beta^k x) = \beta^k\, \mathrm{fl}(x)$.

  • For every operation $\mathrm{op} \in \{+, -, \times, /\}$, there is a corresponding floating-point operation $\widehat{\mathrm{op}}$ such that for all $x, y \in \mathbb{F}_u$ we have that

    $$ x \mathbin{\widehat{\mathrm{op}}} y \,=\, (x \mathbin{\mathrm{op}} y)(1 + \delta) \qquad \text{for some } |\delta| \le u, $$

    with the unique exception of division by $0$, which produces either an error or $\pm\infty$.
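A quick numerical sanity check of this standard model, taking IEEE double precision ($\beta = 2$, $t = 53$, $u = 2^{-53}$) as a concrete instance of $\mathbb{F}_u$:

from fractions import Fraction

u = 2.0 ** -53                        # unit roundoff of IEEE binary64
x, y = 0.1, 0.3                       # already elements of F_u after parsing

fl_sum = x + y                        # the floating-point operation for +
exact = Fraction(x) + Fraction(y)     # the exact sum of the two stored values
delta = (Fraction(fl_sum) - exact) / exact
print(f"unit roundoff u   = {u:.3e}")
print(f"|delta| for x + y = {abs(float(delta)):.3e}   (standard model: |delta| <= u)")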

3.3 Numerical algorithms

Following [14, 25], we define a numerical algorithm as an algorithm in the BSS model where approximate computations of the algorithm are obtained by executing it in the foregoing model of floating-point arithmetic. We recall the formal definitions in Appendix B.

4 A formal model of numerical stability

In [14] and [25] the formal definition of numerical algorithm served to study complexity classes, but these works do not define numerical stability in this formal setting. In this section we fill this gap.

We denote a specific BSS machine that attempts to implement a function $f$ by $\widehat{f}$. The output (if any) of the corresponding numerical algorithm on input $x$, executed in floating-point arithmetic with unit roundoff $u$, is denoted by $\widehat{f}_u(x)$. This emphasizes that the algorithm tries to approximate the function $f$ and that the computations are carried out in $\mathbb{F}_u$.

The concept of numerical stability was introduced to classify which of the numerical algorithms $\widehat{f}$ can be claimed to implement a given function $f$, and to specify which ones are more accurate than others. The existing literature [21] gives flexible and informal definitions that involve “small errors” whose magnitude is deemed to depend on the context. In our context of numerical algorithms with arbitrary precision and scalable functions, we believe that the magnitude of errors should be viewed relative to the size of the input and output ambient spaces $\mathbb{R}^{m_i}$ and $\mathbb{R}^{n_i}$, respectively. Consequently, having defined $d_i = \max\{m_i, n_i\}$, the definitions of numerical stability below allow errors to grow with $d_i$, but at most polynomially.

Backward and mixed stability [21, Section 1.5] are defined independently of the condition number of the computational problem.

Remark

In the rest of the paper we will use “constants” depending polynomially on $d$. We tacitly assume without loss of generality that all these polynomials in $d$ take values greater than $1$ and are monotonically non-decreasing. We did not optimize our proofs to yield the smallest possible polynomials, rather preferring clarity of exposition.

Definition 4 (Backward stability)

Let $f$ be a scalable function and $\widehat{f}$ a BSS machine. Then, $\widehat{f}$ is a backward stable algorithm for $f$ if there exist polynomials $b_1$ and $b_2$ such that for all $x \in \mathcal{D}$ and all unit roundoffs $u \le \frac{1}{b_1(d)}$, there exists $\tilde{x} \in \mathcal{D}$ with

$$ \widehat{f}_u(x) \,=\, f(\tilde{x}) \qquad\text{and}\qquad \mathrm{dist}(x, \tilde{x}) \,\le\, b_2(d)\, u. $$

In particular, an output must be produced for all such choices of $x$ and $u$.

Definition 5 (Mixed stability)

Let $f$ be a scalable function and $\widehat{f}$ a BSS machine. Then, $\widehat{f}$ is a mixed stable algorithm for $f$ if there exist polynomials $c_1$, $c_2$, and $c_3$ such that for all $x \in \mathcal{D}$ and all unit roundoffs $u \le \frac{1}{c_1(d)}$, there exists $\tilde{x} \in \mathcal{D}$ with

$$ \mathrm{dist}\bigl(\widehat{f}_u(x),\, f(\tilde{x})\bigr) \,\le\, c_2(d)\, u \qquad\text{and}\qquad \mathrm{dist}(x, \tilde{x}) \,\le\, c_3(d)\, u. $$

In particular, an output must be produced for all such choices of $x$ and $u$.

The final classic notion of stability is forward stability, which just says that an algorithm is stable if its output is close to the exact value $f(x)$, with the caveat that this distance may depend linearly on the condition number of $f$ at $x$.

Definition 6 (Forward stability)

Let $f$ be a scalable function and $\widehat{f}$ a BSS machine. Then, $\widehat{f}$ is a forward stable algorithm implementing $f$ if there exists a stability polynomial $p$ such that either it holds for all $x \in \mathcal{D}$ and all unit roundoffs $u \le \frac{1}{p(d)\, \max\{1,\, \kappa[f](x)\}}$ that

$$ \mathrm{dist}\bigl(\widehat{f}_u(x),\, f(x)\bigr) \,\le\, p(d)\, \max\{1,\, \kappa[f](x)\}\, u, $$

or, equivalently, it holds for all $\varepsilon \in (0, 1]$, all $x \in \mathcal{D}$, and all unit roundoffs $u \le \frac{\varepsilon}{p(d)\, \max\{1,\, \kappa[f](x)\}}$ that

$$ \mathrm{dist}\bigl(\widehat{f}_u(x),\, f(x)\bigr) \,\le\, \varepsilon. $$

In particular, an output must be produced for all such choices of $x$ and $u$. If $\kappa[f](x) = \infty$, then the foregoing implications are vacuous and a forward stable numerical algorithm may either output any value, or not halt and output no value at all, for such $x$.

It is not hard to verify that the two conditions in Definition 6 are equivalent. The reason for giving two alternatives is twofold. First, each of them will be useful in our analyses. Second, they carry a different philosophy: the first criterion shows that, given any sufficiently small unit roundoff, a forward stable algorithm guarantees a certain accuracy, while the other shows that for every desired accuracy one can find a unit roundoff such that a forward stable algorithm achieves the required accuracy.

The reason to use $\max\{1, \kappa[f](x)\}$ instead of $\kappa[f](x)$ in Definition 6 is that for some problems $\kappa[f](x)$ may be equal to $0$ or a very small number, and thus asking the error to depend linearly on $\kappa[f](x)$ is just too much to be a realistic demand. We believe that most of the algorithms usually regarded as stable in numerical analysis satisfy Definition 6.
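The second criterion of Definition 6 also suggests a simple variable-precision strategy: estimate the condition number, then pick the unit roundoff (equivalently, the working precision) needed for a requested accuracy. The sketch below does this with mpmath for sin; the error model err <= p(d) * max(1, kappa) * u with the placeholder value p(d) = 10 is an assumption made only for this illustration.

import math
from mpmath import mp, mpf, sin, cos

def forward_driver(x_str, target_accuracy, p_of_d=10):
    """Pick a working precision from the condition number, then evaluate sin(x)."""
    mp.dps = 30
    x = mpf(x_str)
    kappa = abs(x * cos(x) / sin(x))                  # relative condition number, eq. (4)
    u_needed = target_accuracy / (p_of_d * max(1.0, float(kappa)))
    mp.dps = math.ceil(-math.log10(u_needed)) + 5     # digits needed for that roundoff
    return sin(mpf(x_str)), float(kappa), mp.dps

value, kappa, digits = forward_driver("3.14159265358979", 1e-20)
print(f"kappa ~ {kappa:.1e}; chosen working precision: {digits} digits")
print(value)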

5 Stability of numerical algorithms for amenable problems

This section introduces the five fundamental innovations of this paper. First, we present the class of amenable computational problems. These are scalable functions that simultaneously admit a non-uniform bounded growth of the condition number and are well-behaved near the boundaries of their domains. Second, we propose a compatibility condition for amenable functions $f$ and $g$, under which we can prove amenability of the composition $f \circ g$. Third, we prove the main theorem: composing two forward stable numerical algorithms $\widehat{f}$ and $\widehat{g}$ that respectively implement compatible amenable functions $f$ and $g$ results in a forward stable algorithm implementing the amenable function $f \circ g$. Fourth, under amenability a helpful chain of implications arises wherein backward implies mixed implies forward stability. Fifth, for a differentiable problem $f = (f_1, \ldots, f_n)$, proving forward stability of an algorithm for $f$ in the relative error metric reduces to establishing the stability of algorithms for computing the component functions $f_j$.

5.1 Amenable problems

The main goal of this article is to facilitate the composition of stable numerical algorithms, resulting in a new stable algorithm. Consider two scalable functions $f$ and $g$ that can be composed into $f \circ g$. We identified three obstacles that seem to prevent unbridled composition of forward stable numerical algorithms $\widehat{f}$ and $\widehat{g}$ implementing respectively $f$ and $g$:

  1. $f(\widehat{g}_u(x))$ is no longer well-defined for some or even all $x$;

  2. the condition number of either $f$ or $g$ grows uncontrollably;

  3. the maximum of the condition numbers $\kappa[f](g(x))$ and $\kappa[g](x)$ is significantly larger than $\kappa[f \circ g](x)$.

The issue with the first item is clear. The second obstacle may prevent the condition number from being useful, since we want the condition number (a first-order variation estimator) to provide reasonable bounds for moderately small values of the unit roundoff $u$. The third obstacle is behind the instability of the naive method to solve an overdetermined least-squares problem discussed in the introduction, and was even exploited in [26] to prove forward instability of certain resultant-based methods to solve systems of polynomial equations, and in [3] to prove forward instability of pencil-based methods for computing tensor rank decompositions.

Observe that the second and third obstacles are formulated independently of the algorithms $\widehat{f}$ and $\widehat{g}$. Remarkably, the first obstacle can also be avoided by placing suitable restrictions only on the functions $f$ and $g$. The central idea of our notion of amenability is to prevent the occurrence of the first two obstacles.

Definition 7 (Amenable function)

A scalable function $f$ is amenable if there exists an amenability polynomial $a$ such that:

  1. For all $x \in \mathcal{D}$, the ball

    $$ B_f(x) \,=\, \Bigl\{\, y \;:\; \mathrm{dist}(x, y) \,\le\, \tfrac{1}{a(d)\, \max\{1,\, \kappa[f](x)\}} \,\Bigr\} \tag{A.1} $$

    is contained in $\mathcal{D}$.

  2. For all $y \in B_f(x)$ we have $\kappa[f](y) \,\le\, a(d)\, \max\{1,\, \kappa[f](x)\}$. (A.2)

Remark

Sometimes we want to deal with functions that are not scalable. In these cases we still talk about amenability, but the polynomial $a$ becomes just a constant. The same applies if a function is defined on $\bigsqcup_{i \in I} \mathcal{D}_i$ where $i$ runs over a finite set $I$.

For all points $x$ where $\kappa[f](x) = \infty$, the foregoing conditions are automatically satisfied. Thus, it suffices to verify amenability at all points where $\kappa[f](x)$ is finite.

The following lemma will be helpful to check if a given function is amenable.

Lemma 3

Given a scalable function $f$, assume that there is a polynomial $a$ such that for all $i$ the following properties hold:

  1. Let $(x_k)_{k \in \mathbb{N}} \subset \mathcal{D}_i$ be a sequence such that

    $$ \lim_{k \to \infty} x_k \,=\, x^{*} \,\in\, \partial \mathcal{D}_i \cup \Sigma_i, $$

    where $\partial \mathcal{D}_i$ is the boundary of $\mathcal{D}_i$ and $\Sigma_i = \{\, x \in \mathcal{D}_i : \kappa[f](x) = \infty \,\}$ is the ill-posed locus. Then, $\lim_{k \to \infty} \kappa[f](x_k) = \infty$.

  2. Let $\mathcal{F}_i \subseteq \mathcal{D}_i$ be the set where the condition number is finite and let $x, y \in \mathcal{F}_i$. The condition number of $f$ satisfies

    $$ \kappa[f](y) \,\le\, a(d_i)\, \max\{1,\, \kappa[f](x)\} \qquad \text{whenever} \qquad \mathrm{dist}(x, y) \,\le\, \frac{1}{a(d_i)\, \max\{1,\, \kappa[f](x)\}}. \tag{5} $$

Then, $f$ is amenable with an amenability polynomial depending only on $a$.

Proof

See Section A.4.

Remark

If $\kappa[f]$ is given by a smooth formula, then (5) is satisfied if

$$ \bigl\| \nabla \kappa[f](x) \bigr\| \,\le\, a(d)\; \kappa[f](x)^2 \tag{6} $$

for some polynomial $a$, where the gradient and the norm are taken with respect to the coordinatewise relative error metric.

It follows immediately, in the notation of Lemma 3, that each $\mathcal{F}_i$ is an open subset of $\mathcal{D}_i$ and that the restriction of $\kappa[f]$ to $\mathcal{F}_i$ is continuous for all $i$.

Note that (6) essentially bounds the gradient of the condition number by the square of the latter. Many condition numbers in linear algebra satisfy such a bound [16, 20].

As said in Section 2, once we have endowed $\mathbb{R}^n$ with the coordinatewise relative error metric, its induced topology splits $\mathbb{R}^n$ into several connected components. Fortunately, our definition of amenability takes care of that, as can be easily proved.

Lemma 4

Let $\mathbb{R}^n$ be endowed with any metric and consider its metric topology. If $f|_{U_j}$ is amenable for a finite indexed collection of open sets $(U_j)_{j \in J}$, then $f|_U$ is amenable, where $U = \bigcup_{j \in J} U_j$. Moreover, if all the $f|_{U_j}$ admit the same amenability polynomial, then so does $f|_U$, even if $j$ runs through an infinite set.

5.2 The compatibility condition

The third obstacle for stably composing numerical algorithms is decomposing a well-conditioned computational problem into a composition $f \circ g$ where either $f$ or $g$ is poorly conditioned (compared to $f \circ g$). This can lead to a forward unstable algorithm [26, 3]. Based on this observation, we propose the following compatibility condition that excludes this possibility.

Definition 8 (Compatibility condition)

Let $f$ and $g$ be scalable functions such that the composition $f \circ g$ is well defined. Graphically, for $x \in \mathcal{D}_g$:

$$ \mathbb{R}^{m} \,\supseteq\, \mathcal{D}_g \;\xrightarrow{\;\;g\;\;}\; \mathcal{D}_f \,\subseteq\, \mathbb{R}^{k} \;\xrightarrow{\;\;f\;\;}\; \mathbb{R}^{n}. $$

We say that $f$ and $g$ are compatible if there exist compatibility polynomials $q_1$ and $q_2$ such that:

  1. For all $i$, we have $k \le q_1(d)$, where $d = \max\{m, n\}$. In other words, the intermediate dimension is polynomially bounded by the maximum of the input and the output dimensions of the composition.

  2. For all $i$ and all $x \in \mathcal{D}_g$ such that $f(g(x))$ is defined, there holds

    $$ \kappa[f]\bigl(g(x)\bigr)\; \kappa[g](x) \,\le\, q_2(d)\, \max\{1,\; \kappa[f \circ g](x)\}. \tag{C} $$

    That is, (1) can be reverted up to a factor which is polynomial in $d$.

The compatibility condition guarantees that the composition of two amenable functions is again amenable, as is shown next.

Proposition 2

If $f$ and $g$ are compatible amenable functions, then $f \circ g$ is also amenable.

Proof

Let $a_f$ and $a_g$ be the amenability polynomials of respectively $f$ and $g$, and let $q_1, q_2$ be the polynomials appearing in the definition of compatibility. We will prove that $f \circ g$ is amenable with an amenability polynomial $a$ built from $a_f$, $a_g$, $q_1$, and $q_2$. Without loss of generality, we can assume that $a_f$, $a_g$, $q_1$, and $q_2$ all coincide with a single polynomial $q$.

For any given $i$, the input and output dimensions of a function $h$ are denoted by, respectively, $m(h)$ and $n(h)$. We thus have $k = n(g) = m(f) \le q(d)$ from the compatibility, which implies

$$ d_g \,=\, \max\{m(g),\, n(g)\} \,\le\, \max\{d,\, q(d)\} \qquad\text{and}\qquad d_f \,=\, \max\{m(f),\, n(f)\} \,\le\, \max\{d,\, q(d)\}, $$

having exploited that all polynomials are non-decreasing by assumption. It suffices to verify amenability for all $x$ such that $\kappa[f \circ g](x)$ is finite. It follows from compatibility that both $\kappa[g](x)$ and $\kappa[f](g(x))$ are finite.

Let $y = g(x)$, and let $x'$ be such that

$$ \mathrm{dist}(x, x') \,\le\, \frac{1}{a(d)\, \max\{1,\, \kappa[f \circ g](x)\}}. $$

Then, $x' \in \mathcal{D}_g$ by using (A.1) for $g$. Hence, $g(x')$ is in the domain of $f$ and (A.1) for $f \circ g$ holds. Moreover, $\kappa[g](x') \le q(d_g)\, \max\{1,\, \kappa[g](x)\}$ by (A.2) for $g$. Similarly, for all such $x'$ it follows from Lemma 2 that

$$ \mathrm{dist}\bigl(g(x),\, g(x')\bigr) \,\le\, q(d_g)\, \max\{1,\, \kappa[g](x)\}\; \mathrm{dist}(x, x'), $$

so that $\kappa[f](g(x')) \le q(d_f)\, \max\{1,\, \kappa[f](g(x))\}$ by (A.2) for $f$. Consequently,

$$ \kappa[f \circ g](x') \,\le\, \kappa[f]\bigl(g(x')\bigr)\; \kappa[g](x') \,\le\, q(d_f)\, q(d_g)\, q(d)\, \max\{1,\, \kappa[f \circ g](x)\} \,\le\, a(d)\, \max\{1,\, \kappa[f \circ g](x)\}. $$

Thus (A.2) holds for $f \circ g$ as well, concluding the proof.

5.3 The fundamental theorem

Now we can prove the main result of this article on the composition, or concatenation, of forward stable numerical algorithms in the setting of compatible amenable computational problems.

Theorem 1

Let $f$ and $g$ be compatible amenable functions. For all forward stable algorithms $\widehat{f}$ and $\widehat{g}$ implementing respectively $f$ and $g$, the composition $\widehat{f} \circ \widehat{g}$ is a forward stable algorithm implementing $f \circ g$.

Proof

Let $x \in \mathcal{D}_{f \circ g}$ and let $u > 0$ be sufficiently small (as quantified below). Denote $y = g(x)$, $\hat{y} = \widehat{g}_u(x)$, and $\kappa = \kappa[f \circ g](x)$. Let $a_f$ and $a_g$ be the amenability polynomials from Definition 7 for $f$ and $g$, respectively. Let $p_f$ and $p_g$ be the stability polynomials from Definition 6 of $\widehat{f}$ and $\widehat{g}$, respectively. Finally, let $q_1, q_2$ be the compatibility polynomials from Definition 8 for $f$ and $g$, which by compatibility implies $d_f, d_g \le \max\{d,\, q_1(d)\}$. We will show that the theorem holds with a stability polynomial $p$ built from $a_f$, $a_g$, $p_f$, $p_g$, $q_1$, and $q_2$.

When $\kappa = \infty$, there is nothing to check. Thus, in the remainder of the proof we can assume that $\kappa$ is finite, so that by compatibility both $\kappa[g](x)$ and $\kappa[f](y)$ are finite.

Since $f$ and $g$ are compatible and $\widehat{g}$ is forward stable, we have

$$ \mathrm{dist}\bigl(\hat{y},\, y\bigr) \,\le\, p_g(d_g)\, \max\{1,\, \kappa[g](x)\}\, u \,\le\, p_g(d_g)\, q_2(d)\, \max\{1,\, \kappa\}\, u, \tag{7} $$

where we used (C) to bound $\max\{1,\, \kappa[g](x)\}$ by $q_2(d)\, \max\{1,\, \kappa\}$. We know that the ball with radius $\frac{1}{a_f(d_f)\, \max\{1,\, \kappa[f](y)\}}$ centered at $y$ is contained in $\mathcal{D}_f$. Consequently, $f(\hat{y})$ is well defined and a minimizing geodesic from $y$ to $\hat{y}$ exists. Moreover, from (A.2) for $f$ it follows that

$$ \kappa[f](z) \,\le\, a_f(d_f)\, \max\{1,\, \kappa[f](y)\} \qquad \text{for all $z$ on this geodesic}. \tag{8} $$

From Lemma 2, we have

$$ \mathrm{dist}\bigl(f(\hat{y}),\, f(y)\bigr) \,\le\, a_f(d_f)\, \max\{1,\, \kappa[f](y)\}\; \mathrm{dist}(\hat{y},\, y). \tag{9} $$

From (8) above, $\kappa[f](\hat{y}) \le a_f(d_f)\, \max\{1,\, \kappa[f](y)\}$. This implies

$$ \max\{1,\, \kappa[f](\hat{y})\} \,\le\, a_f(d_f)\, q_2(d)\, \max\{1,\, \kappa\}. $$

From the forward stability of $\widehat{f}$, it follows that $\widehat{f}_u(\hat{y})$ is well defined and