Instances of Computational Optimal Recovery: Refined Approximability Models

04/01/2020 ∙ by Simon Foucart, et al.

Models based on approximation capabilities have recently been studied in the context of Optimal Recovery. These models, however, are not compatible with overparametrization, since model- and data-consistent functions could then be unbounded. This drawback motivates the introduction of refined approximability models featuring an added boundedness condition. Thus, two new models are proposed in this article: one where the boundedness applies to the target functions (first type) and one where the boundedness applies to the approximants (second type). For both types of model, optimal maps for the recovery of linear functionals are first described on an abstract level before their efficient constructions are addressed. By exploiting techniques from semidefinite programming, these constructions are explicitly carried out on a common example involving polynomial subspaces of C[-1,1].


1 Introduction

The objective of this article is to uncover practical methods for the optimal recovery of functions available through observational data when the underlying models based on approximability allow for overparametrization. To clarify this objective and its various challenges, we start with some background on traditional Optimal Recovery. Typically, an unknown function defined on a domain is observed through point evaluations at distinct points. More generally, an unknown object $f$, simply considered as an element of a normed space $F$, is observed through

(1) $y_i = \lambda_i(f), \qquad i = 1, \ldots, m,$

where $\lambda_1, \ldots, \lambda_m$ are linear functionals defined on $F$. We assume here that these data are perfectly accurate — we refer to the companion article [5] for the incorporation of observation error. The data are summarized as $y = \Lambda f \in \mathbb{R}^m$, where the linear map $\Lambda : F \to \mathbb{R}^m$ is called the observation operator. Based on the knowledge of $y$, the task is then to recover a quantity of interest $Q(f)$, where $Q$ is throughout this article assumed to be a linear functional. The recovery procedure can be viewed as a map $\Delta$ from $\mathbb{R}^m$ to $\mathbb{R}$, with no concern for its practicability at this point.
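For readers who prefer to see the setup in code, here is a minimal Python sketch of the observation model (1) for point evaluations; the function, the points, and the (deliberately naive) recovery map are hypothetical choices made for illustration only.

# Minimal sketch of the observation model (1) with point evaluations.
# The function f, the points, and the recovery map are hypothetical examples,
# not taken from the article.
import numpy as np

def f(x):                                   # the unknown object, here a function on [-1, 1]
    return np.cos(3 * x) + x**2

x_pts = np.array([-0.9, -0.3, 0.2, 0.7])    # distinct evaluation points
y = f(x_pts)                                # data y_i = lambda_i(f) = f(x_i)

# A recovery map is any map from R^m to R estimating a quantity of interest Q(f),
# e.g. Q(f) = f(0); here a naive, non-optimal inverse-distance weighting is used.
def recover_Q(y, x_pts, x0=0.0):
    w = 1.0 / (np.abs(x_pts - x0) + 1e-3)
    return np.dot(w / w.sum(), y)

print(recover_Q(y, x_pts), f(0.0))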

Besides the observational data (which is also called a posteriori information), there is some a priori information coming from an educated belief about the properties of realistic $f$'s. It translates into the assumption that $f$ belongs to a model set $\mathcal{K} \subseteq F$. The choice of this model set is of course critical. When the $f$'s indeed represent functions, it is traditionally taken as the unit ball with respect to some norm that characterizes smoothness. More recently, motivated by parametric partial differential equations, a model based on approximation capabilities has been proposed in [2]. Namely, given a linear subspace $V$ of $F$ and a threshold $\varepsilon > 0$, it is defined as

(2) $\mathcal{K} = \{ f \in F : \operatorname{dist}(f, V) \le \varepsilon \}.$

This model set is also implicit in many numerical procedures and in machine learning.

Whatever the selected model set, the performance of a recovery map $\Delta$ is measured in a worst-case setting via the (global) error of $\Delta$ over $\mathcal{K}$, i.e.,

(3) $\operatorname{err}_{\mathcal{K}}(\Delta) = \sup_{f \in \mathcal{K}} \, | Q(f) - \Delta(\Lambda f) |.$

Obviously, one is interested in optimal recovery maps $\Delta^{\rm opt}$ minimizing this worst-case error, i.e., such that

(4) $\operatorname{err}_{\mathcal{K}}(\Delta^{\rm opt}) = \inf_{\Delta} \, \sup_{f \in \mathcal{K}} \, | Q(f) - \Delta(\Lambda f) |.$

This infimum is called the intrinsic error of the observation map $\Lambda$ (for $Q$ over $\mathcal{K}$). It is known, at least since Smolyak's doctoral dissertation [12], that there is a linear functional among the optimal recovery maps as soon as the set $\mathcal{K}$ is symmetric and convex, see e.g. [10, Theorem 4.7] for a proof. The practicality of such a linear optimal recovery map is not automatic, though. For the approximability set (2), Theorem 3.1 of [4] revealed that such a linear optimal recovery map takes the form $\Delta^{\rm opt}(y) = \sum_{i=1}^m a_i^{\rm opt} y_i$, where $a^{\rm opt} \in \mathbb{R}^m$ is a solution to

(5) $\underset{a \in \mathbb{R}^m}{\operatorname{minimize}} \ \Big\| Q - \sum_{i=1}^m a_i \lambda_i \Big\|_{F^*} \quad \text{subject to} \quad Q(v) = \sum_{i=1}^m a_i \lambda_i(v) \ \text{ for all } v \in V,$

an optimization problem that can be solved for $a^{\rm opt}$ in exact form when the observation functionals are point evaluations (see [4]) and in approximate form when they are arbitrary linear functionals (see [5] or Subsection 3.2 below).

The approximability set (2), however, presents some important restrictions. Suppose indeed that there is some nonzero $v \in V \cap \ker \Lambda$. Then, for a given $f_0 \in \mathcal{K}$ observed through $y = \Lambda f_0$, any $f_0 + t v$, $t \in \mathbb{R}$, is both model-consistent (i.e., $f_0 + t v \in \mathcal{K}$) and data-consistent (i.e., $\Lambda(f_0 + t v) = y$), so that the local error at $y$ of any recovery map $\Delta$ satisfies

(6) $\sup \big\{ |Q(f) - \Delta(y)| : f \in \mathcal{K}, \ \Lambda f = y \big\} \ \ge \ \sup_{t \in \mathbb{R}} \, | Q(f_0 + t v) - \Delta(y) |,$

which is generically infinite (namely, as soon as $Q(v) \ne 0$). Thus, for the optimal recovery problem to make sense under the approximability model (2), one must assume that $V \cap \ker \Lambda = \{0\}$. By a dimension argument, this imposes

(7) $\dim(V) \le m.$

In other words, we must place ourselves in an underparametrized regime for which the number $\dim(V)$ of parameters describing the model does not exceed the number $m$ of data. This contrasts with many current studies, especially in the field of Deep Learning, which emphasize the advantages of overparametrization. In order to incorporate overparametrization in the optimal recovery problem under consideration, we must then restrict the magnitude of model- and data-consistent elements. An obvious strategy consists in altering the approximability set (2). We do so in two different ways, namely by considering a bounded approximability set of the first type, i.e.,

(8) $\mathcal{K} = \{ f \in F : \|f\| \le \kappa \ \text{ and } \ \operatorname{dist}(f, V) \le \varepsilon \},$

and a bounded approximability set of the second type, i.e.,

(9) $\mathcal{K} = \{ f \in F : \|f - v\| \le \varepsilon \ \text{ for some } v \in V \text{ with } \|v\| \le \kappa \}.$

We will start by analyzing the second type of bounded approximability sets in Section 2 by formally describing the optimal recovery maps before revealing on a familiar example how the associated minimization problem is tackled in practice. The main ingredient in essence belongs to the sum-of-squares techniques from semidefinite programming. Next, we will analyze the first type of bounded approximability sets in Section 3. We will even formally describe optimal recovery maps over more general model sets consisting of intersections of approximability sets. On the prior example, we will again reveal how the associated minimization problem is tackled in practice. This time, the main ingredient in essence belongs to the moment techniques from semidefinite programming. In view of this article's emphasis on computability issues, all of the theoretical constructions are illustrated in a reproducible MATLAB file downloadable from the author's webpage.

2 Bounded approximability set of the second type

We concentrate in this section on the bounded approximability set of the second type, i.e., on

(10) $\mathcal{K} = \{ f \in F : \|f - v\| \le \varepsilon \ \text{ for some } v \in V \text{ with } \|v\| \le \kappa \}.$

We shall first describe optimal recovery maps before showing how they can be computed in practice.

2.1 Description of an optimal recovery map

The result below reveals how [4, Theorem 3.1] extends from the model set (2) to the model set (10).

Theorem 1.

If $Q$ is a linear functional, then an optimal recovery map over the bounded approximability set (10) is the linear functional

(11) $\Delta^{\rm opt} : y \in \mathbb{R}^m \mapsto \sum_{i=1}^m a_i^{\rm opt} y_i,$

where the optimal weights $a^{\rm opt} \in \mathbb{R}^m$ are precomputed as a solution to

(12) $\min_{a \in \mathbb{R}^m} \bigg[ \varepsilon \, \Big\| Q - \sum_{i=1}^m a_i \lambda_i \Big\|_{F^*} + \kappa \max_{v \in V, \, \|v\| \le 1} \Big| Q(v) - \sum_{i=1}^m a_i \lambda_i(v) \Big| \bigg].$
Proof.

Since the model set (10) is symmetric and convex, there exists an optimal recovery map which is linear, i.e., of the form $y \in \mathbb{R}^m \mapsto \sum_{i=1}^m a_i y_i$. The vector $a^{\rm opt}$ minimizes in particular the worst-case error $\sup_{f \in \mathcal{K}} | Q(f) - \sum_{i=1}^m a_i \lambda_i(f) |$ among all $a \in \mathbb{R}^m$. Thus, it is sufficient to transform this worst-case error into the expression featured between square brackets in (12). This is done by writing

(13) $\sup_{f \in \mathcal{K}} \Big| Q(f) - \sum_{i=1}^m a_i \lambda_i(f) \Big| = \sup_{\substack{v \in V, \, \|v\| \le \kappa \\ e \in F, \, \|e\| \le \varepsilon}} \Big| \Big( Q - \sum_{i=1}^m a_i \lambda_i \Big)(v + e) \Big| = \sup_{v \in V, \, \|v\| \le \kappa} \Big| \Big( Q - \sum_{i=1}^m a_i \lambda_i \Big)(v) \Big| + \sup_{e \in F, \, \|e\| \le \varepsilon} \Big| \Big( Q - \sum_{i=1}^m a_i \lambda_i \Big)(e) \Big|.$

By homogeneity, the latter is readily seen to coincide with the required expression. ∎

Remark.

The approximability set (2), where the condition $\|v\| \le \kappa$ is not imposed, can be viewed as an instantiation of (10) with $\kappa = \infty$. In this instantiation, if the quantity $\max_{v \in V, \|v\| \le 1} | Q(v) - \sum_{i=1}^m a_i \lambda_i(v) |$ were nonzero, then the objective function would be infinite. Therefore, the infimum will be attained with the constraint $Q(v) = \sum_{i=1}^m a_i \lambda_i(v)$ for all $v \in V$ in effect. This argument constitutes another way of deriving the form of the optimal recovery map over the original approximability set (2). Let us note in passing that, while the optimization program (5) was independent of $\varepsilon$, adding the condition $\|v\| \le \kappa$ does create a dependence on $\varepsilon$ in the optimization program (12), unless $\kappa$ is proportional to $\varepsilon$.

Remark.

In the presence of observation error in the data $y$, modeled as in [5] by the bounded uncertainty set

(14) $\{ e \in \mathbb{R}^m : \|e\|_p \le \eta \},$

an optimal recovery map for a linear functional $Q$ over the model set (10) and the uncertainty set (14) simultaneously still consists of a linear functional $y \mapsto \sum_{i=1}^m a_i y_i$, but now the optimal weights are a solution to the optimization program

(15)

where $p^*$ is the conjugate exponent to $p$. The argument, which follows the ideas presented in [5], is left to the reader. We do point out that the program (15) is solvable in practice as soon as the program (12) itself is solvable in practice, for instance as in the forthcoming example.

2.2 Computational realization for $F = C[-1,1]$

For practical purposes, the result of Theorem 1 is close to useless if the minimization (12) cannot be performed efficiently. We show below that in the important case $F = C[-1,1]$, choosing $V$ as a space of algebraic polynomials of bounded degree leads to an optimization problem which can be solved exactly via semidefinite programming. For that, we also assume that the observation functionals are distinct point evaluations and that the quantity of interest is another point evaluation or the normalized integral. These restrictions can be lifted if we trade exact solutions for quantifiably approximate solutions, see Subsection 3.2. In the statement below, the notation $\mathcal{T}(c)$ represents the symmetric Toeplitz matrix built from a vector $c \in \mathbb{R}^n$, i.e.,

(16) $[\mathcal{T}(c)]_{j,k} = c_{|j-k|+1}, \qquad j, k \in \{1, \ldots, n\},$

and the polynomials $T_k$, $k \ge 0$, denote the Chebyshev polynomials of the first kind.
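As a small illustration of this notation (not a substitute for the article's MATLAB file), the following Python sketch builds the symmetric Toeplitz matrix of (16) from a hypothetical vector and evaluates a Chebyshev polynomial of the first kind.

# Sketch of the notation of (16): symmetric Toeplitz matrix built from a vector,
# and Chebyshev polynomials of the first kind. The vector c is a hypothetical example.
import numpy as np
from scipy.linalg import toeplitz
from numpy.polynomial import chebyshev

c = np.array([2.0, -1.0, 0.5, 0.0])         # hypothetical vector
Tc = toeplitz(c)                             # entries Tc[j, k] = c[|j - k|]
assert np.allclose(Tc, Tc.T)                 # the matrix is symmetric

x = np.linspace(-1.0, 1.0, 5)
T3 = chebyshev.chebval(x, [0, 0, 0, 1])      # T_3 evaluated on a grid in [-1, 1]
print(Tc)
print(np.round(T3, 3))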

Theorem 2.

Assuming that $F = C[-1,1]$ and that $\lambda_1, \ldots, \lambda_m$ are point evaluations at distinct points $x^{(1)}, \ldots, x^{(m)} \in [-1,1]$, an optimal recovery map over the bounded approximability set (10) for the quantity of interest $Q = \delta_{x^{(0)}}$, $x^{(0)} \in [-1,1]$, or $Q : f \mapsto \frac{1}{2} \int_{-1}^{1} f(x)\, dx$, is the linear functional

(17)

where the optimal weights are precomputed as a solution to the semidefinite program

(18) subject to
and

Here, and have entries and , , .

Proof.

The work consists in recasting the objective function of (12) into a manageable form. Under the assumptions on $\Lambda$ and on $Q$, the first term is not a problem, by virtue of

(19)

We now turn to the second term, i.e., the one involving the maximum over the unit ball of $V$. The idea, common in Robust Optimization [1], relies on duality to change the maximum into a minimum, which is then integrated into a larger minimization problem. This is possible essentially when the feasibility set admits a linear or semidefinite description, which is the case for the unit ball of $V$ here. Indeed, as already observed in [6, Subsection 5.3], following ideas formulated in [8], the unit ball of $V$ admits the semidefinite description

(20)

where, for each $k \ge 0$, the symmetric matrix $A_k$ given by

(21) $(A_k)_{j,\ell} = \begin{cases} 1 & \text{if } |j - \ell| = k, \\ 0 & \text{otherwise}, \end{cases}$

has $1$'s on the $k$th subdiagonal and superdiagonal and $0$'s elsewhere — in particular $A_0$ is the identity matrix. Thus, for a fixed $a \in \mathbb{R}^m$, the maximum over the unit ball of $V$ reads

(22)

Invoking duality in semidefinite programming (see e.g. [3, p.265-266]), the latter can be transformed into

(23)

Since for any , the constraint in (23) can be condensed to . Then, combining the minimization over with the minimization over , the optimization program (12) becomes equivalent to

(24)

The final step is the introduction of slack variables such that , i.e., , for all . ∎
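To make these semidefinite ingredients concrete, here is a Python sketch that constructs the matrices $A_k$ of (21) and solves a toy semidefinite program featuring a constraint of the form $\sum_k c_k A_k \succeq 0$. It only illustrates how such constraints are passed to a solver (CVXPY here); the objective and the normalization are placeholders, since the exact data of the programs (18) and (24) are not reproduced above.

# Sketch: the matrices A_k of (21) and a toy semidefinite program with a
# constraint built from them. The objective and right-hand side are placeholders,
# not the actual program (18)/(24) of the article.
import numpy as np
import cvxpy as cp

def A(n, k):
    """n x n symmetric matrix with 1's on the k-th sub- and superdiagonal (A_0 = identity)."""
    M = np.zeros((n, n))
    idx = np.arange(n - k)
    M[idx, idx + k] = 1.0
    M[idx + k, idx] = 1.0
    return M

n = 5
c = cp.Variable(n)                                   # coefficients to optimize (placeholder role)
Tc = sum(c[k] * A(n, k) for k in range(n))           # Toeplitz-structured matrix sum_k c_k A_k

constraints = [Tc >> 0,                              # semidefinite constraint
               c[0] == 1.0]                          # placeholder normalization
objective = cp.Minimize(c[1] + 0.5 * c[2])           # placeholder linear objective
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print(prob.status, np.round(c.value, 4))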

3 Bounded approximability set of the first type

We concentrate in this section on the bounded approximability set of the first type, i.e., on

(25) $\mathcal{K} = \{ f \in F : \|f\| \le \kappa \ \text{ and } \ \operatorname{dist}(f, V) \le \varepsilon \}.$

Once again, we shall first describe optimal recovery maps before showing how they can be computed in practice.

3.1 Description of an optimal recovery map

The result below reveals how [4, Theorem 3.1] extends from the model set (2) to the model set (25).

Theorem 3.

If $Q$ is a linear functional, then an optimal recovery map over the bounded approximability set (25) is the linear functional

(26) $\Delta^{\rm opt} : y \in \mathbb{R}^m \mapsto \sum_{i=1}^m a_i^{\rm opt} y_i,$

where the optimal weights $a^{\rm opt} \in \mathbb{R}^m$ are precomputed as a solution to

(27)

As a matter of fact, Theorem 3 is a corollary of Theorem 4 below. The setting of the more general result involves subspaces of a linear space equipped with possibly distinct norms . The model set is then defined, for some parameters , by

(28)

It corresponds to what was called the multispace problem in [2, Section 3]. One works under the assumption that

(29)

This assumption holds for the bounded approximability set of the first type, obtained by taking , , and .

Theorem 4.

If $Q$ is a linear functional, then an optimal recovery map over the model set (28) is the linear functional

(30) $\Delta^{\rm opt} : y \in \mathbb{R}^m \mapsto \sum_{i=1}^m a_i^{\rm opt} y_i,$

where the optimal weights $a^{\rm opt} \in \mathbb{R}^m$ are precomputed as a solution to

(31) subject to
and
Proof.

We first notice that, replacing the norms $\|\cdot\|_j$ by $\varepsilon_j^{-1} \|\cdot\|_j$, we can assume that all the $\varepsilon_j$ equal $1$. Next, since the model set is symmetric and convex, there exists an optimal recovery map which is linear, i.e., of the form $y \mapsto \sum_{i=1}^m a_i y_i$. An optimal weight vector is then obtained as a solution to the optimization problem

(32)

We claim that an optimal weight vector can also be obtained as a solution to the optimization problem

(33)

In other words, we shall prove in two steps that the minimal values of (32) and (33) coincide.

Firstly, we shall justify that the objective function in (32) is bounded by the objective function in (33) — a property which holds independently of . To do so, let us consider such that for some , , . Let us also consider such that and , , . We have

(34)

Taking the infimum over and the supremum over yields the desired result.

Secondly, we shall justify that the minimal value of (33) is bounded by the minimal value of (32). To do so, let us consider the linear space equipped with the norm

(35)

Introducing the subspace of given by

(36)

the assumption (29) is equivalent to . Thus, we can define a linear functional on by

(37)
(38)

Let then denote a Hahn–Banach extension of to the whole . With linear functionals defined for each and by , where appears at the th position, we have for all , hence in particular vanishes on . This implies (see e.g. [11, Lemma 3.9]) that for some . In other words, the first constraint in (33) is satisfied by and . The second constraint is also satisfied: indeed, for , since . Therefore, the minimal value of (33) is bounded by

(39)

The latter equals the norm of on , by virtue of being a Hahn–Banach extension of , so that

(40)

It follows that, for any ,

(41)

Taking the minimum over all shows that is less than or equal to the minimal value of (32), and in turn that the same is true for the minimal value of (33). ∎

Remark.

The approximability set (2), where the condition $\|f\| \le \kappa$ is not imposed, can be viewed as an instantiation of (25) with $\kappa = \infty$. In this instantiation, if the coefficient multiplying $\kappa$ were nonzero, then the objective function in (27) would be infinite. Therefore, the minimum will be attained with the corresponding constraint in effect. We do retrieve the minimization of (5), as expected. We note in passing that, while the optimization program (5) was independent of $\varepsilon$, adding the condition $\|f\| \le \kappa$ does create a dependence on $\varepsilon$ in the optimization problem (27), unless $\kappa$ is proportional to $\varepsilon$.

Remark.

In the presence of observation error in the data $y$, again modeled as in [5] by the bounded uncertainty set

(42) $\{ e \in \mathbb{R}^m : \|e\|_p \le \eta \},$

an optimal recovery map for a linear functional $Q$ over the model set (25) and the uncertainty set (42) simultaneously still consists of a linear functional $y \mapsto \sum_{i=1}^m a_i y_i$, but now the optimal weights are a solution to the optimization program

(43)

The argument follows the ideas presented in [5] and, although more subtle, is once again left to the reader. We do point out that the program (43) is solvable in practice as soon as the program (27) itself is solvable in practice, for instance as in the forthcoming example.

3.2 Computational realization for $F = C[-1,1]$

As before, the high-level results of Theorems 3 and 4 are of little practical use if the minimizations (27) and (31) cannot be performed efficiently. In the important situation $F = C[-1,1]$, the dual functionals appearing as optimization variables are identified with measures. Despite involving infinite-dimensional objects, minimizations over measures can be tackled via semidefinite programming, see e.g. [9]. Although such minimizations are in general not solved exactly, their accuracy can be quantifiably estimated in our specific case. For ease of presentation, we illustrate the approach by concentrating on the optimization program (27) rather than (31). We also assume, as in Subsection 2.2, that $V$ is a polynomial subspace, and we write the observation functionals $\lambda_1, \ldots, \lambda_m$, as well as the quantity of interest $Q$, as

(44) $\lambda_i(f) = \int_{-1}^{1} f \, d\mu_i \quad (i = 1, \ldots, m), \qquad Q(f) = \int_{-1}^{1} f \, d\mu_0,$

for some signed Borel measures $\mu_0, \mu_1, \ldots, \mu_m$ defined on $[-1,1]$. In this way, passing from linear functionals to signed Borel measures as optimization variables, the program (27) reads

(45) subject to
and

Let us introduce as slack variables the nonnegative Borel measures involved in the Jordan decompositions of the signed measures serving as optimization variables, so that the problem (45) is recast as

(46) s.to
and

Next, replacing the measures by their infinite sequences of moments, defined for $k \ge 0$ by

(47)

the problem (46) is equivalent (the equivalence is based on the discrete trigonometric moment problem, see [7] for details) to the infinite semidefinite program

(48) s.to
and
and

Instead of solving this infinite optimization program, we truncate it to a finite level and solve instead the resulting finite semidefinite program

(49) s.to
and
and
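Since the truncated program (49) is only stated abstractly above, the following Python sketch illustrates the mechanism it relies on: nonnegative measures on $[-1,1]$ are replaced by truncated sequences of Chebyshev moments, the positive semidefiniteness of the associated Toeplitz matrices (a necessary condition coming from the trigonometric moment problem) is imposed, and the resulting finite semidefinite program is handed to a solver. The truncation level, objective, and linear constraints are placeholders, not the article's actual data.

# Sketch of a truncated moment-type semidefinite program in the spirit of (48)-(49).
# A nonnegative Borel measure on [-1, 1] is represented by its Chebyshev moments
# c_k = integral of T_k with respect to the measure; positive semidefiniteness of
# the Toeplitz matrix built from (c_0, ..., c_K) is a necessary condition inherited
# from the trigonometric moment problem, kept after truncating at level K.
# Objective, truncation level, and linear constraints are placeholders.
import numpy as np
import cvxpy as cp

K = 8                                                  # truncation level (hypothetical)

def toeplitz_expr(c):
    """Symmetric Toeplitz matrix with (j, l) entry c[|j - l|], as a CVXPY expression."""
    n = c.shape[0]
    mats = []
    for k in range(n):
        A = np.diag(np.ones(n - k), k)
        if k > 0:
            A = A + np.diag(np.ones(n - k), -k)
        mats.append(c[k] * A)
    return sum(mats)

c_plus = cp.Variable(K + 1)                            # moments of one nonnegative slack measure
c_minus = cp.Variable(K + 1)                           # moments of another nonnegative slack measure

constraints = [toeplitz_expr(c_plus) >> 0,             # truncated moment conditions
               toeplitz_expr(c_minus) >> 0,
               c_plus[1] - c_minus[1] == 0.3]          # placeholder linear (data-type) constraint

objective = cp.Minimize(c_plus[0] + c_minus[0])        # placeholder objective (total masses)
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print(prob.status, round(prob.value, 4))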