Box-constrained monotone L_∞-approximations and Lipschitz-continuous regularized functions

Eustasio del Barrio, et al. · 03/20/2019

Let f:[0,1]→[0,1] be a nondecreasing function. The main goal of this work is to provide a regularized version, say f̃_L, of f. Our choice will be a best L_∞-approximation to f in the set of functions h:[0,1]→[0,1] which are Lipschitz-continuous, for a fixed Lipschitz norm bound L, and verify the boundary restrictions h(0)=0 and h(1)=1. Our findings allow us to characterize a solution through a monotone best L_∞-approximation to the Lipschitz regularization of f. This is seen to be equivalent to the alternative route through the average of the Pasch–Hausdorff envelopes. We include results showing stability of the procedure as well as directional differentiability of the L_∞-distance to the regularized version. The problem is motivated by a statistical application involving trimmed versions of distribution functions, used to measure the level of contamination discrepancy from a fixed model.

1 Introduction.

Let us briefly motivate the problem. In his seminal paper [6], Huber introduced the contamination neighbourhood of a probability, which became one of the very bases of Robust Statistics. An (ε-)contamination neighbourhood of a probability distribution P is the set of probability distributions

(1) V_ε(P) := { (1−ε)P + εQ̃ : Q̃ ∈ P },

where P is the set of all probability distributions in the space. Although it can be defined in a wholly general setting, throughout the paper P will be the set of probabilities on the (Borel) sets, β, of the real line ℝ. In this way, given an "ideal" model P, the vicinity V_ε(P) includes those probabilities which are distorted versions of the model through gross or rounding errors: given a particular value ε ∈ (0,1), a probability Q in V_ε(P) would generate samples with an approximate percentage 100(1−ε)% of data coming from P. A dual point of view would consider that such a sample could be suitably "trimmed" so as to obtain a right sample from the model. In fact, even P would arise from an appropriate trimming of Q.
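To fix ideas, the following minimal sketch (Python with NumPy; the function names and the concrete choices of P and Q are ours, purely for illustration) draws a sample from the mixture in (1):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_contaminated(n, eps, sample_P, sample_Q):
        """Draw n points from (1 - eps) * P + eps * Q: each point comes
        from Q with probability eps and from P otherwise."""
        from_Q = rng.random(n) < eps
        return np.where(from_Q, sample_Q(n), sample_P(n))

    # Example: ideal model P = N(0, 1), contamination Q = N(5, 1), eps = 0.1.
    x = sample_contaminated(1000, 0.1, rng.standard_normal,
                            lambda n: 5.0 + rng.standard_normal(n))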

The introduction of general trimmings of a probability goes back at least to [5]. A probability P̃ is said to be a trimming of level ε of Q whenever there exists a down-weighting function f, with 0 ≤ f ≤ 1, such that P̃(A) = (1/(1−ε)) ∫_A f dQ for all the sets A ∈ β. Equivalently, P̃ must be absolutely continuous w.r.t. Q, with Radon–Nikodym derivative bounded by 1/(1−ε). The set of ε-trimmings of the probability distribution Q will be denoted by R_ε(Q):

(2) R_ε(Q) := { P̃ ∈ P : P̃ ≪ Q, dP̃/dQ ≤ 1/(1−ε), Q-a.s. },

and the key link between (1) and (2), obtained in [2], is given by

(3) Q ∈ V_ε(P) if and only if P ∈ R_ε(Q).

The drawback of both approaches is that in a realistic statistical setting we know neither the value ε nor the "contaminated" distribution Q. We just dispose of an approximation Q_n to Q; usually Q_n will be the sample distribution associated to a data set, and our goal is to search for statistical evidence, on the basis of Q_n, for or against the hypothesis Q ∈ V_ε(P). For such a task we can resort to a metric, d, on P and consider

d(P, R_ε(Q_n))

as an estimator of

d(P, R_ε(Q)).

With the introduction of this distance, adjusting the trim level ε accordingly, we measure to what extent our distribution, Q, can be considered as a contaminated version of the model, P. The success of this strategy will strongly depend on the suitability of the metric for this task. Our choice here is the Kolmogorov distance, d_K, that for two probabilities P and Q is defined by the L_∞-distance between their distribution functions F and G, d_K(P,Q) := ‖F − G‖_∞. In Section 2 we will present some alternative characterization of the set R_ε(G) as well as its main topological properties in this setting. In particular we will show (see Lemma 2.4) that, by defining Γ := F∘G^{-1}, F and G being the distribution functions of P and Q, with great generality, the following identity holds:

(4) d_K(P, R_ε(Q)) = min_{h ∈ Λ_ε} ‖h − Γ‖_∞,
where (5) Λ_ε := { h: [0,1] → [0,1] nondecreasing : h(0) = 0, h(1) = 1, ‖h‖_Lip ≤ 1/(1−ε) }.

Here, as will be used throughout, for any real valued mapping f defined on a metric space (X, d), ‖f‖_∞ and ‖f‖_Lip will denote the L_∞ and the Lipschitz norms:

‖f‖_∞ := sup_{x∈X} |f(x)|,  ‖f‖_Lip := sup_{x≠y} |f(x) − f(y)| / d(x, y).

Therefore, (4) translates the problem to finding a useful expression for a best L_∞-approximation to a monotone function by Lipschitz-continuous functions verifying the boundary conditions h(0) = 0, h(1) = 1. This goal should include a computationally feasible characterization of (4), so as to be usable for statistical purposes. Both objectives will be obtained in Theorem 2.5, although the proof will be given through Section 3. There, we will show how the Pasch–Hausdorff envelopes (see [7]) of a monotone function preserve monotonicity and provide the basis to build a best L_∞-approximation verifying the boundary constraints. We will also relate this process with the alternative way of obtaining Ubhaya's monotone L_∞-best approximation (see [9, 10]) to the Lipschitz regularization of the objective function. This approach is followed in Section 4. Finally, we must highlight our results on stability of the constrained regularization (see Proposition 2.2) as well as on directional differentiability of the L_∞-distance to the regularized version (see Theorem 4.4), for which the last approach is better suited. The relevance of this type of results on differentiability has been pointed out in [8], and recently highlighted in relation with statistical applications in [4]. In fact, these results provide a sound mathematical foundation for forthcoming statistical applications of the proposed methodology.

2 The set of trimmings in the L_∞-topological setting

Since probabilities on ℝ are determined by their distribution functions (d.f.'s in the sequel) and (1) and (2) can be equivalently stated in terms of the corresponding distribution functions, we will use the same notation V_ε(F) and R_ε(G), with the same meanings as before, but defined in terms of distribution functions. On the other hand, the Kolmogorov distance between probabilities is defined just through the L_∞-distance between the corresponding d.f.'s, but we will often keep the notation d_K for this distance.

The set R_ε(G) can be also characterized, as shown in [1] (see also Proposition 2.2 in [2] for a more general result), in terms of the set of ε-trimmed versions of the uniform probability on (0,1). Notice that this set is just Λ_ε, as defined in (5). The parameterization, obtained through the composition of the functions h ∈ Λ_ε and G, gives

(6) R_ε(G) = { h∘G : h ∈ Λ_ε },

We note that, as a consequence, the "trimmed Kolmogorov distance" from F to R_ε(G) is

d_K(F, R_ε(G)) = inf_{h ∈ Λ_ε} ‖h∘G − F‖_∞.
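On a finite grid, membership in Λ_ε and this distance can be approximated directly from the parameterization (6). A minimal numerical sketch (Python with NumPy; the grid discretization, the toy example and all names are our own simplifying assumptions):

    import numpy as np

    def in_Lambda(h, t, L, tol=1e-9):
        """Check, on the grid t, the conditions defining the class in (5):
        h(0) = 0, h(1) = 1, nondecreasing, Lipschitz constant at most L."""
        slopes = np.diff(h) / np.diff(t)
        return (abs(h[0]) < tol and abs(h[-1] - 1.0) < tol
                and slopes.min() > -tol and slopes.max() < L + tol)

    def trimmed_sup_dist(F, G, h, t, x):
        """Grid approximation of ||h o G - F||_inf, with h given by its
        values on the grid t (linear interpolation in between)."""
        return np.max(np.abs(np.interp(G(x), t, h) - F(x)))

    # Toy example on [0, 1]: F(x) = x**2, G(x) = x, eps = 0.5 (so L = 2),
    # and the extreme trimming function h(t) = min(L * t, 1).
    L = 2.0
    t = np.linspace(0.0, 1.0, 1001)
    h = np.minimum(L * t, 1.0)
    print(in_Lambda(h, t, L),
          trimmed_sup_dist(lambda u: u**2, lambda u: u, h, t, t))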

The set R_ε(Q) is convex and also well behaved w.r.t. weak convergence of probabilities and widely employed probability metrics (see Section 2 in [2]). We show next that this also holds for R_ε(G) in the L_∞ setting.

Proposition 2.1

For ε ∈ (0,1) and distribution functions F, F_1, F_2, G, G_1, G_2, we have:

  • (a) R_ε(G) is compact w.r.t. ‖·‖_∞.

  • (b) the minimum in d_K(F, R_ε(G)) = min_{F̃ ∈ R_ε(G)} ‖F̃ − F‖_∞ is attained.

  • (c) |d_K(F_1, R_ε(G_1)) − d_K(F_2, R_ε(G_2))| ≤ ‖F_1 − F_2‖_∞ + (1/(1−ε)) ‖G_1 − G_2‖_∞.

Proof. By the Ascoli–Arzelà Theorem, Λ_ε is a compact subset of the space of continuous functions on [0,1] endowed with the uniform norm. Hence, from any sequence of elements in R_ε(G), say {h_n∘G} (recall (6)), we can extract a subsequence {h_{n_k}} which converges uniformly to some h ∈ Λ_ε. But then, obviously, h_{n_k}∘G → h∘G in ‖·‖_∞, which proves (a). Since, on the other hand,

| ‖F̃_1 − F‖_∞ − ‖F̃_2 − F‖_∞ | ≤ ‖F̃_1 − F̃_2‖_∞,

we see that the map F̃ ↦ ‖F̃ − F‖_∞ is continuous and, consequently, it attains its minimum in the compact set R_ε(G), as claimed in (b). Finally, to check (c) we note that, for every h ∈ Λ_ε,

(7) | ‖h∘G_1 − F_1‖_∞ − ‖h∘G_2 − F_2‖_∞ | ≤ ‖F_1 − F_2‖_∞ + ‖h∘G_1 − h∘G_2‖_∞

and

(8) ‖h∘G_1 − h∘G_2‖_∞ ≤ (1/(1−ε)) ‖G_1 − G_2‖_∞.

Now, (7) and (8) yield (c). □

Proposition 2.1 guarantees the existence of optimal d_K-approximations to every distribution function F by ε-trimmed versions of G:

(9) d_K(F, R_ε(G)) = min_{F̃ ∈ R_ε(G)} ‖F̃ − F‖_∞.

It also shows, through (3), that for ε ∈ (0,1)

(10) d_K(F, R_ε(G)) = 0 if and only if G ∈ V_ε(F).

Moreover, by convexity of R_ε(G), the set of optimally trimmed versions of G associated to problem (9) is also convex. However, guaranteeing uniqueness of the minimizer (as it holds w.r.t. the L_2-Wasserstein metric by Corollary 2.10 in [2]) is not possible here.

An additional consequence of Proposition 2.1 is the continuity of d_K(F, R_ε(G)) in F and G. We quote this and some additional facts in our next result.

Proposition 2.2

For ε ∈ (0,1), if F and G, G_n, n ∈ ℕ, are d.f.'s such that ‖G_n − G‖_∞ → 0, then:

  • a) for every F̃ ∈ R_ε(G) there exist F̃_n ∈ R_ε(G_n) such that ‖F̃_n − F̃‖_∞ → 0;

  • b) if F̃_n ∈ R_ε(G_n), then there exists some ‖·‖_∞-convergent subsequence {F̃_{n_k}}. If F̃ is the limit of such a subsequence, necessarily F̃ ∈ R_ε(G);

  • c) if, additionally, F_n, n ∈ ℕ, are d.f.'s such that ‖F_n − F‖_∞ → 0, then d_K(F_n, R_ε(G_n)) → d_K(F, R_ε(G)) as n → ∞.

Proof. To prove a), since F̃ = h∘G with h ∈ Λ_ε, it suffices to consider F̃_n := h∘G_n and recall that h is Lipschitz. For b), we write F̃_n = h_n∘G_n and argue as in the proof of Proposition 2.1 to get a ‖·‖_∞-convergent subsequence h_{n_k} → h ∈ Λ_ε, from which we easily get F̃_{n_k} → h∘G ∈ R_ε(G). Finally, c) is a direct consequence of Proposition 2.1 (c). □

By Polya’s uniform convergence theorem, if and are continuous and are sequences of d.f.’s which, respectively, weakly converge to , then they also converge in the -sense, therefore holds. Also, a direct application of the Glivenko-Cantelli theorem and item c) above guarantee the following strong consistency result.

Proposition 2.3

Let ε ∈ (0,1) and let {G_n} be the sequence of empirical d.f.'s based on a sequence X_1, X_2, … of independent random variables with distribution function G. If {F_n} is any sequence of distribution functions L_∞-approximating the d.f. F (i.e. ‖F_n − F‖_∞ → 0), then:

d_K(F_n, R_ε(G_n)) → d_K(F, R_ε(G)) almost surely.

Given a d.f. F, we write F^{-1} for the associated quantile function (or left continuous inverse function), namely, F^{-1}(t) := inf{ x ∈ ℝ : F(x) ≥ t }. We recall that if U is a random variable uniformly distributed on (0,1), then F^{-1}(U) has d.f. F. Similarly, if X has a continuous d.f. G, the composed function F∘G^{-1} is the quantile function associated to the r.v. F(X). As we show next, under some regularity assumptions d_K(F, R_ε(G)) can be expressed in terms of the function Γ := F∘G^{-1}. We will see later the usefulness of this fact both for the asymptotic analysis and the practical computation of d_K(F_n, R_ε(G)) when F_n is an empirical d.f. based on a data sample x_1, …, x_n. Recall that then F_n(x) = (1/n) #{ i : x_i ≤ x }.

Lemma 2.4

Let ε ∈ (0,1). If F, G are continuous d.f.'s and G is additionally strictly increasing, then

d_K(F, R_ε(G)) = min_{h ∈ Λ_ε} ‖h − F∘G^{-1}‖_∞ and d_K(F_n, R_ε(G)) = min_{h ∈ Λ_ε} max_{1≤i≤n} max( |h(t_i) − i/n|, |h(t_i) − (i−1)/n| ),

where t_i := G(x_(i)).

Proof. For the first identity observe that, G being continuous and strictly increasing,

‖h∘G − F‖_∞ = sup_{x∈ℝ} |h(G(x)) − F(x)| = sup_{t∈(0,1)} |h(t) − F(G^{-1}(t))|.

On the other hand, let x_(1) ≤ ⋯ ≤ x_(n) denote the ordered sample associated to x_1, …, x_n (the same set of values but ordered in nondecreasing sense) and set t_i := G(x_(i)).

Taking into account that F_n and F_n∘G^{-1} are piecewise constant while G and h are nondecreasing and continuous, we obtain

‖h∘G − F_n‖_∞ = max_{1≤i≤n} max( |h(t_i) − i/n|, |h(t_i) − (i−1)/n| ),

and the other identity follows from Proposition 2.1, part (b). □
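Accordingly, in the empirical case the supremum only has to be controlled at the sample knots t_i = G(x_(i)). A small sketch of the resulting finite criterion (Python with NumPy; the reduction follows the proof above, and the names are ours):

    import numpy as np

    def empirical_criterion(h_knots, n):
        """||h o G - F_n||_inf computed only at the knots t_i = G(x_(i)):
        max_i max(|h(t_i) - i/n|, |h(t_i) - (i-1)/n|)."""
        i = np.arange(1, n + 1)
        return np.max(np.maximum(np.abs(h_knots - i / n),
                                 np.abs(h_knots - (i - 1) / n)))

Here h_knots holds the values h(t_1), …, h(t_n) of a candidate h ∈ Λ_ε, so that d_K(F_n, R_ε(G)) is the minimum of this finite criterion over h ∈ Λ_ε.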

Our final result in this section provides a simple representation of d_K(F, R_ε(G)) (hence, of d_K(F_n, R_ε(G))). In this statement we assume that Γ is a nondecreasing function taking values in [0,1] (which is always the case if Γ = F∘G^{-1}). Note that, taking right and left limits at 0 and 1, respectively, we can assume that Γ is a nondecreasing (and left continuous) function from [0,1] to [0,1].

Theorem 2.5

Let ε ∈ (0,1) and put L := 1/(1−ε). Assume Γ: [0,1] → [0,1] is a nondecreasing function. Define

Γ^L(t) := sup_{s∈[0,1]} ( Γ(s) − L|t−s| ),  Γ_L(t) := inf_{s∈[0,1]} ( Γ(s) + L|t−s| ),

and

h̃(t) := min( Lt, 1, max( 0, 1 − L(1−t), (Γ_L(t) + Γ^L(t))/2 ) ),  t ∈ [0,1].

Then, h̃ ∈ Λ_ε and

d_K(F, R_ε(G)) = ‖h̃ − Γ‖_∞ = max{ (1/2)‖Γ^L − Γ_L‖_∞, sup_{0≤t≤1}( Γ(t) − Lt ), sup_{0≤t≤1}( 1 − L(1−t) − Γ(t) ) }.

The proof of this result will be developed in Section 3. In fact, Theorem 3.3 is just a rephrasing of this result. A look at that theorem shows that h̃ is an element of Λ_ε such that ‖h̃ − Γ‖_∞ = d_K(F, R_ε(G)), that is, h̃ is an optimal trimming function in the sense described above. We recall that we do not claim uniqueness of this minimizer, but this particular choice allows us to compute d_K(F_n, R_ε(G)) for sample d.f.'s. Moreover, Theorem 2.5 even provides a simple way for the computation of d_K(F, R_ε(G)) for theoretical distributions. Let us see an illustration of this use.
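For sample d.f.'s this yields a direct grid recipe: compute the two envelopes of Γ, average them, and clip with the boundary bands forced by h(0) = 0, h(1) = 1 and the Lipschitz bound. The following sketch (Python with NumPy) is our own illustration of that recipe, under the assumption that a uniform grid approximation is acceptable; it is not code from the paper:

    import numpy as np

    def envelopes(gamma, t, L):
        """Pasch-Hausdorff envelopes on a grid:
        upper(t) = sup_s (gamma(s) - L|t-s|), lower(t) = inf_s (gamma(s) + L|t-s|)."""
        D = np.abs(t[:, None] - t[None, :])              # all |t_i - t_j|
        upper = np.max(gamma[None, :] - L * D, axis=1)
        lower = np.min(gamma[None, :] + L * D, axis=1)
        return lower, upper

    def regularized_solution(gamma, t, L):
        """Candidate optimal h as in Theorem 2.5: the averaged envelopes
        clipped between the extreme elements of Lambda_eps, together with
        the corresponding value of the L_inf distance."""
        lower, upper = envelopes(gamma, t, L)
        band_lo = np.maximum(0.0, 1.0 - L * (1.0 - t))   # forced by h(1) = 1
        band_up = np.minimum(L * t, 1.0)                 # forced by h(0) = 0
        h = np.clip((lower + upper) / 2.0, band_lo, band_up)
        dist = max((upper - lower).max() / 2.0,
                   (gamma - L * t).max(),
                   (1.0 - L * (1.0 - t) - gamma).max())
        return h, dist

Here dist evaluates the maximum of the three lower bounds appearing in Theorem 2.5, which the clipped average attains.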

Example 2.1 (Trimmed Kolmogorov distances in the Gaussian model.)

Consider the case F = Φ, G(x) = Φ((x − μ)/σ), where Φ denotes the standard normal d.f., μ ∈ ℝ and σ > 0, and write L = 1/(1−ε). Here we have Γ(t) = Φ(μ + σΦ^{-1}(t)). We note that Γ'(t) ≤ L if and only if ψ(Φ^{-1}(t)) ≥ 0, where

(11) ψ(x) := (σ² − 1)x² + 2μσx + μ² + 2 log(L/σ).

To avoid cumbersome computations we focus on the cases σ = 1, μ ≠ 0 and μ = 0, σ ≠ 1.

If σ = 1 and μ > 0 then ψ is linear with positive slope and we see that ψ(Φ^{-1}(t)) ≥ 0 if and only if t ≥ t_L := Φ(−μ/2 − (log L)/μ). This means that Γ(t) − Lt is increasing in (0, t_L) and decreasing in (t_L, 1). Since Γ(1) − L = 1 − L < 0, we have that Γ(t) > Lt for 0 < t < t*, where t* is (the unique) solution to Γ(t) = Lt in (t_L, 1), and Γ(t) ≤ Lt for t* ≤ t ≤ 1. We conclude that d_K(F, R_ε(G)) = Γ(t_L) − L t_L. The case μ < 0 can be handled similarly to obtain

(12) d_K(Φ, R_ε(Φ(· − μ))) = Φ( |μ|/2 − (log L)/|μ| ) − L Φ( −|μ|/2 − (log L)/|μ| ).

We focus now on the case μ = 0. If σ < 1, ψ is a parabola with negative leading coefficient and discriminant 8(1 − σ²) log(L/σ) > 0. Hence, ψ is positive for x ∈ (−r, r) with r := ( 2 log(L/σ)/(1 − σ²) )^{1/2}. Equivalently, Γ'(t) ≤ L if and only if t_1 ≤ t ≤ t_2, where t_1 = Φ(−r) and t_2 = Φ(r). This means that Γ(t) − Lt is increasing in (0, t_1), decreasing in (t_1, t_2), increasing in (t_2, 1), and Γ(1) − L = 1 − L < 0. Arguing as above, we have Γ(t) ≥ Lt for t ≤ t*, Γ(t) ≤ Lt for t ≥ t*, for some t* ∈ (t_1, t_2), and d_K(F, R_ε(G)) = Γ(t_1) − L t_1. We conclude that

d_K(Φ, R_ε(N(0, σ²))) = Φ(−σr) − L Φ(−r),  σ < 1.

If 1 ≤ σ ≤ L then we have that ψ(x) ≥ 0 for all x and Γ'(t) ≤ L for every t, so that Γ ∈ Λ_ε. In particular, d_K(Φ, R_ε(N(0, σ²))) = 0.

Finally, we consider the case σ > L. In this case ψ is positive for |x| > r, with r := ( 2 log(σ/L)/(σ² − 1) )^{1/2}. This means that Γ'(t) ≤ L for t ∉ (t_1, t_2), with t_1 = Φ(−r), t_2 = Φ(r). Therefore, Γ(t) − Lt is decreasing in (0, t_1), increasing in (t_1, t_2), decreasing in (t_2, 1), and it attains its minimum value m := Φ(−σr) − LΦ(−r) at t_1 and its maximum value (1 − L) − m at t_2. Hence, Γ^L(t) − Γ_L(t) = (1 − L) − 2m for t_1 ≤ t ≤ t_2, and this gap dominates the boundary terms in Theorem 2.5. In particular, d_K(F, R_ε(G)) = (1/2)‖Γ^L − Γ_L‖_∞, that is,

d_K(Φ, R_ε(N(0, σ²))) = (1 − L)/2 + L Φ(−r) − Φ(−σr),  σ > L.
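As a numerical sanity check of (12), one can compare the closed-form value with a grid evaluation of the expression in Theorem 2.5 (Python with NumPy/SciPy; the script, grid and tolerance choices are ours):

    import numpy as np
    from scipy.stats import norm

    eps, mu = 0.1, 1.0
    L = 1.0 / (1.0 - eps)

    # Closed form (12) for sigma = 1.
    a, b = abs(mu) / 2.0, np.log(L) / abs(mu)
    closed = norm.cdf(a - b) - L * norm.cdf(-a - b)

    # Grid evaluation of Theorem 2.5 for Gamma(t) = Phi(mu + Phi^{-1}(t)).
    t = np.linspace(1e-6, 1.0 - 1e-6, 2001)
    gamma = norm.cdf(mu + norm.ppf(t))
    D = np.abs(t[:, None] - t[None, :])
    upper = np.max(gamma[None, :] - L * D, axis=1)
    lower = np.min(gamma[None, :] + L * D, axis=1)
    grid = max((upper - lower).max() / 2.0, (gamma - L * t).max(),
               (1.0 - L * (1.0 - t) - gamma).max())
    print(closed, grid)   # the two values should agree up to grid error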

3 Best L_∞-approximations by Lipschitz-continuous functions with box constraints

In this section we refresh the notation. The role of 1/(1−ε) will be played now by a generic Lipschitz constant L > 0; our Γ will be substituted by a bounded function f: X → ℝ, where (X, d) is (at least at the beginning) a general metric space, while we maintain [0,1] as the range of values. We will also use the notation f ∨ g (resp. f ∧ g) for the maximum (resp. minimum) of both numbers (or functions). Regarding the Lipschitz norm, recall the trivial inequalities

(13) ‖f ∨ g‖_Lip ≤ ‖f‖_Lip ∨ ‖g‖_Lip and ‖f ∧ g‖_Lip ≤ ‖f‖_Lip ∨ ‖g‖_Lip.

The first lemma collects some basic properties on the role of the Pasch–Hausdorff envelopes of a function in obtaining a Lipschitz-continuous best L_∞-approximation with constrained Lipschitz constant. For the sake of completeness, we will also include a simple proof.

Lemma 3.1

For a bounded function f: X → ℝ and a constant L > 0, let us consider

f^L(x) := sup_{y∈X} ( f(y) − L d(x,y) ),  f_L(x) := inf_{y∈X} ( f(y) + L d(x,y) ),  x ∈ X.

  • (i) This defines functions f_L, f^L: X → ℝ such that f_L ≤ f ≤ f^L.

  • (ii) f_L is the pointwise largest function h satisfying ‖h‖_Lip ≤ L and h ≤ f. Likewise, f^L is the pointwise smallest function h satisfying ‖h‖_Lip ≤ L and h ≥ f.

  • (iii) The average f̃_L := (f_L + f^L)/2 satisfies ‖f̃_L‖_Lip ≤ L and

    ‖f̃_L − f‖_∞ = (1/2) ‖f^L − f_L‖_∞ ≤ ‖g − f‖_∞

    for any function g such that ‖g‖_Lip ≤ L.

Proof. Part (i) follows directly from the definitions of f_L and f^L, because, for every x ∈ X (taking y = x in the infimum and the supremum):

f_L(x) ≤ f(x) ≤ f^L(x).

To address part (ii) observe that, for arbitrary x, x', y ∈ X, the triangle inequality for the distance implies f(y) + L d(x,y) ≤ f(y) + L d(x',y) + L d(x,x'), leading to the inequalities

f_L(x) ≤ f_L(x') + L d(x,x'),  f^L(x) ≤ f^L(x') + L d(x,x'),

thus to ‖f_L‖_Lip ≤ L and ‖f^L‖_Lip ≤ L. Now, if h satisfies ‖h‖_Lip ≤ L and h ≤ f, then for x, y ∈ X: h(x) ≤ h(y) + L d(x,y) ≤ f(y) + L d(x,y), with equality if y = x. Hence

h(x) ≤ inf_{y∈X} ( f(y) + L d(x,y) ) = f_L(x).

Analogously, it follows from ‖h‖_Lip ≤ L and h ≥ f that h ≥ f^L, proving (ii).

As to part (iii), let g satisfy ‖g‖_Lip ≤ L and set c := ‖g − f‖_∞. Then g − c ≤ f ≤ g + c and ‖g − c‖_Lip = ‖g + c‖_Lip ≤ L. Consequently, by part (ii),

g − c ≤ f_L ≤ f^L ≤ g + c.

This implies that

f^L − f_L ≤ 2c,

whence (1/2)‖f^L − f_L‖_∞ ≤ ‖g − f‖_∞.

Since f_L ≤ f ≤ f^L gives ‖f̃_L − f‖_∞ ≤ (1/2)‖f^L − f_L‖_∞, taking g = f̃_L gives the announced equality. □
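The statements of Lemma 3.1 are easy to test numerically. A small self-contained check (Python with NumPy; a grid version of the definitions above, our own illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 500)
    f = np.cumsum(rng.random(500)); f = f / f[-1]   # a rough nondecreasing function
    L = 2.0

    D = np.abs(x[:, None] - x[None, :])
    f_up = np.max(f[None, :] - L * D, axis=1)       # smallest L-Lipschitz majorant
    f_lo = np.min(f[None, :] + L * D, axis=1)       # largest L-Lipschitz minorant
    f_tilde = (f_up + f_lo) / 2.0                   # candidate best approximation

    assert np.all(f_lo <= f + 1e-12) and np.all(f <= f_up + 1e-12)    # part (i)
    assert np.max(np.abs(np.diff(f_tilde) / np.diff(x))) <= L + 1e-9  # Lipschitz bound
    # part (iii): the two printed values coincide.
    print(np.max(np.abs(f_tilde - f)), 0.5 * np.max(f_up - f_lo))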

When X is a real interval and f is non-decreasing, the functions f_L and f^L in Lemma 3.1 share that property and can be alternatively expressed in terms of the Ubhaya monotone envelopes of the function x ↦ f(x) − Lx. This is the content of the following lemma.

Lemma 3.2

Let I ⊂ ℝ be a real interval, equipped with the usual distance d(x,y) = |x − y|. If f: I → ℝ is non-decreasing, then the functions f_L, f^L in Lemma 3.1 are non-decreasing too, and for arbitrary x ∈ I and L > 0,

f^L(x) = Lx + ǧ(x) and f_L(x) = Lx + ĝ(x),

where ǧ, ĝ are the non-increasing functions

ǧ(x) := sup_{y∈I, y≥x} ( f(y) − Ly ),  ĝ(x) := inf_{y∈I, y≤x} ( f(y) − Ly ).

In particular,

(14) f̃_L(x) = Lx + ( ǧ(x) + ĝ(x) )/2.

Proof. The representations of f^L and f_L in terms of ǧ and ĝ follow from the fact that, for arbitrary x ∈ I,

sup_{y≤x} ( f(y) − L(x − y) ) ≤ f(x) ≤ sup_{y≥x} ( f(y) − L(y − x) ) and inf_{y≥x} ( f(y) + L(y − x) ) ≥ f(x) ≥ inf_{y≤x} ( f(y) + L(x − y) ),

where the inequalities follow from f being non-decreasing. Note that both functions ǧ and ĝ are non-increasing, but adding the term Lx to them leads to non-decreasing functions: for x_1 ≤ x_2 in I, isotonicity of f implies that

Lx_1 + ǧ(x_1) ≤ Lx_2 + ǧ(x_2)

and

Lx_1 + ĝ(x_1) ≤ Lx_2 + ĝ(x_2),

because ĝ(x_1) ≤ f(x_1) − Lx_1, ǧ(x_2) ≥ f(x_2) − Lx_2 and f(x_1) ≤ f(y) ≤ f(x_2) for every y ∈ [x_1, x_2]. □
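On a grid, the representation in Lemma 3.2 also replaces the quadratic-cost computation of the envelopes by running extrema, since ǧ and ĝ are just a backward running maximum and a forward running minimum of f(y) − Ly. A sketch (Python with NumPy; our own illustration):

    import numpy as np

    def envelopes_monotone(f, x, L):
        """Envelopes of a nondecreasing f via Lemma 3.2:
        f^L(x) = Lx + sup_{y >= x}(f(y) - Ly),  f_L(x) = Lx + inf_{y <= x}(f(y) - Ly)."""
        g = f - L * x
        g_up = np.maximum.accumulate(g[::-1])[::-1]   # backward running max
        g_lo = np.minimum.accumulate(g)               # forward running min
        return L * x + g_lo, L * x + g_up             # (f_L, f^L)

    # The average in (14), i.e. the Lipschitz regularization, in O(n):
    # f_lo, f_up = envelopes_monotone(f, x, L); f_tilde = (f_lo + f_up) / 2

For nondecreasing inputs this O(n) computation agrees with the direct evaluation of the definitions in Lemma 3.1.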

Finally, let us include in the problem the boundary restrictions.

Theorem 3.3

Let f: [0,1] → [0,1] be non-decreasing. For L > 1 consider the function