1 Introduction
1.1 Background
In many scientific problems, the parameter of interest is a componentwise monotone function. In practice, an estimator of this function may have several desirable statistical properties, yet fail to be monotone. This often occurs when this estimator is obtained through the pointwise application of a statistical procedure over the domain of the function. For instance, we may be interested in estimating a conditional cumulative distribution function
, defined pointwise as , over its domain . Here, may represent an outcome and an exposure. The map is necessarily monotone for each fixed . In some scientific contexts, it may be known that is also monotone for each , in which case is a bivariate componentwise monotone function. An estimator of can be constructed by estimating the conditional distribution function for each on a finite grid and performing suitable interpolation elsewhere. If estimation is performed separately for each exposure value, the resulting estimator may fail to be monotone as a function of
for certain values of .

Whenever the function of interest is componentwise monotone, failure of an estimator to itself be monotone can be problematic. This is most apparent if the monotonicity constraint is probabilistic in nature – that is, the parameter mapping is monotone under all possible probability distributions. This is the case, for instance, if
is a distribution function. In such settings, returning a function estimate that fails to be monotone is nonsensical, somewhat like reporting a probability estimate outside the interval [0, 1]. However, even if the monotonicity constraint is based on scientific knowledge rather than probabilistic constraints, failure of an estimator to be monotone can still be an issue. For example, if the parameter of interest represents average height or weight among children as a function of age, scientific collaborators would likely be unsatisfied if presented with an estimated curve that is not monotone. Finally, as we will see, there are often finite-sample performance benefits to ensuring that the monotonicity constraint is respected.

Whenever this phenomenon occurs, it is natural to seek an estimator that respects the monotonicity constraint but nevertheless remains close to the initial estimator, which may otherwise have good statistical properties. A monotone estimator can be naturally constructed by projecting the initial estimator onto the space of monotone functions with respect to some norm. A common choice is the L2 norm, which amounts to using multivariate isotonic regression to correct the initial estimator.
1.2 Contribution and organization of the article
In this article, we discuss correcting an initial estimator of a multivariate monotone function by computing the isotonic regression of the estimator over a finite grid in the domain, and interpolating between grid points. We also consider correcting an initial confidence band by using the same procedure applied to the upper and lower limits of the band. We provide three results regarding this simple procedure.

Building on the results of Robertson et al. (1988) and Chernozhukov et al. (2009), we demonstrate that the corrected estimator is at least as good as the initial estimator, meaning:

its uniform error over the grid used in defining the projection is less than or equal to that of the initial estimator for every sample;

its uniform error over the entire domain is less than or equal to that of the initial estimator asymptotically;

the corrected confidence band contains the true function on the projection grid whenever the initial band does, at no cost in terms of average or uniform band width.


We provide high-level sufficient conditions under which the uniform difference between the initial and corrected estimators is for a generic sequence .

We provide simpler lower-level sufficient conditions at the rate for use whenever the initial estimator is uniformly asymptotically linear.
We apply our theoretical results to two sets of examples: nonparametric efficient estimation of a G-computed distribution function for a binary exposure, and kernel-based estimation of a conditional distribution function with a continuous exposure.
Other authors have considered the correction of an initial estimator using isotonic regression. To name a few, Mukerjee and Stern (1994) used a projection-like procedure applied to a kernel smoothing estimator of a regression function, whereas Patra and Sen (2016) used the projection procedure applied to a univariate cumulative distribution function in the context of a mixture model. These articles addressed the properties of the projection procedure in their specific applications. In contrast, we provide general results that are applicable broadly.
We also note that the projection approach is not the only possible correction procedure. Dette et al. (2006), Chernozhukov et al. (2009), and Chernozhukov et al. (2010) studied a correction based on monotone rearrangements and compared the two procedures. However, monotone rearrangements do not generalize to the multivariate setting as naturally as projections – for example, Chernozhukov et al. (2009) proposed averaging a variety of possible multivariate monotone rearrangements to obtain a final monotone estimator. Daouia and Park (2013) also proposed a correction procedure that consists of taking a convex combination of upper and lower monotone envelope functions. They demonstrated that their estimator is asymptotically equivalent in supremum norm to the projectionbased correction, so application of their results requires studying the properties of the projection correction. In our view, the projection approach is preferable to strategies based either on rearrangements or enveloping because the projection is unique in both univariate and multivariate contexts. Additionally, projections are easy to understand, rely on familiar tools, and have minimal squarederror impact on the initial estimator among all candidate monotone corrections.
2 Main results
2.1 Definitions and statistical setup
Let be a statistical model of probability measures on a probability space . Let be a parameter of interest on , where and is the Banach space of bounded functions from to equipped with supremum norm . We have specified this particular domain for simplicity, but the results established here apply to any bounded rectangular domain . For each , denote by the evaluation of at and note that is a bounded real-valued function on . For any , denote by the evaluation of at .
For any vector
and , denote by the component of . Define the partial order on by setting if and only if for each . A function is called (componentwise) monotone nondecreasing if implies that . Denote for any vector . Additionally, denote by the convex set of bounded monotone nondecreasing functions from to . For concreteness, we focus on nondecreasing functions, but all results established here apply equally to nonincreasing functions.

Let and suppose that is nonempty. Generally, this inclusion is strict only if, rather than being implied by the rules of probability, the monotonicity constraint stems at least in part from prior scientific knowledge. Also, define . We are primarily interested in settings where , since in this case there is no additional knowledge about encoded by , and in particular there is no danger of yielding a corrected estimator that is compatible with no .
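The componentwise partial order and the resulting notion of monotonicity can be checked directly on a finite grid. The helper below is a brute-force illustration with hypothetical names of our own, not notation from the article:

```python
from itertools import product

def leq(u, v):
    """Componentwise partial order: u <= v iff u_j <= v_j for every j."""
    return all(a <= b for a, b in zip(u, v))

def is_componentwise_monotone(f, grid_points):
    """Check that f(u) <= f(v) whenever u <= v componentwise, over a finite grid."""
    return all(f(u) <= f(v)
               for u, v in product(grid_points, repeat=2) if leq(u, v))

# A bivariate example on a small lattice in [0, 1]^2.
lattice = [(i / 4, j / 4) for i in range(5) for j in range(5)]
monotone = lambda x: x[0] + x[1] ** 2    # nondecreasing in each argument
not_monotone = lambda x: x[0] - x[1]     # decreasing in the second argument
```

Here the first function is nondecreasing in each argument while the second is decreasing in its second argument, and the brute-force check distinguishes them.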
Suppose that observations are sampled independently from an unknown distribution , and that we wish to estimate based on these observations. Suppose that, for each , we have access to an estimator of based on . We note that the assumption that the data are independent and identically distributed is not necessary for Theorems 1 and 2 below.
The central premise of this article is that may have desirable statistical properties for each or even uniformly in , but that as an element of may not fall in for any finite or even with probability tending to one. Our goal is to provide a corrected estimator that necessarily falls in , and yet retains the statistical properties of . A natural way to accomplish this is to define as the closest element of to in some norm on . Ideally, we would prefer to take to minimize over . However, this is not tractable for two reasons. First, optimization over the entirety of is an infinite-dimensional optimization problem, and hence frequently computationally intractable. To resolve this issue, for each , we let be a finite rectangular lattice in over which we will perform the optimization, and define and consider as the supremum norm over . While it is now computationally feasible to define as a minimizer over of the finite-dimensional objective function , this objective function is challenging to minimize due to its nondifferentiability. Instead, we define
(2.1)   $\theta_n^* \in \operatorname*{argmin}_{m \in \mathcal{F}_0} \sum_{x \in \mathcal{A}_n} \{\theta_n(x) - m(x)\}^2$,

where $\theta_n$ denotes the initial estimator, $\mathcal{F}_0$ the class of monotone candidate functions, and $\mathcal{A}_n$ the grid.
The squared-error objective function is smooth in its arguments. In dimension one, the estimator thus defined is simply the isotonic regression of on the grid , which has a closed-form representation as the greatest convex minorant of the so-called cumulative sum diagram. Furthermore, since , many of our results also apply to .
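In dimension one, the minimizer in (2.1) is computable by the classical pool-adjacent-violators algorithm (PAVA), which realizes the greatest-convex-minorant representation mentioned above. The following is a minimal unweighted sketch of our own, not code from the article:

```python
def pava(y):
    """Unweighted pool-adjacent-violators: the L2 projection of the
    sequence y onto the set of nondecreasing sequences."""
    # Each block stores [sum of pooled values, number of pooled values].
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge backwards while adjacent block means violate monotonicity
        # (compare means by cross-multiplication to avoid division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    # Expand each block back to its constituent positions at the block mean.
    return [s / n for s, n in blocks for _ in range(n)]
```

For example, `pava([1.0, 3.0, 2.0, 4.0])` returns `[1.0, 2.5, 2.5, 4.0]`: the violating pair (3, 2) is pooled to its average.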
We note that is only uniquely defined on . To completely characterize , we must monotonically interpolate function values between elements of . We will permit any monotonic interpolation that satisfies a weak condition. By the definition of a rectangular lattice, every can be assigned a hyperrectangle whose vertices are elements of and whose interior has empty intersection with . If multiple such hyperrectangles exist for , such as when lies on the boundary of two or more such hyperrectangles, one can be assigned arbitrarily. We will assume that, for , for weights such that . In words, we assume that is a convex combination of the values of on the vertices of the hyperrectangle containing . A simple interpolation approach consists of setting with the element of closest to , and choosing any such element if there are multiple elements of equally close to . This particular scheme satisfies our requirement.
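In one dimension, the convex-combination requirement says that the interpolated value at a point lies between the values at the two grid points bracketing it. Both the nearest-grid-point scheme described above and ordinary linear interpolation satisfy it; the helper below is a sketch with names of our own:

```python
import bisect

def interpolate_monotone(grid, values, x, scheme="linear"):
    """Extend a function from a sorted 1-D grid to a point x by a convex
    combination of the values at the two bracketing grid points."""
    j = bisect.bisect_right(grid, x) - 1
    if j < 0:                        # x at or below the first grid point
        return values[0]
    if j >= len(grid) - 1:           # x at or beyond the last grid point
        return values[-1]
    left, right = grid[j], grid[j + 1]
    if scheme == "nearest":
        return values[j] if x - left <= right - x else values[j + 1]
    w = (x - left) / (right - left)  # weight on the right vertex
    return (1 - w) * values[j] + w * values[j + 1]
```

Whenever the grid values are monotone, both schemes return a monotone extension, as required.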
Finally, for each , we let denote lower and upper endpoints of a confidence band for . We then define and as the corrected versions of and using the same projection and interpolation procedure defined above for obtaining from .
2.2 Properties of the projected estimator
The projected estimator is the isotonic regression of over the grid . Hence, many existing finitesample results on isotonic regression can be used to deduce properties of . Theorem 1 below collects a few of these properties, building upon the results of Barlow et al. (1972) and Chernozhukov et al. (2009). We denote as the mesh of in .
Theorem 1.

It holds that .

If and is continuous on , then .

If there exists some for which as , then .

Whenever for all , for all .

It holds that and .
Before presenting the proof of Theorem 1, we remark briefly on its implications. Part (i) says that the estimation error of over the grid is never worse than that of , whereas parts (ii) and (iii) say that the estimation error of on all of is asymptotically no worse than the estimation error of in supremum norm. Similarly, part (iv) says that the isotonized band never has worse coverage than the original band over . Finally, part (v) says that the potential increase in coverage comes at no cost to the average or supremum width of the bands over . We note that parts (i), (iv) and (v) hold true for each .
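Part (i) is a deterministic, per-sample guarantee and can be checked numerically. The sketch below is our own illustration, with a compact pool-adjacent-violators routine inlined so the snippet runs on its own: it projects a noisy estimate of a monotone curve and compares supremum errors over the grid.

```python
import random

def _pava(y):
    # Compact pool-adjacent-violators: L2 projection onto nondecreasing sequences.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    return [s / n for s, n in blocks for _ in range(n)]

random.seed(1)
grid_size = 50
theta0 = [(k / grid_size) ** 2 for k in range(grid_size)]  # true monotone values
theta_hat = [t + random.gauss(0, 0.05) for t in theta0]    # noisy, possibly non-monotone
theta_proj = _pava(theta_hat)

err_init = max(abs(a - b) for a, b in zip(theta_hat, theta0))
err_proj = max(abs(a - b) for a, b in zip(theta_proj, theta0))
# Part (i): the projection never increases the sup error over the grid.
assert err_proj <= err_init
```

The final assertion holds for every realization of the noise, not merely on average, which is exactly the content of part (i).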
We now present the proof of Theorem 1.
Proof of Theorem 1.
Part (i) follows from Corollary B to Theorem 1.6.1 of Robertson et al. (1988). For parts (ii) and (iii), we note that by assumption
for every , where , and for each , and . By part (i), the first term is bounded above by . The second term is bounded above by , where we define . If is continuous on , then it is also uniformly continuous since is compact. Therefore, as , so that if . If as , then .
Part (iv) follows from the proof of Proposition 3 of Chernozhukov et al. (2009), which applies to any order-preserving monotonization procedure. For the first statement of (v), by their definition as minimizers of the least-squares criterion function, we note that , and similarly for . The second statement of (v) follows from a slight modification of Theorem 1.6.1 of Robertson et al. (1988). As stated, the result says that for any convex function and monotone function , where is the isotonic regression of over . A straightforward adaptation of the proof indicates that , where now and are the isotonic regressions of and over , respectively. As in Corollary B, taking and letting yields that . Applying this with and establishes the second portion of (v). ∎
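The least-squares characterization used for the first statement of (v) also reflects a familiar fact: isotonic regression preserves the grid average of its input, since pooling replaces values by block means. Hence the average band width is unchanged by the correction. A quick numerical check of our own, with a pool-adjacent-violators helper inlined for self-containment:

```python
def _pava(y):
    # Pool-adjacent-violators: L2 projection onto nondecreasing sequences.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    return [s / n for s, n in blocks for _ in range(n)]

lower = [0.10, 0.05, 0.20, 0.15, 0.30]
upper = [0.40, 0.35, 0.50, 0.45, 0.60]
l_star, u_star = _pava(lower), _pava(upper)

avg = lambda v: sum(v) / len(v)
# Averages are preserved by the projection, hence so is the average band width.
assert abs(avg(l_star) - avg(lower)) < 1e-12
assert abs(avg(u_star) - avg(upper)) < 1e-12
```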
While comprehensive in scope, Theorem 1 does not rule out the possibility that performs strictly better, even asymptotically, than , or that the band is asymptotically strictly more conservative than
. In order to construct confidence intervals or bands with correct asymptotic coverage, a stronger result is needed: it must be that
, where is a diverging sequence such that converges in distribution to a nondegenerate limit distribution. Then, we would have that converges in distribution to this same limit, and hence confidence bands constructed using approximations of this limit distribution would have correct coverage when centered around , as we discuss more below.

We consider the following conditions on and the initial estimator :
 (A)

there exists a deterministic sequence tending to infinity such that, for all ,
 (B)

there exist such that for all .
Condition (A) is related to, but slightly weaker than, uniform stochastic equicontinuity (van der Vaart and Wellner, 1996, p. 37). (A) follows if, in particular, the process converges weakly to a tight limit in the space . However, the latter condition is sufficient but not necessary for (A) to hold. This is important for application of our results to kernel smoothing estimators, which typically do not converge weakly to a tight limit, but for which condition (A) nevertheless often holds. We discuss this at length in Section 3.2. Condition (B) constrains the variation of from both above and below, and is slightly more restrictive than a requirement for strict monotonicity. If, for instance, is differentiable, then (B) is satisfied if all first-order partial derivatives of are bounded away from zero and bounded above.
Based on these conditions, we have the following result.
Theorem 2.
If conditions (A)–(B) hold and , then .
We prove Theorem 2 below. This result indicates that the projected estimator is uniformly asymptotically equivalent to the original estimator in supremum norm at the rate . In addition to conditions (A)–(B), Theorem 2 requires that the mesh of tend to zero in probability faster than . Since is chosen by the user, this is not a problem in practice.
The lefthand side of the inequality in condition (B) excludes, for instance, situations in which is differentiable with null derivative over an interval. In such cases,
may have strictly smaller variance on these intervals than
because will pool estimates across the flat region while may not. Hence, in such cases, may potentially asymptotically improve on , so that and are not asymptotically equivalent at the rate .

We prove Theorem 2 via three lemmas, which we first establish. The first lemma controls the size of deviations in over small neighborhoods.
Lemma 1.
If (A)–(B) hold and , then .
Proof of Lemma 1.
In view of the triangle inequality, we note that is bounded above by . The first term is by (A), whereas the second term is by (B). ∎
The second lemma controls the size of neighborhoods over which violations in monotonicity can occur. Henceforth, we define
Lemma 2.
If conditions (A)–(B) hold, then .
Proof of Lemma 2.
Let and . Suppose that . Then, there exist with and such that . We claim that there must also exist with and such that . To see this, let , and note that . Define for , and set . Thus, and for each . Since then , it must be that for at least one . This proves the claim.
We now have that implies that there exist with and such that . This further implies that
by condition (B). Finally, this allows us to write
By condition (A), this probability tends to zero for every , which completes the proof.∎
Our final lemma bounds the maximal absolute deviation between and over the grid in terms of the supremal deviations of over neighborhoods smaller than .
Lemma 3.
The inequality holds.
Proof of Lemma 3.
By Theorem 1.4.4 of Robertson et al. (1988), for any ,
where, for any finite set , is defined as . The sets range over the collection of upper sets of containing , where is called an upper set if and implies . The sets range over the collection of lower sets of containing , where is called a lower set if and implies .
Let and . First, suppose there exists and with and . Then, we claim that there exists another lower set such that . If , then satisfies the claim. Otherwise, if , let . One can verify that , and since , is a strict subset of . Furthermore, by definition of , for all such that , and since , removing these elements from can only reduce the average, so that . This establishes the claim. By an analogous argument, we can show that if there exists and with and , then there exists another upper set such that .
Let and . Then
Hence, . By the above argument, and . Therefore,
and thus, . Taking the maximum over yields the claim.∎
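The min-max representation invoked at the start of the proof takes a particularly transparent form in one dimension, where the upper sets containing a point are right segments of the grid and the lower sets are left segments. The brute-force sketch below is our own illustration of that representation, not code from the article:

```python
def isotonic_minmax(y):
    """Isotonic regression via the min-max formula: in one dimension, the
    fit at position i is
        max over left endpoints s <= i of
        min over right endpoints t >= i of  mean(y[s..t])."""
    n = len(y)
    def mean(s, t):
        return sum(y[s:t + 1]) / (t - s + 1)
    return [max(min(mean(s, t) for t in range(i, n)) for s in range(i + 1))
            for i in range(n)]
```

For `[1, 3, 2, 4]` this reproduces the isotonic regression `[1, 2.5, 2.5, 4]`, matching the pool-adjacent-violators solution.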
2.3 Construction of confidence bands
Suppose there exists a fixed function such that and satisfy:
 (a)

,
 (b)

,
 (c)

.
As an example of a confidence band that satisfies conditions (a)–(c), suppose that is a scaling function and is a fixed constant such that, as tends to infinity,
If is an estimator of satisfying and is an estimator of such that , then the Wald-type band defined by lower and upper endpoints and satisfies (a)–(c) with . However, the latter conditions can also be satisfied by other types of bands, such as those constructed with a consistent bootstrap procedure.
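The Wald-type construction and its correction can be sketched as follows. The critical value, standard errors, and helper names are placeholders of our own, and the pool-adjacent-violators routine is inlined so the snippet is self-contained:

```python
import math

def _pava(y):
    # Pool-adjacent-violators: L2 projection onto nondecreasing sequences.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    return [s / n for s, n in blocks for _ in range(n)]

def corrected_wald_band(theta_hat, se, crit, n):
    """Project the lower and upper endpoints of the Wald-type band
    theta_hat -/+ crit * se / sqrt(n), separately, onto monotone sequences."""
    half = [crit * s / math.sqrt(n) for s in se]
    lower = [t - h for t, h in zip(theta_hat, half)]
    upper = [t + h for t, h in zip(theta_hat, half)]
    return _pava(lower), _pava(upper)
```

Because isotonic regression is order-preserving, the projected lower endpoint never crosses the projected upper endpoint.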
Under conditions (a)–(c), the confidence band has asymptotic coverage . When conditions (A) and (B) also hold, the corrected band has the same asymptotic coverage as the original band , as stated in the following result.
Corollary 1.
If conditions (A)–(B) and (a)–(c) hold, is uniformly continuous on , and , then the confidence band has asymptotic coverage .
2.4 Special case: asymptotically linear estimators
Suppose now that the initial estimator is uniformly asymptotically linear: for each , there exists depending on such that , , and
(2.2)   $\theta_n(x) - \theta_0(x) = \frac{1}{n} \sum_{i=1}^{n} \phi_{0,x}(O_i) + r_n(x)$
for a remainder term with . The function is the influence function of under sampling from . It is desirable for to have representation (2.2) because this immediately implies its uniform weak consistency as well as the pointwise asymptotic normality of for each . If in addition the collection of influence functions forms a Donsker class, converges weakly in to a Gaussian process with covariance function . Uniform asymptotic confidence bands based on
can then be formed by using appropriate quantiles from any suitable approximation of the distribution of the supremum of the limiting Gaussian process.
We introduce two additional conditions:
 (A1)

the collection of influence functions is a Donsker class;
 (A2)

is uniformly continuous in the sense that
Whenever is uniformly asymptotically linear, Theorem 2 can be shown to hold under (A1), (A2) and (B), as implied by the theorem below. The validity of (A1) and (A2) can be assessed by scrutinizing the influence function of for each . This fact renders the verification of these conditions very simple once uniform asymptotic linearity has been established.
Theorem 3.
For any estimator satisfying (2.2), (A1) and (A2) together imply (A).
3 Illustrative examples
3.1 Example 1: Estimation of a Gcomputed distribution function
We first demonstrate the use of Theorem 3 in the particular problem in which we wish to draw inference on a Gcomputed distribution function. Suppose that the data unit is the vector , where is an outcome, is an exposure, and is a vector of baseline covariates. The observed data consist of independent draws from , where is a nonparametric model.
For and , we define the parameter value pointwise as , the Gcomputed distribution function of evaluated at , where the outer expectation is over the marginal distribution of under . We are interested in estimating . This parameter is often of interest as an interpretable marginal summary of the relationship between and accounting for the potential confounding induced by . Under certain causal identification conditions, is the distribution function of the counterfactual outcome defined by the intervention that deterministically sets exposure to (Robins, 1986; Gill and Robins, 2001).
For each , the parameter is pathwise differentiable in a nonparametric model, and its nonparametric efficient influence function at is given by
where is the propensity score and is the conditional exposure-specific distribution function, as implied by van der Laan and Robins (2003). Given estimators and of and , respectively, several approaches can be used to construct, for each , an asymptotically linear estimator of with influence function . For example, the use of either optimal estimating equations or the one-step correction procedure leads to the doubly robust augmented inverse-probability-of-weighting estimator
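To make the display concrete, the augmented inverse-probability-of-weighting estimator averages, over observations, a G-computation plug-in term plus an inverse-probability-weighted residual. The sketch below is a hypothetical illustration for a binary exposure, assuming the user supplies propensity estimates and conditional distribution estimates already evaluated at each observation; none of the names come from the article.

```python
def aipw_gcomputed_cdf(y, a, A, Y, g_hat, F_hat):
    """Doubly robust AIPW estimate of the G-computed distribution function
    at the point y, for exposure level a.
    A, Y:   lists of observed exposures and outcomes
    g_hat:  estimated propensities P(A = a | W_i), one per observation
    F_hat:  estimated conditional c.d.f. values F(y | a, W_i), one per observation
    """
    n = len(Y)
    total = 0.0
    for Ai, Yi, gi, Fi in zip(A, Y, g_hat, F_hat):
        augment = (Ai == a) / gi * ((Yi <= y) - Fi)  # inverse-weighted residual
        total += augment + Fi                        # plus the G-computation plug-in
    return total / n
```

Evaluating this estimator at each point of a grid in y need not produce a monotone function of y, which is precisely the situation addressed by the isotonic correction of Section 2.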