# Optimal estimation of variance in nonparametric regression with random design

Consider the heteroscedastic nonparametric regression model with random design Y_i = f(X_i) + V^{1/2}(X_i)ε_i, i = 1, 2, …, n, with f(·) and V(·) α- and β-Hölder smooth, respectively. We show that the minimax rate of estimating V(·) under both local and global squared risks is of the order n^{−8αβ/(4αβ+2α+β)} ∨ n^{−2β/(2β+1)}, where a ∨ b := max{a, b}. This result extends the fixed design rate n^{−4α} ∨ n^{−2β/(2β+1)} derived in Wang et al. in a non-trivial manner, as indicated by the entanglement of α and β. In the special case of constant variance, we show that the minimax rate is n^{−8α/(4α+1)} ∨ n^{−1} for variance estimation, which further implies the same rate for quadratic functional estimation and thus unifies the minimax rate under the nonparametric regression model with those under the density model and the white noise model. To achieve the minimax rate, we develop a U-statistic-based local polynomial estimator and a lower bound that is constructed over a specified distribution family of randomness designed for both ε_i and X_i.


## 1 Introduction

Consider the model

$$Y_i = f(X_i) + V^{1/2}(X_i)\,\varepsilon_i, \qquad i = 1, 2, \ldots, n, \tag{1}$$

where X_1, …, X_n are independent and identically distributed (i.i.d.) univariate random design points, and ε_1, …, ε_n are i.i.d. with zero mean and unit variance, and are independent of {X_i}_{i=1}^n. In this paper, we study the optimal estimation of the variance function V(·) under both local and global squared risks. Variance estimation is a fundamental statistical problem [Von Neumann, 1941, 1942; Rice, 1984; Hall et al., 1990] with wide applications. It is useful in, for example, the construction of confidence bands for the mean function, estimation of the signal-to-noise ratio [Verzelen and Gassiat, 2018], and selection of the optimal kernel bandwidth [Fan, 1992].

When the design points X_i are fixed, estimation of V(·) in (1) has been studied in, for example, Müller and Stadtmüller, Hall and Carroll, Ruppert et al., Härdle and Tsybakov, Fan and Yao, and Wang et al. In particular, when X_i = i/n, and f(·) and V(·) in (1) are α- and β-Hölder smooth, respectively, Wang et al. established a minimax rate of the order n^{−4α} ∨ n^{−2β/(2β+1)} under both local and global squared risks. Here, for two real numbers a and b, a ∨ b := max{a, b}.

In contrast, our study focuses on the case where the X_i are i.i.d. random design points on the real line. For this setting, we show that when f(·) and V(·) in (1) are α- and β-Hölder smooth, respectively, the minimax rate of estimating V(·) is of the order n^{−8αβ/(4αβ+2α+β)} ∨ n^{−2β/(2β+1)} under both local and global squared risks. This result has several noteworthy implications:

• The minimax rates in the random and fixed design settings share a common component, n^{−2β/(2β+1)}, as well as the same transition boundary α = β/(4β+2).

• For α < β/(4β+2), a faster rate is achievable with a random design.

• Unlike the fixed design setting, for α < β/(4β+2), α and β are now entangled in the minimax rate in the random design case.
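The comparison in the bullets above can be sanity-checked numerically. In the sketch below, `random_design_exponent` and `fixed_design_exponent` are hypothetical helper names (not from the paper) returning the exponent r in the rate n^{−r}:

```python
def random_design_exponent(alpha, beta):
    """Exponent r in the random-design rate n^{-r}: the slower of the two components."""
    return min(8 * alpha * beta / (4 * alpha * beta + 2 * alpha + beta),
               2 * beta / (2 * beta + 1))

def fixed_design_exponent(alpha, beta):
    """Exponent r in the fixed-design rate n^{-r} of Wang et al."""
    return min(4 * alpha, 2 * beta / (2 * beta + 1))

beta = 1.0
boundary = beta / (4 * beta + 2)  # transition boundary alpha = beta/(4*beta + 2)
for alpha in (0.05, boundary, 0.5):
    print(alpha,
          random_design_exponent(alpha, beta),
          fixed_design_exponent(alpha, beta))
```

For α below the boundary the random-design exponent is strictly larger (a faster rate); at and above the boundary the two coincide at 2β/(2β+1).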

We now discuss this minimax rate in more detail. The upper bound of the minimax rate is achieved by an estimator that combines local polynomial regression with pairwise differences, the latter of which is formulated via U-statistics. Our analysis of this estimator hence relies on the four-term Bernstein inequality in Giné et al., and, unlike classic kernel methods, requires no smoothness assumption on the design density.

For the lower bound, due to the entanglement of α and β in the minimax rate and the additional randomness of {X_i}, the derivation is much more involved than its counterpart in the fixed design setting. We tackle the first difficulty of entangled α and β via a proper localization technique in the construction of the mean function f(·), depicted in Figure 2 in Section 3.2. The second difficulty, caused by the randomness of {X_i}, is resolved with a new trapezoid-shaped construction of the mean f(·), aided by a result due to Kolchin et al. on the sparse multinomial distribution. This result helps characterize the asymptotic behavior of the locations of {X_i} and plays a key role in our proof, but to our knowledge it has not been well exploited in the nonparametric statistics literature.

In the special case of constant variance, (1) reduces to

$$Y_i = f(X_i) + \sigma\varepsilon_i, \qquad i = 1, 2, \ldots, n, \tag{2}$$

and the goal becomes estimation of σ². In this case, the problem is linked to estimation of a quadratic functional, which has been studied in depth in the other two benchmark nonparametric models: the density model [Bickel and Ritov, 1988; Laurent, 1996; Giné and Nickl, 2008] and the white noise model [Donoho and Nussbaum, 1990; Fan, 1991; Laurent and Massart, 2000]. In the density model, one observes an i.i.d. univariate sequence drawn from some unknown density f, and the goal is to estimate ∫f². In the white noise model, one observes a continuous-time process from dY(t) = f(t)dt + n^{−1/2}dW(t) for t ∈ [0, 1], with {W(t)} a standard Wiener process; the goal is to estimate ∫_0^1 f². Under an α-smoothness condition on f, the minimax rate in both of the aforementioned cases is n^{−8α/(4α+1)} ∨ n^{−1} (cf. Theorems 1(ii) and 2(ii) in Bickel and Ritov, and Theorem 4 in Fan).

Following Doksum and Samarov, a quadratic functional of interest under (2) is

$$Q := \int f^2(x)\, p_X(x)\, w(x)\, dx, \tag{3}$$

where p_X(·) is the design density and w(·) is some weight function. Assuming in (2) that f(·) is α-Hölder smooth, we show that the minimax rate of estimating σ² and Q (when σ² is unknown) is n^{−8α/(4α+1)} ∨ n^{−1}, thereby unifying the minimax rate of quadratic functional estimation in all three benchmark nonparametric models.

In this paper, we also provide extensions of (2) to multivariate settings, focusing on the multivariate nonparametric regression model

$$Y_i = f(X_i) + \sigma\varepsilon_i, \qquad i = 1, 2, \ldots, n, \tag{4}$$

and the additive model

$$Y_i = \sum_{k=1}^{d} f_k(X_{i,k}) + \sigma\varepsilon_i, \qquad i = 1, 2, \ldots, n, \tag{5}$$

in both fixed and random designs. Here, X_i ∈ ℝ^d and X_{i,k} denotes the k-th coordinate of X_i, for some fixed positive integer d. Details are given in Sections 4.1 and 4.2.

We summarize the minimax rates in all of the aforementioned models in Table 1.

The rest of the paper is organized as follows. Section 2 presents the simple model (2) with constant variance. Section 3 discusses its heteroscedastic extension (1). Section 4 discusses the multivariate nonparametric regression model (4), the additive model (5), and connections of our study to quadratic functional estimation as well as to variance estimation in the linear model. The essential lower bound proof of the minimax rate under model (2) is presented in Section 5, with the rest of the proofs given in a supplement.

The notation used throughout the paper is as follows. For any positive integer n, [n] denotes the set {1, 2, …, n}. For any real number a, ⌈a⌉ denotes the smallest integer greater than or equal to a. For any positive integer d, 0_d denotes the zero vector of dimension d and I_d denotes the identity matrix of dimension d. For a real vector v, ‖v‖_∞ denotes its infinity norm. For a real matrix A, ‖A‖ denotes its spectral norm and |A| denotes its determinant. For an ℓ-times differentiable function f with some positive integer ℓ, we use f^{(k)} to denote its k-th derivative for k ∈ [ℓ]. For identically distributed random variables X and {X_i}_{i=1}^n, we use P_X and p_X to denote the distribution and density of X, E X to denote its expectation, and p̃_{X_ij} to denote the density of X_i − X_j. Similar notation applies to identically distributed random vectors X and {X_i}_{i=1}^n. For a positive integer d, a mean vector μ, and a covariance matrix Σ, N_d(μ, Σ) stands for the d-dimensional normal distribution with mean μ and covariance Σ; we drop the subscript d for simplicity when d = 1. Φ and φ represent the standard normal distribution and density. For two probability measures P and Q defined on a common space, TV(P, Q) denotes their total variation distance, that is, TV(P, Q) := sup_A |P(A) − Q(A)|. For two real positive sequences {a_n} and {b_n}, a_n ≲ b_n if a_n ≤ C b_n for some absolute constant C > 0. We say a_n ≍ b_n if a_n ≲ b_n and b_n ≲ a_n.

## 2 Homoscedastic case

To illustrate some of the main ideas developed in this paper, we begin with a discussion of the elementary univariate homoscedastic nonparametric regression model (2):

$$Y_i = f(X_i) + \sigma\varepsilon_i, \qquad i = 1, 2, \ldots, n.$$

Here, {X_i}_{i=1}^n are i.i.d. copies of a univariate random variable X, f belongs to an α-Hölder class that will be specified soon, and {ε_i}_{i=1}^n are i.i.d. copies of a random variable ε with zero mean and unit variance and are independent of {X_i}_{i=1}^n. Both the mean function f and the distribution of (X, ε) are assumed unknown.

Model (2) has been extensively studied using residual-based and difference-based methods; see, among many others, Von Neumann, Rice, Gasser et al., Hall et al., Hall and Marron, Thompson et al., and Wang et al. A related functional estimation problem has also been studied in semiparametric models [Robins et al., 2008, 2009]. Most of the previous studies focus on the case of fixed design, especially the equidistant design with X_i = i/n, i ∈ [n], for which the minimax rate of estimating σ² under an α-Hölder smoothness constraint on f is known to be n^{−4α} ∨ n^{−1} (cf. Theorems 1 and 2 in Wang et al.).

In detail, let I be a fixed (possibly infinite) interval on the real line. Define the Hölder class on I as follows:

$$\Lambda_{\alpha,I}(C) := \Big\{f:\ \big|f^{(\lfloor\alpha\rfloor)}(x) - f^{(\lfloor\alpha\rfloor)}(y)\big| \le C|x-y|^{\alpha-\lfloor\alpha\rfloor}\ \text{for all}\ x, y \in I\Big\}, \tag{6}$$

where ⌊α⌋ is the largest integer strictly smaller than α and C > 0. Denote the support of X as supp(X).

Define the joint distribution class P_{cv,(X,ε)} (where “cv” stands for “constant variance”) with the following standard conditions:

• (a) ε satisfies E ε = 0 and E ε² = 1.

• (b) X has density p_X and there exists a fixed positive constant C_0 such that

$$\sup_{x\in\mathbb{R}} p_X(x) \le C_0.$$

• (c) There exist two fixed constants c_0 > 0 and δ_0 > 0 such that for any 0 < δ < δ_0, there exists a set U_δ ⊆ [−1, 1] such that

$$\lambda(U_\delta) \ge c_0 \quad\text{and}\quad \inf_{u\in U_\delta} \tilde p_{X_{ij}}(u\delta) \ge c_0,$$

where λ represents the Lebesgue measure on the real line, and p̃_{X_ij} is the density of X_i − X_j.

• (d) E ε⁴ ≤ C_ε for some fixed positive constant C_ε.

The lower bound in Condition (c) above is posed on the density of X_i − X_j, since the estimator constructed below solely requires a sufficient number of close pairs. In addition, Condition (c) essentially requires this density to be “dense” around 0, and it is strictly weaker than a uniform lower bound on the density of X_i − X_j over a fixed neighborhood of 0. Moreover, no smoothness condition is placed on the density of X.

The rest of the section is devoted to proving the following minimax rate:

$$\inf_{\tilde\sigma^2}\ \sup_{f\in\Lambda_{\alpha,I}(C_f)}\ \sup_{\sigma^2\le C_\sigma}\ \sup_{P_{(X,\varepsilon)}\in\mathcal{P}_{\mathrm{cv},(X,\varepsilon)}} \mathbb{E}\big(\tilde\sigma^2 - \sigma^2\big)^2 \asymp n^{-8\alpha/(4\alpha+1)} \vee n^{-1}, \tag{7}$$

where P_{(X,ε)} denotes the joint distribution of (X, ε), and σ̃² ranges over all estimators of σ².

### 2.1 Upper bound

The upper bound is achieved by a difference-based estimator built on U-statistics (with the convention 0/0 := 0):

$$\hat\sigma^2 := \frac{\sum_{i<j} K_{h_n}(X_i - X_j)\,(Y_i - Y_j)^2/2}{\sum_{i<j} K_{h_n}(X_i - X_j)}. \tag{8}$$

Here, K_h(·) := h^{−1}K(·/h), where h_n is a bandwidth parameter satisfying h_n → 0 as n → ∞, and K(·) is a symmetric density kernel supported on [−1, 1] that satisfies

$$\underline{M}_K \le \inf_{|u|\le 1} K(u) \le \sup_{|u|\le 1} K(u) \le \overline{M}_K \tag{9}$$

for two fixed constants $\underline{M}_K$ and $\overline{M}_K$ with $0 < \underline{M}_K \le \overline{M}_K < \infty$; one example is the box kernel $K(u) = 2^{-1}\,\mathbb{1}\{|u|\le 1\}$, which satisfies (9) with $\underline{M}_K = \overline{M}_K = 1/2$.
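As a concrete illustration, the pairwise-difference estimator above can be sketched in a few lines of code with the box kernel (whose constant height cancels in the ratio). The simulation parameters below — f(x) = sin(2πx), σ = 0.5, n = 2000, and the bandwidth value — are arbitrary illustrative choices, not prescriptions from the theory:

```python
import numpy as np

def diff_variance_estimator(X, Y, h):
    """Difference-based U-statistic estimator of sigma^2 (sketch).

    Averages (Y_i - Y_j)^2 / 2 over pairs with |X_i - X_j| <= h (box kernel);
    convention 0/0 := 0 when no pair of design points is close.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    close = np.abs(X[:, None] - X[None, :]) <= h   # indicator of close pairs
    np.fill_diagonal(close, False)                 # keep only i != j
    D = (Y[:, None] - Y[None, :]) ** 2 / 2.0       # pairwise half squared differences
    denom = close.sum()
    return D[close].sum() / denom if denom > 0 else 0.0

# Simulated example (all parameters illustrative):
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0, 1, n)
Y = np.sin(2 * np.pi * X) + 0.5 * rng.standard_normal(n)
print(diff_variance_estimator(X, Y, h=0.01))  # close to sigma^2 = 0.25
```

Since only pairs with |X_i − X_j| ≤ h contribute, the mean function enters the estimate only through differences f(X_i) − f(X_j) of order h^α, which drives the bias term in the rate.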

The following error bound is derived via the exponential inequality for degenerate U-statistics due to Giné et al.

###### Theorem 1.

Suppose the kernel K(·) in (8) is chosen such that (9) is satisfied with constants $\underline{M}_K$ and $\overline{M}_K$, and the bandwidth h_n is chosen as

$$h_n \asymp \begin{cases} n^{-2/(4\alpha+1)}, & 0 < \alpha < 1/4,\\ n^{-1}, & \alpha \ge 1/4. \end{cases} \tag{10}$$

Then, under (2) with random design, it holds that

$$\sup_{f\in\Lambda_{\alpha,I}(C_f)}\ \sup_{\sigma^2\le C_\sigma}\ \sup_{P_{(X,\varepsilon)}\in\mathcal{P}_{\mathrm{cv},(X,\varepsilon)}} \mathbb{E}\big(\hat\sigma^2 - \sigma^2\big)^2 \le C\big(n^{-8\alpha/(4\alpha+1)} \vee n^{-1}\big),$$

where C is some fixed positive constant that only depends on α, C_f, C_σ, and the constants in P_{cv,(X,ε)}.

### 2.2 Lower bound

The derivation of the lower bound in (7) is much more involved. In particular, the construction in the fixed design setting (cf. Theorem 2 in Wang et al.) cannot be extended to the random design case, since the spike-type construction of f(·) located at each deterministic design point leads to a sub-optimal rate in the random design setting. To achieve a sharp rate, we have to exploit the randomness of {X_i}; this requires us to handle a highly convoluted alternative hypothesis that no longer leads to a product measure of {Y_i} given each realization of {X_i} in Le Cam's two-point method. This calls for a careful analysis of the locations of {X_i}.

We now sketch a proof of the n^{−8α/(4α+1)} component in (7) for α < 1/4, with a particular emphasis on where the difference with the fixed design setting arises. The proof can be roughly divided into two steps. In the first step, we construct a two-point testing problem with the null being a Gaussian (H_0) and the alternative a Gaussian location mixture (H̃_1). In the second step, we approximate the Gaussian location mixture (H̃_1) by a location mixture with compact support (H_1), which, unlike the alternative in the first step, belongs to the considered model class.

We start by introducing the construction of f, σ², ε, and X under the null and the alternative in the first step. For each n, let

$$h_n \asymp n^{-2/(4\alpha+1)}, \qquad \theta_n^2 \asymp h_n^{2\alpha}, \qquad\text{and}\qquad N := 1/(6h_n),$$

and divide the unit interval [0, 1] into 6N intervals of length h_n, with n large enough and h_n chosen such that N is a positive integer.

• Choice of f: Under H_0, let f ≡ 0. Under H̃_1, let f be a piecewise-trapezoidal function on the 6N intervals. That is, for each j ∈ [N], f takes the value h_n^α r̃_j on the intervals [(6j−5)h_n, (6j−1)h_n] and then linearly decreases to zero at the two endpoints (6j−6)h_n and 6jh_n, with {r̃_j}_{j=1}^N i.i.d. standard normal variables.

• Choice of σ²: Under H_0, let σ² = 1 + θ_n². Under H̃_1, let σ² = 1.

• Choice of ε: Under both H_0 and H̃_1, let ε ~ N(0, 1).

• Choice of X: Under both H_0 and H̃_1, let X be uniformly distributed over the union of the upper bases of the trapezoids, that is, over ∪_{j∈[N]} [(6j−5)h_n, (6j−1)h_n].

See Figure 1 for an illustration of the construction.

Figure 1: The black solid line represents the construction of f(·) under the alternative hypothesis H̃_1. The thick red segments indicate the support of X under both H_0 and H̃_1, on which X is uniformly distributed. Here, h_n ≍ n^{−2/(4α+1)} and n is chosen such that N := 1/(6h_n) is a positive integer. {r̃_i}_{i=1}^N are N i.i.d. standard normal variables.

In contrast to the spike-type construction of f(·) in the fixed design setting, our construction is trapezoid-shaped, which guarantees a maximal variation in the mean to compensate for the difference in the variance under the null and the alternative. This is unnecessary in the fixed design setting, since the point of maximal variation in the mean (the center of each spike) can be directly placed at each fixed X_i, resulting in evenly spaced spikes as the construction of f(·).
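To make the trapezoid construction concrete, the following sketch implements one plausible reading of the block layout described above (rising leg of width h_n, flat upper base of width 4h_n, falling leg of width h_n); the function name and all parameter values are illustrative:

```python
import numpy as np

def trapezoid_f(x, h, heights):
    """Piecewise-trapezoidal mean under the alternative (illustrative sketch).

    Block j (0-based) occupies [6*j*h, 6*(j+1)*h]: f rises linearly from 0
    over the first sub-interval of length h, stays flat at heights[j] over
    the four middle sub-intervals (the 'upper base'), and falls back to 0
    over the last. Pass heights = h**alpha * r to mimic the h_n^alpha * r_j
    heights in the construction.
    """
    x = np.asarray(x, dtype=float)
    j = np.clip((x // (6 * h)).astype(int), 0, len(heights) - 1)  # block index
    t = x - 6 * h * j                          # position within the block
    rise = np.clip(t / h, 0.0, 1.0)            # ascending leg on [0, h]
    fall = np.clip((6 * h - t) / h, 0.0, 1.0)  # descending leg on [5h, 6h]
    return np.asarray(heights)[j] * np.minimum(rise, fall)

# N = 1/(6*h) trapezoids on [0, 1]; heights h^alpha * r_j with r_j ~ N(0, 1)
h, alpha = 1 / 60, 0.2
N = int(round(1 / (6 * h)))
rng = np.random.default_rng(1)
heights = h ** alpha * rng.standard_normal(N)
# On each upper base, f equals heights[j] exactly, so an X_i landing there
# contributes a Y_i that is conditionally N(heights[j], 1):
print(trapezoid_f(np.array([3 * h]), h, heights), heights[0])
```

The key point the sketch makes visible: X is supported only on the flat upper bases, so every observation falling in trapezoid j shares the same conditional mean, producing the mixture structure in the alternative below.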

Under the above construction, conditioning on {X_i}_{i=1}^n, {Y_i}_{i=1}^n are distributed as

$$H_0:\quad P_0\big(\{Y_i\}_{i=1}^n \mid \{X_i\}_{i=1}^n\big) = \prod_{i=1}^n N(0,\, 1+\theta_n^2)$$

and

$$\tilde H_1:\quad \tilde P_1\big(\{Y_i\}_{i=1}^n \mid \{X_i\}_{i=1}^n\big) = \prod_{j=1}^N \int \Big(\prod_{\{i:\, b_i=j\}} N(h_n^\alpha v,\, 1)\Big)\,\varphi(v)\,dv,$$

where {b_i}_{i=1}^n is the location index sequence of {X_i}_{i=1}^n defined as

$$b_i := j \quad\text{if}\quad X_i \in [(6j-5)h_n,\, (6j-1)h_n].$$

Denote the joint distributions of {(X_i, Y_i)}_{i=1}^n under H_0 and H̃_1 by P_0 and P̃_1, respectively. With the aid of Lemma 2, to be stated in Section 5, one can then upper bound TV(P_0, P̃_1), which is smaller than a sufficiently small constant provided the constant factor in θ_n² is chosen sufficiently small.

The second step of the proof aims to find a sequence of compactly supported variables {r_j}_{j=1}^N to replace the standard normal sequence {r̃_j}_{j=1}^N in f, so that for each realization of {r_j}_{j=1}^N, the corresponding f in the alternative H_1 is α-Hölder smooth with a fixed constant. Then, denoting the distribution of such r_j as G, one wishes to approximate the conditional distribution in H̃_1 by

$$P_1\big(\{Y_i\}_{i=1}^n \mid \{X_i\}_{i=1}^n\big) = \prod_{j=1}^N \int \Big(\prod_{\{i:\, b_i=j\}} N(h_n^\alpha v,\, 1)\Big)\,G(dv)$$

in total variation distance. Even with the aid of moment matching techniques already established in the literature, upper bounding TV(P_1, P̃_1) is still nontrivial. Specifically, unlike in the fixed design setting, now with high probability the conditional distribution of {Y_i} given {X_i} is no longer a product measure. This is because multiple X_i's could fall into the same trapezoid in the construction of f. This can be handled relatively easily in the first step, since there we only have to analyze the pairwise correlation of Y_i and Y_j depending on whether X_i and X_j fall into the same trapezoid, but it is much less tractable in the second step. More specifically, in order to match moments, we now have to divide the Y_i's into groups based on their memberships among the N trapezoids, which naturally requires us to monitor the locations of {X_i}, and in particular the number of X_i's that fall into the same trapezoid. This is possible by observing that the memberships of {X_i} now follow a sparse multinomial distribution (N bins, n balls), so that a result in Kolchin et al. can be applied. This allows us to show that, with high probability, the maximum number of X_i's in each trapezoid is bounded by a fixed constant, which, along with Lemma 1 in Section 5, allows us to calculate

$$\mathrm{TV}(P_1, \tilde P_1) \lesssim n\,\theta_n^{2p}$$

for a positive integer p that can be made large by matching sufficiently many moments of G with those of the standard normal distribution. This indicates that TV(P_1, P̃_1) is smaller than some sufficiently small constant c.

$$\mathrm{TV}(P_0, P_1) \le \mathrm{TV}(P_0, \tilde P_1) + \mathrm{TV}(P_1, \tilde P_1) \le 2c.$$

Details of the above derivation will be given in Section 5. The resulting lower bound is as follows.

###### Theorem 2.

Under (2) with random design, it holds that

$$\inf_{\tilde\sigma^2}\ \sup_{f\in\Lambda_{\alpha,I}(C_f)}\ \sup_{\sigma^2\le C_\sigma}\ \sup_{P_{(X,\varepsilon)}\in\mathcal{P}_{\mathrm{cv},(X,\varepsilon)}} \mathbb{E}\big(\tilde\sigma^2 - \sigma^2\big)^2 \ge c\big(n^{-8\alpha/(4\alpha+1)} \vee n^{-1}\big),$$

where c is some fixed positive constant that only depends on α, C_f, C_σ, and the constants in P_{cv,(X,ε)}, and σ̃² ranges over all estimators of σ².

## 3 Heteroscedastic case

We now study the heteroscedastic model (1),

$$Y_i = f(X_i) + V^{1/2}(X_i)\,\varepsilon_i, \qquad i = 1, 2, \ldots, n,$$

where {X_i}_{i=1}^n are i.i.d. copies of X on the real line, f(·) and V(·) are α- and β-Hölder smooth on the fixed (possibly infinite) interval I, respectively, and {ε_i}_{i=1}^n are i.i.d. copies of ε with zero mean and unit variance and are independent of {X_i}_{i=1}^n. As in Section 2, the smoothness indices α and β are assumed known, while f, V, and the distribution of (X, ε) are unknown. For any estimator Ṽ, the estimation accuracy is measured both locally via

$$R_1(\tilde V, V; x_*) := \big(\tilde V(x_*) - V(x_*)\big)^2 \tag{11}$$

at a point x_* in the support of X, supp(X), and globally via

$$R_2(\tilde V, V) := \int \big(\tilde V(x) - V(x)\big)^2\, P_X(dx) \tag{12}$$

with P_X the distribution of X.

Model (1) has been studied in, for example, Müller and Stadtmüller, Hall and Carroll, Ruppert et al., Härdle and Tsybakov, Fan and Yao, and Wang et al., with a focus on the fixed design case. Theorems 1 and 2 in Wang et al. established a minimax rate of the order n^{−4α} ∨ n^{−2β/(2β+1)} under the equidistant design X_i = i/n, when f(·) and V(·) are α- and β-Hölder smooth on [0, 1].

Define the joint distribution class P_{vf,(X,ε)} (where “vf” stands for “variance function”) with the following conditions:

• (a) ε satisfies E ε = 0 and E ε² = 1.

• (b) X has density p_X, and there exists a fixed positive constant C_0 such that

$$\sup_{x\in\mathbb{R}} p_X(x) \le C_0.$$

• (c) There exist fixed positive constants c_0 and δ_0 such that

$$\inf_{x_*\in\mathrm{supp}(X)} p_X(x_*) \ge c_0 \quad\text{and}\quad \inf_{0<\delta<\delta_0}\ \inf_{x_*\in\mathrm{supp}(X)} \lambda\big(\{u\in[-1,1]:\ x_* + \delta u \in \mathrm{supp}(X)\}\big) \ge c_0,$$

where λ is the Lebesgue measure on the real line.

• (d) E ε⁴ ≤ C_ε for some fixed positive constant C_ε.

One can readily verify that P_{vf,(X,ε)} ⊆ P_{cv,(X,ε)}, with the latter defined in the beginning of Section 2. Compared to P_{cv,(X,ε)}, Condition (c) in P_{vf,(X,ε)} is posed on the marginal density and support of X, since in the variance function case we require a sufficient number of close pairs around each target point x_*. We also note that, as in P_{cv,(X,ε)}, no smoothness assumption is posed on the design density in P_{vf,(X,ε)}.

The rest of the section is devoted to proving the following minimax rates:

$$\inf_{\tilde V}\ \sup_{f\in\Lambda_{\alpha,I}(C_f)}\ \sup_{V\in\Lambda_{\beta,I}(C_V)}\ \sup_{P_{(X,\varepsilon)}\in\mathcal{P}_{\mathrm{vf},(X,\varepsilon)}}\ \sup_{x_*\in\mathrm{supp}(X)} \mathbb{E}\,R_1(\tilde V, V; x_*) \asymp n^{-\frac{8\alpha\beta}{4\alpha\beta+2\alpha+\beta}} \vee n^{-\frac{2\beta}{2\beta+1}}, \tag{13}$$

$$\inf_{\tilde V}\ \sup_{f\in\Lambda_{\alpha,I}(C_f)}\ \sup_{V\in\Lambda_{\beta,I}(C_V)}\ \sup_{P_{(X,\varepsilon)}\in\mathcal{P}_{\mathrm{vf},(X,\varepsilon)}} \mathbb{E}\,R_2(\tilde V, V) \asymp n^{-\frac{8\alpha\beta}{4\alpha\beta+2\alpha+\beta}} \vee n^{-\frac{2\beta}{2\beta+1}},$$

where P_{(X,ε)} denotes the joint distribution of (X, ε), and Ṽ ranges over all estimators of V.

### 3.1 Upper bound

We now propose an estimator of V(x_*) for some fixed x_* ∈ supp(X) by combining pairwise differences with local polynomial regression. We first introduce some notation. Let ℓ be the largest integer strictly smaller than β, and define

$$q(u) := (1,\, u,\, u^2/2!,\, \ldots,\, u^\ell/\ell!)^\top.$$

For any i ≠ j, define

$$D_{ij} := (Y_i - Y_j)^2/2, \qquad X_{ij} := (X_i + X_j)/2, \qquad\text{and}\qquad K_{ij} := K_{h_1}(X_i - X_j)\, K_{h_2}(X_{ij} - x_*),$$

where h_1, h_2 > 0 are two bandwidths. Define the (ℓ+1) × (ℓ+1) matrix

$$B_n := \binom{n}{2}^{-1} \sum_{i<j} q\Big(\frac{X_{ij}-x_*}{h_2}\Big)\, q^\top\Big(\frac{X_{ij}-x_*}{h_2}\Big)\, K_{ij},$$

and let B_n^* be its adjugate, so that B_n^* B_n = |B_n| I_{ℓ+1}. For example, when ℓ = 1, we have

$$B_n = \begin{bmatrix} s_0 & s_1\\ s_1 & s_2 \end{bmatrix}, \qquad B_n^* = \begin{bmatrix} s_2 & -s_1\\ -s_1 & s_0 \end{bmatrix}, \qquad\text{and}\qquad |B_n| = s_0 s_2 - s_1^2,$$

where

$$s_k := \binom{n}{2}^{-1} \sum_{i<j} \Big(\frac{X_{ij}-x_*}{h_2}\Big)^k K_{ij}, \qquad k = 0, 1, 2.$$

Following Fan, we propose the following robust local polynomial estimator:

$$\hat V_{\mathrm{LP}}(x_*) := \binom{n}{2}^{-1} \sum_{i<j} \frac{q^\top(0)\, B_n^*\, q\big((X_{ij}-x_*)/h_2\big)\, K_{ij}}{|B_n| + \tau_n}\, D_{ij}, \tag{14}$$

where τ_n is some sufficiently small positive constant that decays to 0 polynomially in n. Let

$$w_{ij} := \binom{n}{2}^{-1} q^\top(0)\, B_n^*\, q\Big(\frac{X_{ij}-x_*}{h_2}\Big) K_{ij} \qquad\text{and}\qquad \tilde w_{ij} := w_{ij}/(|B_n| + \tau_n).$$

Then, it holds that $\hat V_{\mathrm{LP}}(x_*) = \sum_{i<j} \tilde w_{ij} D_{ij}$ and, with τ_n set to 0, that

$$\sum_{i<j} \tilde w_{ij} \Big(\frac{X_{ij}-x_*}{h_2}\Big)^k = \mathbb{1}\{k=0\}, \qquad k = 0, 1, \ldots, \ell. \tag{15}$$

The last property (15) is referred to as the reproducing property of local polynomial estimators (cf. Proposition 1.12 in Tsybakov ).
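The estimator V̂_LP can be sketched for ℓ = 1 (local linear), where q(0)^⊤B_n^*q(u) reduces to s_2 − s_1·u. The box kernels, bandwidths, ridge value τ (playing the role of τ_n), and the simulated linear variance function below are all illustrative assumptions:

```python
import numpy as np

def local_poly_variance(X, Y, x_star, h1, h2, tau=1e-8):
    """Pairwise-difference local linear estimator of V(x_star) (sketch, ell = 1)."""
    box = lambda u: (np.abs(u) <= 1) / 2.0           # box kernel, satisfies (9)
    i, j = np.triu_indices(len(X), k=1)              # all pairs i < j
    D = (Y[i] - Y[j]) ** 2 / 2.0                     # D_ij
    Xij = (X[i] + X[j]) / 2.0                        # X_ij
    Kij = box((X[i] - X[j]) / h1) / h1 * box((Xij - x_star) / h2) / h2
    u = (Xij - x_star) / h2
    m = len(D)                                       # n choose 2
    s0, s1, s2 = Kij.sum() / m, (u * Kij).sum() / m, (u**2 * Kij).sum() / m
    detB = s0 * s2 - s1**2                           # |B_n| for ell = 1
    w = (s2 - s1 * u) * Kij / m                      # q(0)^T B_n^* q(u) K_ij / m
    return (w * D).sum() / (detB + tau)              # sum of w-tilde_ij * D_ij

# Simulated example with a linear variance function (illustrative parameters):
rng = np.random.default_rng(2)
n = 2000
X = rng.uniform(0, 1, n)
V = lambda x: 0.5 + 0.25 * x
Y = np.sin(2 * np.pi * X) + np.sqrt(V(X)) * rng.standard_normal(n)
print(local_poly_variance(X, Y, x_star=0.5, h1=0.02, h2=0.2))  # roughly V(0.5) = 0.625
```

The two kernels play distinct roles: K_{h_1} restricts to close pairs so the mean contributes only O(h_1^α) to each D_ij, while K_{h_2} localizes the regression of D_ij on X_ij around x_*.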

###### Theorem 3.

Suppose the kernel K(·) in K_{ij} is chosen such that (9) holds with constants $\underline{M}_K$ and $\overline{M}_K$, τ_n ≍ n^{−γ} for some fixed constant γ > 0, and the bandwidths are chosen as

$$(h_1, h_2) \asymp \begin{cases} \Big(n^{-\frac{2\beta}{4\alpha\beta+2\alpha+\beta}},\; n^{-\frac{4\alpha}{4\alpha\beta+2\alpha+\beta}}\Big), & 0 < \alpha < \frac{\beta}{4\beta+2},\\[4pt] \Big(n^{-1},\; n^{-\frac{1}{2\beta+1}}\Big), & \alpha \ge \frac{\beta}{4\beta+2}. \end{cases} \tag{16}$$

Then, under (1) with random design, it holds that

$$\sup_{f\in\Lambda_{\alpha,I}(C_f)}\ \sup_{V\in\Lambda_{\beta,I}(C_V)}\ \sup_{P_{(X,\varepsilon)}\in\mathcal{P}_{\mathrm{vf},(X,\varepsilon)}}\ \sup_{x_*\in\mathrm{supp}(X)} \mathbb{E}\,R_1(\hat V_{\mathrm{LP}}, V; x_*) \le C\Big(n^{-\frac{8\alpha\beta}{4\alpha\beta+2\alpha+\beta}} \vee n^{-\frac{2\beta}{2\beta+1}}\Big)$$

and

$$\sup_{f\in\Lambda_{\alpha,I}(C_f)}\ \sup_{V\in\Lambda_{\beta,I}(C_V)}\ \sup_{P_{(X,\varepsilon)}\in\mathcal{P}_{\mathrm{vf},(X,\varepsilon)}} \mathbb{E}\,R_2(\hat V_{\mathrm{LP}}, V) \le C\Big(n^{-\frac{8\alpha\beta}{4\alpha\beta+2\alpha+\beta}} \vee n^{-\frac{2\beta}{2\beta+1}}\Big),$$

where C is some fixed positive constant that only depends on α, β, C_f, C_V, and the constants in