Introduction
The nearest neighbor (NN) regression algorithm is a classical approach to nonparametric regression: the value of the function at a query point is estimated by the unweighted average of the observations at the closest samples. Although this procedure has been known for a long time and is of deep practical significance, surprisingly much about its convergence properties remains to be understood.
We derive finite-sample, high-probability uniform bounds for NN regression under a standard additive model $y = f(x) + \epsilon$, where $f$ is an unknown function, $\epsilon$ is sub-Gaussian white noise, and $y$ is the noisy observation. The samples are drawn i.i.d. as follows: each $x_i$ is drawn according to an unknown density $p_X$, which shares the same support as $f$, and then the observation $y_i$ is generated by the additive model based on $x_i$. We then give simple procedures to estimate the level sets and global maxima of a function given noisy observations, and apply the NN regression bounds to establish new Hausdorff recovery guarantees for these structures. Each of these results is interesting in its own right.
The bulk of the work on NN regression convergence theory concerns its properties under various risk measures or asymptotic convergence. Notions of consistency involving risk measures such as mean squared error are considerably weaker than the sup-norm, as the latter imposes a uniform guarantee on the error $\|\hat{f} - f\|_\infty$, where $\hat{f}$ is the NN regression estimate of the function $f$. Existing work studying NN regression under the sup-norm is thus far asymptotic. We give the first sup-norm finite-sample result, and it matches the minimax optimal rate up to logarithmic factors.
We then discuss the setting where the data lies on a lower-dimensional manifold. It is already known that NN regression automatically adapts to the intrinsic dimension under various risk measures: the rates depend only on the intrinsic dimension and are independent of the ambient dimension. We show that this is also the case in the sup-norm: we attain finite-sample bounds as if we were operating in the lower-dimensional space, without any modifications to the procedure.
We then show the utility of our NN regression results in recovering certain structures of an arbitrary function, namely its level sets and global maxima. The motivation can be traced back to the rich theory of density-based clustering. There, one is given a finite sample from a probability density $p$. The clusters can then be modeled based on certain structures in the underlying density $p$, such as the level sets $\{x : p(x) \ge \lambda\}$ for some density level $\lambda$, or the local maxima of $p$. To estimate these, one typically uses a plug-in approach based on a density estimator $\hat{p}$ (e.g. $\{x : \hat{p}(x) \ge \lambda\}$ for level sets, and the maximizers of $\hat{p}$ for modes). It turns out that given uniform bounds on $\|\hat{p} - p\|_\infty$, we can estimate these structures with strong guarantees.
In this paper, instead of estimating these structures for a density, we estimate them for a general function $f$. This is possible because of our established finite-sample sup-norm bounds for nonparametric regression. There are, however, some key differences in our setting. In the density setting, one has access to i.i.d. samples drawn from the density itself. Here, we have an i.i.d. sample drawn from some density not necessarily related to $f$, and then we obtain a noisy observation of the value of $f$ at each sample point. In other words, we estimate the structures based on the features of the data, while in the density setting there are no features and the structures are instead based on the dense regions of the dataset.
Related Work and Contributions
NN Regression Rates
The consistency properties of NN regression have been studied for a long time; we highlight some of the work here. Biau et al. (2010) give guarantees under the $L_2$ risk. Devroye et al. (1994) give strong consistency guarantees. Stone (1977) provides results under the $L_p$ risk for $p \ge 1$. All these notions of consistency are under some integrated risk, and thus are weaker than the sup-norm (i.e. $L_\infty$), which imposes a uniform guarantee.
A number of works, such as Mack and Silverman (1982); Cheng (1984); Devroye (1978); Lian et al. (2011); Kudraszow and Vieu (2013), give strong uniform convergence rates. However, these results are asymptotic. Our bounds address the finite-sample consistency properties of NN regression, which, as we demonstrate later, yield strong results about NN-based learning algorithms that were not possible with existing results. To the best of our knowledge, this is the first such finite-sample uniform consistency result for this procedure, and it matches the minimax rate up to logarithmic factors.
We then extend our results to the setting where the data lies on a lower-dimensional manifold. This is of practical interest because the curse of dimensionality forces nonparametric methods such as NN to require a sample complexity exponential in the dimension; however, many of these methods can be shown to have sample complexity depending only on the intrinsic dimension (e.g. doubling dimension, manifold dimension, covering number) and independent of the ambient dimension. In modern data applications, where the dimension can be arbitrarily high, the number of degrees of freedom often remains much lower. It thus becomes important to understand these methods in this setting.
Kulkarni and Posner (1995) give results for NN regression based on the covering numbers of the support of the distribution. Kpotufe (2011) shows that NN regression actually adapts to the local intrinsic dimension, without any modifications to the procedure or data, under the $L_2$ norm. In this paper, we show that this holds in the sup-norm as well, for a global intrinsic dimension.
Level Set Estimation
Density level-set estimation has been extensively studied and has significant implications for density-based clustering; some works include Tsybakov et al. (1997); Singh et al. (2009). It involves estimating $L_\lambda := \{x : p(x) \ge \lambda\}$ given a finite i.i.d. sample from $p$, where $\lambda$ is some known density level and $p$ is the unknown density. $L_\lambda$ can be seen as the high-density regions of the data, and thus its connected components can be used as the core-sets in clustering. It can be shown that given a density estimator $\hat{p}$ with guarantees on $\|\hat{p} - p\|_\infty$, then taking $\hat{L}_\lambda := \{x : \hat{p}(x) \ge \lambda\}$, the Hausdorff distance between $L_\lambda$ and $\hat{L}_\lambda$ can also be bounded.
In this paper, we extend this idea to functions $f$ which are not necessarily densities, given noisy observations of $f$. We obtain results similar to those familiar from the density setting, made possible by our established sup-norm bounds for estimating $f$. An advantage of this approach is that it can be applied to clustering with features, where clusters are defined as regions of similar feature value rather than of similar density. In density-based clustering, it is typical that one does not assume access to the features, and thus such procedures fail to readily take advantage of the features when performing clustering. A similar approach was taken by Willett and Nowak (2007), who used nonparametric regression to estimate the level sets of a function; however, our consistency results are instead under the Hausdorff metric.
Global Maxima Estimation
We next give an interesting result for estimating the global maximum of a function. Given i.i.d. samples from some distribution on the input space and noisy observations of $f$ at those samples, we show a guarantee on the distance between the sample point with the highest NN regression value and the (unique) point which maximizes $f$. This gives us insight into how well a grid search or randomized search can estimate the maximum of a function.
This result can be compared to mode estimation in the density setting, where the objective is to find the point which maximizes the density function (Tsybakov, 1990). Dasgupta and Kpotufe (2014) show that given draws from a density, the sample point which maximizes the NN density estimator is close to the true maximizer of the density; moreover, they give finite-sample rates. Earlier works such as Romano (1988) provide asymptotic rates.
NN Regression
Throughout the paper, we assume a function $f$ with compact support $\mathcal{X} \subseteq \mathbb{R}^D$ and that we have $n$ datapoints drawn as follows. The $x_i$'s are drawn i.i.d. from a density $p_X$ with support $\mathcal{X}$. Then $y_i = f(x_i) + \epsilon_i$, where the $\epsilon_i$ are drawn i.i.d. according to a noise random variable $\epsilon$.

Definition 1.
$B(x, r) := \{x' \in \mathbb{R}^D : \|x' - x\| \le r\}$, where $\mathcal{X} \subseteq \mathbb{R}^D$ is compact.
The first regularity assumption ensures that the support does not become arbitrarily thin anywhere. Otherwise, it becomes impossible to estimate the function in such areas from a random sample.
Assumption 1 (Support Regularity).
There exist $\lambda_0 > 0$ and $r_0 > 0$ such that $\mathrm{vol}(B(x, r) \cap \mathcal{X}) \ge \lambda_0 \, \mathrm{vol}(B(x, r))$ for all $x \in \mathcal{X}$ and $0 < r \le r_0$.
The next assumption ensures that with a sufficiently large sample, we will obtain a good covering of the input space.
Assumption 2 ($p_X$ bounded from below).
There exists $p_0 > 0$ such that $p_X(x) \ge p_0$ for all $x \in \mathcal{X}$.
Finally, we have a standard sub-Gaussian white noise assumption in our additive model.
Assumption 3 (Sub-Gaussian White Noise).
$\epsilon$ satisfies $\mathbb{E}[\epsilon] = 0$ and is sub-Gaussian with parameter $\sigma^2$ (i.e. $\mathbb{E}[\exp(t\epsilon)] \le \exp(\sigma^2 t^2 / 2)$ for all $t \in \mathbb{R}$).
We then define NN regression as follows.
Definition 2 (NN regression).
Let the $k$-NN radius of $x$ be $r_k(x) := \inf\{r > 0 : |B(x, r) \cap \{x_1, \dots, x_n\}| \ge k\}$, and the $k$-NN set of $x$ be $N_k(x) := B(x, r_k(x)) \cap \{x_1, \dots, x_n\}$. Then for all $x \in \mathcal{X}$, the NN regression function with respect to the samples is defined as
$$\hat{f}(x) := \frac{1}{k} \sum_{i : x_i \in N_k(x)} y_i.$$
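In code, the estimator can be sketched as follows (an illustrative implementation, not from the paper; the helper name `knn_regress` is ours, and distance ties are broken arbitrarily by sort order):

```python
import numpy as np

def knn_regress(x, X, y, k):
    """NN regression estimate at query point x: the unweighted
    average of the y-values of the k nearest sample points."""
    dists = np.linalg.norm(X - x, axis=1)   # distances to all samples
    nn_idx = np.argsort(dists)[:k]          # indices of the k nearest
    return y[nn_idx].mean()                 # unweighted average

# Tiny usage example with noiseless observations of f(x) = x.
X = np.array([[0.0], [0.1], [0.2], [0.9], [1.0]])
y = X[:, 0].copy()
est = knn_regress(np.array([0.05]), X, y, k=3)
```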
Next, we define the following pointwise modulus of continuity, which will be used to express the bias for an arbitrary function in later results.
Definition 3 (Modulus of continuity).
$\omega_f(x, r) := \sup_{x' \in B(x, r) \cap \mathcal{X}} |f(x) - f(x')|$.
We now state our main result about NN regression. Informally, it says that under the mild assumptions described above, $|\hat{f}(x) - f(x)|$ is bounded by a bias term plus a variance term, uniformly in $x \in \mathcal{X}$ with high probability.

The first term corresponds to the bias. Using uniform VC-type concentration bounds, it can be shown that the NN radius is uniformly bounded by approximately a $(k/n)^{1/D}$ distance, and hence no point in the NN set is farther away than that. The bias can then be expressed in terms of that distance and the modulus of continuity of $f$.

The second term corresponds to the variance. The $1/\sqrt{k}$ factor is not surprising, since the noise terms are averaged over $k$ observations, and the extra $\sqrt{\log n}$ factor comes from the cost of obtaining a uniform bound.

Definition 4.
Let $v_D$ be the volume of a $D$-dimensional unit ball.
Theorem 1 (NN Regression Rate).
Note that the above result is fairly general and makes no smoothness assumptions; in particular, $f$ need not even be continuous. It is also important to point out that $n$ must be sufficiently large in order for there to exist a $k$ that satisfies the conditions. We can then apply this result to the class of Hölder continuous functions to obtain the following.
Corollary 1 (Rate for Hölder continuous functions).
Remark 1.
Taking $k \asymp n^{2\alpha/(2\alpha + D)}$ gives us a rate of $\widetilde{O}(n^{-\alpha/(2\alpha + D)})$,
which is the minimax optimal rate for estimating an $\alpha$-Hölder function, up to logarithmic factors.
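As an informal sanity check (ours, not part of the formal statement; constants and logarithmic factors are suppressed, and we assume the bias term scales as the NN radius raised to the power $\alpha$ for an $\alpha$-Hölder $f$), balancing the bias and variance terms recovers this choice of $k$:

```latex
% Bias ~ (k/n)^{\alpha/D}, variance ~ k^{-1/2} (log factors dropped).
\left(\frac{k}{n}\right)^{\alpha/D} \asymp \frac{1}{\sqrt{k}}
\;\Longrightarrow\;
k^{\frac{\alpha}{D} + \frac{1}{2}} \asymp n^{\frac{\alpha}{D}}
\;\Longrightarrow\;
k \asymp n^{\frac{2\alpha}{2\alpha + D}},
\qquad
\frac{1}{\sqrt{k}} \asymp n^{-\frac{\alpha}{2\alpha + D}}.
```

Plugging the balanced choice of $k$ back into either term gives the stated rate.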
Remark 2.
It is understood that all our results also hold when the $x_i$'s are fixed and deterministic (e.g. on a grid), as long as there is a sufficient covering of the space.
Regression On Manifolds
In this section, we show that if the data has a lower intrinsic dimension, then NN regression automatically attains rates as if it were operating in the lower-dimensional space, independent of the ambient dimension.
We make the following regularity assumptions which are standard among works in manifold learning e.g. (Genovese et al., 2012; Balakrishnan et al., 2013).
Assumption 4.
$p_X$ is supported on $M$, where:

$M$ is a $d$-dimensional smooth compact Riemannian manifold without boundary, embedded in a compact subset $\mathcal{X} \subseteq \mathbb{R}^D$.

The volume of $M$ is bounded above by a constant.

$M$ has condition number $1/\tau$, which controls the curvature and prevents self-intersection.

Let $p_X$ be the density of $X$ with respect to the uniform measure on $M$.
Theorem 2 (NN Regression Rate).
Similar to the full-dimensional case, we can then apply this to the class of Hölder continuous functions.
Corollary 2 (Rate for Hölder continuous functions).
Remark 3.
Taking $k \asymp n^{2\alpha/(2\alpha + d)}$ gives us a rate of $\widetilde{O}(n^{-\alpha/(2\alpha + d)})$, which is more attractive than the full-dimensional version when the intrinsic dimension $d$ is lower than the ambient dimension $D$. We note that the bound contains a constant factor depending on $D$, but the rate at which the bound decreases as $n$ grows does not depend on $D$.
Level Set Estimation
The level set is the region of the input space whose function value exceeds a fixed threshold.
Definition 5 (Level Set).
$L_f(\lambda) := \{x \in \mathcal{X} : f(x) \ge \lambda\}$.
In order to estimate the level sets, we require the following regularity assumption. It states that for each maximal connected component of the level set, the change in the function around the boundary has a Lipschitz-like form, with smoothness and curvature parameters, within some neighborhood of the boundary. This notion of regularity at the boundaries of the level sets is a standard one in density level-set estimation, e.g. Tsybakov et al. (1997); Singh et al. (2009).
Definition 6 (LevelSet Regularity).
Let , be the boundary of , and . A function satisfies regularity at level if the following holds. There exists such that for each maximal connected subset , we have
for all .
Remark 4.
The upper bound ensures that $f$ is sufficiently smooth so that NN regression gives sufficiently accurate estimates near the boundaries. The lower bound ensures that the level set is salient enough to be detected.
To recover the level set based on the samples, we use the following estimator.

It will become clear later in the proofs that the quantity above is meant to be an upper bound on the noise level, and is thus an upper bound on twice the variance term of the NN bound.

There are three simple but key differences between our estimator and the true level set. First, since we do not have access to the true function $f$, we use the NN regression estimate $\hat{f}$. Second, instead of taking all $x \in \mathcal{X}$, we restrict to the samples $x_1, \dots, x_n$; this makes our estimator feasible to compute, since it is a subset of the sample points. Finally, we include a slack term to bound the uniform deviation of $\hat{f}$ near the boundary of the level set (as will be apparent in the proof). The main difficulty is choosing the slack large enough to bound this uniform deviation, but not so large as to overestimate the level set, while ensuring that it can be computed without knowledge of $f$ or any unknown constants (we only need the confidence parameter and the dimension). Thus, our estimator is practical.
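A minimal sketch of this kind of plug-in estimator (our own illustrative code, not the paper's exact procedure: `knn_regress` is a helper we define here, and we assume the slack `beta` is supplied by the user and subtracted from the threshold):

```python
import numpy as np

def knn_regress(x, X, y, k):
    """NN regression estimate: average y over the k nearest samples."""
    dists = np.linalg.norm(X - x, axis=1)
    return y[np.argsort(dists)[:k]].mean()

def levelset_estimate(X, y, lam, k, beta):
    """Plug-in level-set estimate: the sample points whose NN
    regression value clears the threshold lam, lowered by slack beta."""
    fhat = np.array([knn_regress(x, X, y, k) for x in X])
    return X[fhat >= lam - beta]

# Usage: noiseless f(x) = x on [0, 1]; true level set is {x : x >= 0.5}.
X = np.linspace(0, 1, 101).reshape(-1, 1)
y = X[:, 0].copy()
Lhat = levelset_estimate(X, y, lam=0.5, k=3, beta=0.0)
```

With noisy observations, `beta` would be set to an upper bound on the uniform deviation of the regression estimate, in the spirit described above.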
We provide a consistency result under the Hausdorff metric. We note that this is a strong notion of consistency, since it is a uniform guarantee on the constituents of our estimator.
Definition 7 (Hausdorff Distance).
$d_H(A, B) := \max\{\sup_{a \in A} d(a, B),\ \sup_{b \in B} d(b, A)\}$, where $d(x, C) := \inf_{c \in C} \|x - c\|$.
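For finite point sets, the suprema and infima in the Hausdorff distance reduce to max and min, so it can be computed directly (an illustrative helper, not from the paper):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets given as
    arrays of shape (m, D) and (n, D)."""
    # Pairwise distances: entry (i, j) is ||A[i] - B[j]||.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # max over points of the distance to the nearest point of the
    # other set, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])
```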
The next result gives us finite-sample consistency rates for our estimator.
Theorem 3 (Level Set Recovery).
Remark 5.
Although the statement may appear obfuscated, it essentially says that as long as $f$ is a continuous function satisfying regularity at level $\lambda$, then if $k$ lies within the following range:
then with high probability,
Remark 6.
Choosing $k$ at the optimal setting, it follows that we recover the level sets at the corresponding Hausdorff rate. This can be compared to the lower bound established by Tsybakov et al. (1997) for estimating the level sets of an unknown density.
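Informally, the mechanism behind such a Hausdorff rate can be sketched as follows (a heuristic of ours, with constants dropped; $L_f(\lambda)$ denotes the $\lambda$-level set of $f$, $\beta$ the regularity exponent, and $\varepsilon_n$ the sup-norm error of $\hat{f}$). If $|f(x) - \lambda|$ grows at least like the $\beta$-th power of the distance to the boundary, then any point whose classification can be flipped by an $\varepsilon_n$-perturbation of $f$ must satisfy

```latex
d\big(x,\ \partial L_f(\lambda)\big)^{\beta} \lesssim |f(x) - \lambda| \le \varepsilon_n
\quad\Longrightarrow\quad
d\big(x,\ \partial L_f(\lambda)\big) \lesssim \varepsilon_n^{1/\beta},
```

so a sup-norm rate of $\varepsilon_n$ heuristically translates into a Hausdorff rate of $\varepsilon_n^{1/\beta}$ for the level-set estimate.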
We can give a similar result when the data lies on a lower-dimensional manifold. Interestingly, we can use the exact same estimator as before, as if we were operating in the full-dimensional space.
Theorem 4 (Level Set Recovery on Manifolds).
Remark 7.
The main difference from the full-dimensional version is that we need $k$ to satisfy
Choosing $k$ at the optimal setting, we recover the level sets at a rate in which the ambient dimension $D$ is replaced by the intrinsic dimension $d$.
Remarkably, we obtain the rate as if we were operating in the lower-dimensional space. This has not been shown for level-set estimation on manifolds for density functions (which is a different problem).
The rate for density functions under similar regularity assumptions (Jiang, 2017) is slower. In other words, we escape the curse of dimensionality with regression level-set estimation, but do not escape it for density level-set estimation.
Global Maxima Estimation
In this section, we give guarantees on estimating the global maximum of $f$.
Definition 8.
A point $x_0$ is a maximum of $f$ if $f(x_0) \ge f(x)$ for all $x \in B(x_0, r)$, for some $r > 0$.
We then make the following assumption, which states that $f$ has a unique maximum, at which it has a negative-definite Hessian.
Assumption 5.
$f$ has a unique maximum $x_0$, and $f$ has a negative-definite Hessian at $x_0$.
This assumption leads to the following lemma, which states that $f$ has quadratic smoothness and decay around $x_0$.
Lemma 1 (Dasgupta and Kpotufe (2014)).
Let $f$ satisfy Assumption 5. Then there exist constants $\check{C}, \hat{C}, r_0 > 0$ such that
$$\check{C}\,\|x - x_0\|^2 \le f(x_0) - f(x) \le \hat{C}\,\|x - x_0\|^2$$
for all $x \in A \cap B(x_0, r_0)$, where $A$ is a connected component of a level set of $f$ containing $x_0$.
We utilize the following estimator, which is the maximizer of $\hat{f}$ amongst the sample points: $\hat{x} := \arg\max_{x \in \{x_1, \dots, x_n\}} \hat{f}(x)$.
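A sketch of this estimator (illustrative; it simply evaluates the NN regression estimate at every sample point and returns the maximizer, with `knn_regress` a helper we define here):

```python
import numpy as np

def knn_regress(x, X, y, k):
    """NN regression estimate: average y over the k nearest samples."""
    dists = np.linalg.norm(X - x, axis=1)
    return y[np.argsort(dists)[:k]].mean()

def argmax_estimate(X, y, k):
    """Return the sample point that maximizes the NN regression value."""
    fhat = np.array([knn_regress(x, X, y, k) for x in X])
    return X[int(np.argmax(fhat))]

# Usage: noiseless f(x) = -(x - 0.3)^2, uniquely maximized at x0 = 0.3.
X = np.linspace(0, 1, 201).reshape(-1, 1)
y = -(X[:, 0] - 0.3) ** 2
xhat = argmax_estimate(X, y, k=5)
```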
We next give a result on the accuracy of $\hat{x}$ in estimating the true maximum.
Theorem 5.
Remark 8.
Taking $k \asymp n^{4/(4+D)}$ optimizes the above expression, so that $\|\hat{x} - x_0\| = \widetilde{O}(n^{-1/(4+D)})$. This can be compared to the minimax rate for mode estimation established by Tsybakov (1990). We stress, however, that estimating the mode of a density function is a different problem.
Remark 9.
An analogue for the global minimum also holds. Moreover, in the manifold setting, we can obtain a rate of $\widetilde{O}(n^{-1/(4+d)})$, which has not been shown for mode estimation in densities.
Proofs
Proof of Theorem 1
The following lemma bounds the NN radius uniformly in $x \in \mathcal{X}$.
Lemma 2.
The following holds with probability at least . If
then .
Proof.
Let . We have . By Lemma 7 of Chaudhuri and Dasgupta (2010) and the condition on , it follows that with probability , uniformly in , . Hence, and the result follows immediately. ∎
The next result bounds the number of distinct NN sets over .
Lemma 3.
Let $M$ be the number of distinct NN sets over $\mathbb{R}^D$, that is, $M := |\{N_k(x) : x \in \mathbb{R}^D\}|$. Then $M$ is at most polynomial in $n$, with degree depending only on $D$.
Proof.
First, let $\mathcal{A}$ be the partitioning of $\mathbb{R}^D$ induced by the hyperplanes defined as the perpendicular bisectors of each pair of points $x_i, x_j$ for $i \ne j$. Let us denote this set of hyperplanes by $\mathcal{H}$. We have that if $x, x'$ are in the same cell of $\mathcal{A}$, then $N_k(x) = N_k(x')$: if not, then any path from $x$ to $x'$ must cross some perpendicular bisector in $\mathcal{H}$, which would be a contradiction. Thus, $M \le |\mathcal{A}|$.
Now we will bound $|\mathcal{A}|$, the number of cells of the partition induced by the set $\mathcal{H}$ of perpendicular-bisector hyperplanes. Since $\mathcal{H}$ is finite, choose vectors $v_1, \dots, v_D$ forming an orthogonal basis of $\mathbb{R}^D$ such that none of these vectors is perpendicular to any hyperplane in $\mathcal{H}$. Let $v_1, \dots, v_D$ induce hyperplanes $H_1, \dots, H_D$, respectively (i.e. $H_i$ is the orthogonal complement of $v_i$). Without loss of generality, orient the space so that $v_1$ is the vertical direction (so that we can use descriptions such as 'above' and 'below'). Associate each region in $\mathcal{A}$ that is bounded below with its lowest point. It follows that there are at most $\binom{|\mathcal{H}|}{D}$ such regions, since each lowest point is an intersection of $D$ hyperplanes. We next count the regions unbounded below. Place $H_1$ below the lowest point over all the regions in $\mathcal{A}$ that are bounded below. Then the regions unbounded below correspond to regions of the arrangement restricted to $H_1$; it thus remains to count the latter.

We now orient the space so that $v_2$ corresponds to the vertical direction. Then we can repeat the same procedure on $H_1$: each region bounded below is associated with its lowest point, of which there are at most $\binom{|\mathcal{H}|}{D-1}$, since each is an intersection of $D-1$ hyperplanes in $\mathcal{H}$ along with $H_1$; then, placing $H_2$ sufficiently low, the remaining regions correspond to regions on $H_1 \cap H_2$.

Continuing this process, when we orient $v_j$ to be the vertical direction, the number of regions bounded below is at most $\binom{|\mathcal{H}|}{D-j+1}$, and the remaining ones correspond to regions on $H_1 \cap \cdots \cap H_j$.

It thus follows that $|\mathcal{A}| \le \sum_{j=0}^{D} \binom{|\mathcal{H}|}{j}$, which is polynomial in $n$ since $|\mathcal{H}| \le \binom{n}{2}$, as desired. ∎
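The flavor of this counting argument can be checked empirically: over many random queries, the number of distinct NN sets stays far below the naive $\binom{n}{k}$ count of all possible $k$-subsets. A small simulation (ours, with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, D = 30, 3, 2
X = rng.random((n, D))              # n sample points in [0, 1]^2

# Collect the k-NN set (as a frozenset of indices) for many queries.
queries = rng.random((20000, D))
seen = set()
for q in queries:
    d = np.linalg.norm(X - q, axis=1)
    seen.add(frozenset(np.argsort(d)[:k]))
num_distinct = len(seen)            # far below comb(30, 3) = 4060
```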
Proof of Theorem 1.
We have
$$|\hat{f}(x) - f(x)| \le \frac{1}{k}\sum_{x_i \in N_k(x)} |f(x_i) - f(x)| + \Big|\frac{1}{k}\sum_{x_i \in N_k(x)} \epsilon_i\Big|.$$
The first term can be viewed as the bias term and the second as the variance term.
By Lemma 2, we can bound the first term as follows with probability at least uniformly in : . For the variance term, we have by Hoeffding’s inequality that if then .
Taking the deviation parameter appropriately, we obtain the variance bound for a fixed NN set. By Lemma 3 and a union bound, it holds simultaneously over all distinct NN sets. Hence, the stated bound holds with high probability, uniformly in $x \in \mathcal{X}$. ∎
It is easy to see that a simple modification to the proof of Theorem 1 will yield the following.
Proof of Theorem 2
We need the following guarantee on the volume of the intersection of a Euclidean ball and $M$; this is required to get a handle on the true mass of the ball under $p_X$ in later arguments. The proof can be found in Jiang (2017).
Lemma 4 (Ball Volume).
If , and then
where the volume is taken w.r.t. the uniform measure on $M$.
The next lemma is the manifold analogue of Lemma 2.
Lemma 5.
Proof.
Let . We have
By Lemma 7 of Chaudhuri and Dasgupta (2010) and the condition on , it follows that with probability , uniformly in , . Hence, and the result follows immediately. ∎
Theorem 2 now follows by replacing the usage of Lemma 2 with Lemma 5. We note that an analogue of Corollary 4 can also be established.
Proofs of Theorems 3 and 4
Proof of Theorem 3.
Thus, when $n$ is sufficiently large (depending on the relevant constants), we have by Bernstein-type concentration inequalities that the stated bound holds with high probability.
Let and let us use the notation introduced in Corollary 4. It suffices to show that (1) and (2) . We begin with (1). We have
where the first inequality holds by Corollary 4, the second-to-last inequality holds by regularity, and the last inequality holds by the conditions on $k$. Thus, the desired containment follows, which establishes (1).
We now show (2). Let . Since , it suffices to show that . For any , we have
where the last inequality holds by the conditions on . Hence, by Lemma 7 of Chaudhuri and Dasgupta (2010), we have . Thus, for any , there exists a sample point in