Rates of Uniform Consistency for k-NN Regression

07/19/2017 ∙ by Heinrich Jiang, et al.

We derive high-probability finite-sample uniform rates of consistency for k-NN regression that are optimal up to logarithmic factors under mild assumptions. We moreover show that k-NN regression adapts to an unknown lower intrinsic dimension automatically. We then apply the k-NN regression rates to establish new results about estimating the level sets and global maxima of a function from noisy observations.


Introduction

The k-nearest neighbor (k-NN) regression algorithm is a classical approach to nonparametric regression. The estimate at a query point is taken to be the unweighted average of the observations at the k closest samples. Although this procedure has been known for a long time and is of deep practical significance, surprisingly much about its convergence properties remains to be understood.

We derive finite-sample, high-probability uniform bounds for k-NN regression under a standard additive model y = f(x) + ε, where f is an unknown function, ε is sub-Gaussian white noise, and y is the noisy observation. The samples (x_1, y_1), ..., (x_n, y_n) are drawn i.i.d. as follows: x_i is drawn according to an unknown density p_X, which shares the same support as f, and the observation y_i is then generated by the additive model based on x_i.
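To make the setup concrete, here is a minimal data-generation sketch in Python; the particular choice of f, the uniform sampling density, and the Gaussian noise are illustrative assumptions of ours, not taken from the paper.

```python
# Minimal sketch of the additive model: x_i ~ p_X i.i.d., y_i = f(x_i) + eps_i,
# with sub-Gaussian (here Gaussian) noise. f, p_X, and sigma are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # illustrative unknown regression function on [0, 1]^2
    return np.sin(2 * np.pi * x[:, 0]) + x[:, 1] ** 2

n, D, sigma = 2000, 2, 0.3
X = rng.uniform(0.0, 1.0, size=(n, D))   # x_i drawn from p_X (uniform here)
y = f(X) + sigma * rng.normal(size=n)    # y_i = f(x_i) + eps_i
```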

We then give simple procedures to estimate the level sets and global maxima of a function from noisy observations, and apply the k-NN regression bounds to establish new Hausdorff recovery guarantees for these structures. Each of these results is interesting in its own right.

The bulk of the work on k-NN regression convergence theory concerns its properties under various risk measures or its asymptotic convergence. Notions of consistency involving risk measures such as mean squared error are considerably weaker than the sup-norm, as the latter imposes a uniform guarantee on the error sup_x |f_k(x) − f(x)|, where f_k is the k-NN regression estimate of the function f. Existing work studying f_k under the sup-norm has thus far been asymptotic. We give the first finite-sample sup-norm result, which matches the minimax optimal rate up to logarithmic factors.

We then discuss the setting where the data lies on a lower-dimensional manifold. It is already known that k-NN regression automatically adapts to the intrinsic dimension under various risk measures: the rates depend only on the intrinsic dimension and are independent of the ambient dimension. We show that this is also the case in the sup-norm: we attain finite-sample bounds as if we were operating in the lower-dimensional space, without any modifications to the procedure.

We then show the utility of our k-NN regression results in recovering certain structures of an arbitrary function f, namely its level-sets and global maxima. The motivation can be traced back to the rich theory of density-based clustering. There, one is given a finite sample from a probability density p. The clusters can then be modeled based on certain structures in the underlying density p. Such structures include the level-sets {x : p(x) ≥ λ} for some density level λ, or the local maxima of p. To estimate these, one typically uses a plug-in approach with a density estimator p̂ (e.g. {x : p̂(x) ≥ λ} for level-sets, and the maximizers of p̂ for modes). It turns out that given uniform bounds on |p̂ − p|, we can estimate these structures with strong guarantees.

In this paper, instead of estimating these structures in a density, we estimate them for a general function f. This is possible because of our established finite-sample sup-norm bounds for nonparametric regression. There are, however, some key differences in our setting. In the density setting, one has access to an i.i.d. sample drawn from the density. Here, we have an i.i.d. sample drawn from some density p_X not necessarily related to f, and we then obtain a noisy observation of the value f(x_i), which can be viewed as a noisy observation of a feature of x_i. In other words, we estimate the structures based on the features of the data, while in the density setting there are no features and the structures are instead based on the dense regions of the dataset.

Related Works and Contributions

k-NN Regression Rates

The consistency properties of k-NN regression have been studied for a long time, and we highlight some of this work here. Biau et al. (2010), Devroye et al. (1994), and Stone (1977) give consistency guarantees under integrated risk measures such as the L_p risks. All of these notions of consistency are thus weaker than the sup-norm (i.e. L_∞), which imposes a uniform guarantee.

A number of works, such as Mack and Silverman (1982); Cheng (1984); Devroye (1978); Lian et al. (2011); Kudraszow and Vieu (2013), give strong uniform convergence rates. However, these results are asymptotic. Our bounds address the finite-sample uniform consistency of k-NN regression; as we demonstrate later, they yield results about k-NN based learning algorithms which were not possible with existing results. To the best of our knowledge, this is the first such finite-sample uniform consistency result for this procedure, and it matches the minimax rate up to logarithmic factors.

We then extend our results to the setting where the data lies on a lower-dimensional manifold. This is of practical interest because the curse of dimensionality forces nonparametric methods such as k-NN to require an exponential-in-dimension sample complexity; however, many of these methods can be shown to have sample complexity depending only on the intrinsic dimension (e.g. doubling dimension, manifold dimension, covering number) and independent of the ambient dimension. In modern data applications the ambient dimension can be arbitrarily high, yet oftentimes the number of degrees of freedom remains much lower. It thus becomes important to understand these methods in this setting.

Kulkarni and Posner (1995) give results for k-NN regression based on the covering numbers of the support of the distribution. Kpotufe (2011) shows that k-NN regression actually adapts to the local intrinsic dimension, without any modifications to the procedure or the data, under an integrated risk. In this paper, we show that this holds in the sup-norm as well, for a global intrinsic dimension.

Level Set Estimation

Density level-set estimation has been extensively studied and has significant implications for density-based clustering. Some works include Tsybakov et al. (1997) and Singh et al. (2009). It involves estimating the set {x : p(x) ≥ λ} given a finite i.i.d. sample from p, where λ is some known density level and p is the unknown density. This set can be seen as the high-density region of the data, and its connected components can thus be used as the core-sets in clustering. It can be shown that given a density estimator p̂ with guarantees on sup_x |p̂(x) − p(x)|, then taking {x : p̂(x) ≥ λ} as the estimate, the Hausdorff distance between the estimate and the true level set can also be bounded.

In this paper, we extend this idea to functions f which are not necessarily densities, given noisy observations of f. We obtain results similar to those familiar in the density setting, made possible by our established bounds for estimating f. An advantage of this approach is that it can be applied to clustering in the presence of features, where clusters are defined as regions of similar feature value rather than of similar density. In density-based clustering, it is typical that one does not assume access to features, and thus such procedures fail to readily take advantage of features when performing clustering. A similar approach was taken by Willett and Nowak (2007), who used nonparametric regression to estimate the level sets of a function; our consistency results are instead under the Hausdorff metric.

Global Maxima Estimation

We next give an interesting result for estimating the global maxima of a function. Given i.i.d. samples from some distribution on the input space, and observing a noisy value of f at each sample, we show a guarantee on the distance between the sample point with the highest k-NN regression value and the (unique) point which maximizes f. This gives us insight into how well a grid search or randomized search can estimate the maximum of a function.

This result can be compared to mode estimation in the density setting, where the objective is to find the point which maximizes the density function (Tsybakov, 1990). Dasgupta and Kpotufe (2014) show that given draws from a density, the sample point which maximizes the k-NN density estimator is close to the true maximizer of the density; moreover, they give finite-sample rates. Earlier works such as Romano (1988) provide asymptotic rates.

k-NN Regression

Throughout the paper, we assume a function f : X → R with compact support X ⊆ R^D, and that we have n datapoints (x_1, y_1), ..., (x_n, y_n) drawn as follows. The x_i's are drawn i.i.d. from a density p_X with support X. Then y_i := f(x_i) + ε_i, where the ε_i are i.i.d. draws of a random variable ε.

Definition 1.

X := supp(p_X) = supp(f) ⊆ R^D, where X is compact.

The first regularity assumption ensures that the support does not become arbitrarily thin anywhere. Otherwise, it becomes impossible to estimate the function in such areas from a random sample.

Assumption 1 (Support Regularity).

There exist r_0 > 0 and c_0 > 0 such that vol(B(x, r) ∩ X) ≥ c_0 · vol(B(x, r)) for all x ∈ X and 0 < r ≤ r_0, where B(x, r) := {x' ∈ R^D : |x − x'| ≤ r}.

The next assumption ensures that with a sufficiently large sample, we will obtain a good covering of the input space.

Assumption 2 (p_X bounded from below).

p_0 := inf_{x ∈ X} p_X(x) > 0.

Finally, we have a standard sub-Gaussian white noise assumption in our additive model.

Assumption 3 (Sub-Gaussian White noise).

ε satisfies E[ε] = 0 and is sub-Gaussian with parameter σ > 0 (i.e. E[exp(λε)] ≤ exp(λ²σ²/2) for all λ ∈ R).

We then define k-NN regression as follows.

Definition 2 (k-NN).

Let the k-NN radius of x ∈ R^D be r_k(x) := inf{r > 0 : |B(x, r) ∩ {x_1, ..., x_n}| ≥ k}, and let the k-NN set of x be N_k(x) := B(x, r_k(x)) ∩ {x_1, ..., x_n}. Then for all x ∈ R^D, the k-NN regression function with respect to the samples is defined as

f_k(x) := (1/k) Σ_{i : x_i ∈ N_k(x)} y_i.
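The following Python sketch implements Definition 2 directly (a brute-force search over all samples per query; the helper names are ours, not the paper's).

```python
# Direct implementation of Definition 2: the unweighted average of the y-values
# of the k nearest samples. Ties in distance are broken arbitrarily by argsort.
import numpy as np

def knn_regress(x_query, X, y, k):
    dists = np.linalg.norm(X - x_query, axis=1)   # distances to all samples
    nn_idx = np.argsort(dists)[:k]                # indices of the k-NN set N_k(x)
    return y[nn_idx].mean()

def knn_regress_all(X_query, X, y, k):
    return np.array([knn_regress(x, X, y, k) for x in X_query])
```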

Next, we define the following pointwise modulus of continuity, which will be used to express the bias of f_k for an arbitrary function f in later results.

Definition 3 (Modulus of continuity).

For x ∈ X and r ≥ 0, define u_x(r) := sup_{x' ∈ X : |x − x'| ≤ r} |f(x) − f(x')|.

We now state our main result about k-NN regression. Informally, it says that under the mild assumptions described above and for k ≳ log n, we have |f_k(x) − f(x)| ≲ u_x((k/n)^{1/D}) + σ √((log n)/k) uniformly in x ∈ X with high probability.

The first term corresponds to the bias. Using uniform VC-type concentration bounds, it can be shown that the k-NN radius r_k(x) can be uniformly bounded by a distance of approximately (k/n)^{1/D}, and hence no point in the k-NN set is farther than that from x. The bias can then be expressed in terms of that distance and u_x.

The second term corresponds to the variance. The 1/√k factor is not surprising, since the noise terms are averaged over k observations, and the extra √(log n) factor comes from the cost of obtaining a uniform bound.

Definition 4.

Let v_D be the volume of the D-dimensional unit ball.

Theorem 1 (k-NN Regression Rate).

Suppose that Assumptions 1, 2, and 3 hold and that k ≳ D log n (with a constant factor depending on δ). Then with probability at least 1 − δ, the following holds uniformly in x ∈ X, where C is a constant depending only on D and δ:

|f_k(x) − f(x)| ≤ u_x( (2k / (c_0 p_0 v_D n))^{1/D} ) + C σ √((log n)/k).

Note that the above result is fairly general and makes no smoothness assumptions. In particular, f need not even be continuous. It is also important to point out that n must be sufficiently large in order for there to exist a k that satisfies the conditions. We can then apply this result to the class of Hölder continuous functions to obtain the following.

Corollary 1 (Rate for α-Hölder continuous functions).

Let 0 < α ≤ 1. Suppose that Assumptions 1, 2, and 3 hold and that k satisfies the condition of Theorem 1. If f is α-Hölder continuous (i.e. |f(x) − f(x')| ≤ C_α |x − x'|^α for all x, x' ∈ X), then with probability at least 1 − δ, uniformly in x ∈ X:

|f_k(x) − f(x)| ≤ C_α (2k / (c_0 p_0 v_D n))^{α/D} + C σ √((log n)/k).

Remark 1.

Taking k ≍ n^{2α/(2α+D)} (log n)^{D/(2α+D)} gives us a rate of ((log n)/n)^{α/(2α+D)}, which is the minimax optimal rate for estimating an α-Hölder function, up to logarithmic factors.
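For intuition, the choice of k in Remark 1 comes from balancing the two terms of Corollary 1 (a back-of-the-envelope calculation with constants suppressed):

(k/n)^{α/D} ≍ √((log n)/k)  ⟹  k^{α/D + 1/2} ≍ n^{α/D} √(log n)  ⟹  k ≍ n^{2α/(2α+D)} (log n)^{D/(2α+D)},

and substituting this choice of k back into either term gives the rate ((log n)/n)^{α/(2α+D)}.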

Remark 2.

It is understood that all our results will also hold under the assumption that the x_i's are fixed and deterministic (e.g. on a grid), as long as there is a sufficient covering of the space.

Regression On Manifolds

In this section, we show that if the data has a lower intrinsic dimension, then k-NN regression will automatically attain rates as if it were operating in the lower-dimensional space, independent of the ambient dimension.

We make the following regularity assumptions which are standard among works in manifold learning e.g. (Genovese et al., 2012; Balakrishnan et al., 2013).

Assumption 4.

p_X is supported on M, where:

  • M is a d-dimensional smooth compact Riemannian manifold without boundary, embedded in a compact subset X ⊆ R^D.

  • The volume of M is bounded above by a constant.

  • M has condition number 1/τ, which controls the curvature and prevents self-intersection.

Let p_X be the density of the x_i's with respect to the uniform measure on M.

We now give the manifold analogues of Theorem 1 and Corollary 1.

Theorem 2 (k-NN Regression Rate on Manifolds).

Suppose that Assumptions 2, 3, and 4 hold, that k ≳ d log n (with a constant factor depending on δ), and that k/n is sufficiently small. Then with probability at least 1 − δ, the following holds uniformly in x ∈ M, where C is a constant depending only on d and δ, and v_d is the volume of the d-dimensional unit ball:

|f_k(x) − f(x)| ≤ u_x( (4k / (p_0 v_d n))^{1/d} ) + C σ √((log n)/k).

Similar to the full dimensional case, we can then apply this to the class of Hölder continuous functions.

Corollary 2 (Rate for α-Hölder continuous functions).

Let 0 < α ≤ 1. Suppose that Assumptions 2, 3, and 4 hold and that k satisfies the conditions of Theorem 2. If f is α-Hölder continuous (i.e. |f(x) − f(x')| ≤ C_α |x − x'|^α for all x, x' ∈ M), then with probability at least 1 − δ, uniformly in x ∈ M:

|f_k(x) − f(x)| ≤ C_α (4k / (p_0 v_d n))^{α/d} + C σ √((log n)/k).

Remark 3.

Taking k ≍ n^{2α/(2α+d)} (log n)^{d/(2α+d)} gives us a rate of ((log n)/n)^{α/(2α+d)}, which is more attractive than the full-dimensional version when the intrinsic dimension d is lower than the ambient dimension D. We note that the bound contains a constant factor depending on D, but the rate at which it decreases as n grows does not.

Level Set Estimation

The λ-level set of f is the region of the input space where f takes values at least a fixed threshold λ.

Definition 5 (Level-Set).

L_f(λ) := {x ∈ X : f(x) ≥ λ}.

In order to estimate the level-sets, we require the following regularity assumption. It states that for each maximal connected component of the level-set, the change in the function around the boundary behaves polynomially, with smoothness and curvature controlled by an exponent β, within some neighborhood of the boundary. This notion of regularity at the boundaries of the level-sets is a standard one in density level-set estimation, e.g. Tsybakov et al. (1997); Singh et al. (2009).

Definition 6 (Level-Set Regularity).

Let β > 0, let ∂A denote the boundary of a set A, and let d(x, A) := inf_{x' ∈ A} |x − x'|. A function f satisfies β-regularity at level λ if the following holds. There exist constants ĉ, Č, r_M > 0 such that for each maximal connected subset A ⊆ L_f(λ), we have

ĉ · d(x, ∂A)^β ≤ |f(x) − λ| ≤ Č · d(x, ∂A)^β

for all x ∈ X with d(x, ∂A) ≤ r_M.
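As a concrete illustration (not from the paper): take f(x) = 1 − |x|² on R^D and λ = 1/2, so that L_f(λ) is the closed ball of radius 1/√2 and d(x, ∂L_f(λ)) = | |x| − 1/√2 |. For any x with d(x, ∂L_f(λ)) ≤ 1/(2√2),

|f(x) − λ| = | |x| − 1/√2 | · ( |x| + 1/√2 ),

so β-regularity holds at level λ with β = 1, ĉ = 3/(2√2), and Č = 5/(2√2).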

Remark 4.

The upper bound on |f(x) − λ| ensures that f is sufficiently smooth so that k-NN regression will give us sufficiently accurate estimates near the boundaries. The lower bound ensures that the level-set is salient enough to be detected.

To recover L_f(λ) based on the samples, we use the following estimator, where f_k is the k-NN regression estimate:

L̂(λ) := {x_i : f_k(x_i) ≥ λ − β_n},

where β_n := 2 σ̂ √((D log n + log(1/δ)) / k) and σ̂² := (1/n) Σ_{i=1}^n y_i². It will become clear later in the proofs that σ̂ is meant to be an upper bound on σ, and thus β_n is an upper bound on twice the variance term of the k-NN bound.

There are three simple but key differences between our estimator and L_f(λ). The first is that since we don't have access to the true function f, we use the k-NN regression estimate f_k. Next, instead of ranging over all of X, we restrict to the sample points x_1, ..., x_n. This makes our estimator feasible to compute, since it is a subset of the sample points. Finally, we subtract the slack β_n to bound the uniform deviation of f_k near the boundary of the level-set (as will be apparent in the proof). The main difficulty is choosing β_n large enough to bound this uniform deviation, but not so large as to overestimate the level-set, while ensuring that β_n can be computed without knowledge of f or any unknown constants (we only need the confidence parameter δ and the dimension, as well as quantities computable from the data such as σ̂). Thus, our estimator is practical.
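A sketch of the estimator in Python follows; the exact constants inside β_n are our own illustrative choice (the paper's β_n has the same order, σ̂ √((log n)/k), but its own explicit constants).

```python
# Level-set estimator sketch: keep the sample points whose k-NN regression value
# clears lambda minus the slack beta_n (constants in beta_n are illustrative).
import numpy as np

def estimate_level_set(X, y, k, lam, delta=0.05):
    n, D = X.shape
    # k-NN regression evaluated at each sample point (brute force)
    f_hat = np.array([y[np.argsort(np.linalg.norm(X - x, axis=1))[:k]].mean() for x in X])
    sigma_hat = np.sqrt(np.mean(y ** 2))          # crude empirical upper bound on sigma
    beta_n = 2.0 * sigma_hat * np.sqrt((D * np.log(n) + np.log(1.0 / delta)) / k)
    return X[f_hat >= lam - beta_n]               # estimated level set (subset of the samples)
```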

We provide a consistency result under the Hausdorff metric. We note that this is a strong notion of consistency, since it gives a uniform guarantee on the constituents of our estimator.

Definition 7 (Hausdorff Distance).

d_H(A, B) := max{ sup_{a ∈ A} d(a, B), sup_{b ∈ B} d(b, A) }, where d(a, B) := inf_{b ∈ B} |a − b|.
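For completeness, a small numpy helper (ours, not the paper's) computing the Hausdorff distance between two finite point sets, e.g. for empirically comparing L̂(λ) with a discretization of L_f(λ):

```python
# Hausdorff distance between two finite point sets (rows of A and B), per Definition 7.
import numpy as np

def hausdorff(A, B):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```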

The next result gives us finite-sample consistency rates for our estimator.

Theorem 3 (Level Set Recovery).

Suppose that Assumptions 1, 2, and 3 hold. Let f be continuous and satisfy β-regularity at level λ. Define σ_Y² := E[y²], where the expectation is taken over x ∼ p_X and the noise ε, and suppose that n is sufficiently large depending on σ_Y, δ, and the regularity constants. If k satisfies the range described in Remark 5, then with probability at least 1 − δ,

d_H(L̂(λ), L_f(λ)) ≲ ( σ_Y √((log n)/k) )^{1/β} + (k/n)^{1/D},

where ≲ hides constants depending on D, β, δ, and the regularity and distribution constants.

Remark 5.

Although the statement may appear obfuscated, it essentially says that as long as f is a continuous function satisfying β-regularity at level λ, then for any k in a wide range (roughly, D log n ≲ k ≲ n, with endpoints depending on the regularity constants), we have with high probability that d_H(L̂(λ), L_f(λ)) is bounded by the two terms above.

Remark 6.

Choosing k at the optimal setting k ≍ n^{2β/(2β+D)} (log n)^{D/(2β+D)}, the two terms balance, and it follows that we recover the level sets at a Hausdorff rate of ((log n)/n)^{1/(2β+D)}. This can be compared to the lower bound established by Tsybakov et al. (1997) for estimating the level sets of an unknown density.

We can give a similar result when the data lies on a lower dimensional manifold. Interestingly, we can use the exact same estimator as before as if we were operating in the full dimensional space.

Theorem 4 (Level Set Recovery on Manifolds).

Suppose that Assumptions 1, 2, 3, and 4 hold. Let f be continuous and satisfy β-regularity at level λ. Define σ_Y² := E[y²] as in Theorem 3, and suppose that n is sufficiently large depending on σ_Y, δ, the manifold, and the regularity constants. If k satisfies the analogous range with d in place of D, then with probability at least 1 − δ,

d_H(L̂(λ), L_f(λ)) ≲ ( σ_Y √((log n)/k) )^{1/β} + (k/n)^{1/d}.

Remark 7.

The main difference from the full-dimensional version is that the conditions on k need only involve the intrinsic dimension d. Choosing k at the optimal setting k ≍ n^{2β/(2β+d)} (log n)^{d/(2β+d)}, we recover the level sets at a rate of ((log n)/n)^{1/(2β+d)}.

Remarkably, we obtain the rate as if we were operating on the lower dimensional space. This has not been shown for level-set estimation on manifolds for density functions (which is a different problem).

The rate for density functions under similar regularity assumptions (Jiang, 2017) depends on the ambient dimension and is thus slower. In other words, we escape the curse of dimensionality with regression level-set estimation but do not escape it for density level-set estimation.

Global Maxima Estimation

In this section, we give guarantees on estimating the global maxima of f.

Definition 8.

x_0 is a maximum of f if f(x_0) ≥ f(x) for all x ∈ B(x_0, r), for some r > 0.

We then make the following assumption, which states that f has a unique maximum, at which it has a negative-definite Hessian.

Assumption 5.

f has a unique maximum x_0, and f has a negative-definite Hessian at x_0.

This assumption leads to the following, which states that f has quadratic smoothness and decay around x_0.

Lemma 1 (Dasgupta and Kpotufe (2014)).

Let f satisfy Assumption 5. Then there exist constants ĉ, Č, η > 0 such that the following holds:

ĉ · |x − x_0|² ≤ f(x_0) − f(x) ≤ Č · |x − x_0|²

for all x ∈ A_0, where A_0 is a connected component of {x ∈ X : f(x) ≥ f(x_0) − η} and contains x_0.

We utilize the following estimator, which is the maximizer of the k-NN regression estimate f_k amongst the sample points x_1, ..., x_n:

x̂ := argmax_{x ∈ {x_1, ..., x_n}} f_k(x).
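In code, the estimator is a one-liner on top of the earlier k-NN sketch:

```python
# Global-maxima estimator: the sample point with the largest k-NN regression value.
import numpy as np

def estimate_max(X, y, k):
    f_hat = np.array([y[np.argsort(np.linalg.norm(X - x, axis=1))[:k]].mean() for x in X])
    return X[np.argmax(f_hat)]     # \hat{x} := argmax over the sample points
```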

We next give a result on the accuracy of x̂ in estimating x_0.

Theorem 5.

Suppose that f is continuous and that Assumptions 1, 2, 3, and 5 hold. Let k satisfy the conditions of Theorem 1. Then the following holds with probability at least 1 − δ, where ≲ hides constants depending on D, δ, and the constants of Lemma 1:

|x̂ − x_0| ≲ (k / (p_0 n))^{1/D} + ( σ² (log n) / k )^{1/4}.

Remark 8.

Taking k ≍ n^{4/(4+D)} (log n)^{D/(4+D)} optimizes the above expression, so that |x̂ − x_0| ≲ ((log n)/n)^{1/(4+D)}. This can be compared to the minimax rate for mode estimation established by Tsybakov (1990). We stress, however, that estimating the mode of a density function is a different problem.
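The balancing behind Remark 8, with constants suppressed:

(k/n)^{1/D} ≍ ((log n)/k)^{1/4}  ⟹  k ≍ n^{4/(4+D)} (log n)^{D/(4+D)}  ⟹  |x̂ − x_0| ≲ ((log n)/n)^{1/(4+D)}.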

Remark 9.

An analogue for the global minimum also holds. Moreover, in the manifold setting, we can obtain a rate of ((log n)/n)^{1/(4+d)}, which has not been shown for mode estimation in densities.

Proofs

Proof of Theorem 1

The following lemma bounds r_k(x) uniformly in x ∈ X.

Lemma 2.

The following holds with probability at least 1 − δ. If k satisfies the condition of Theorem 1, then

sup_{x ∈ X} r_k(x) ≤ ( 2k / (c_0 p_0 v_D n) )^{1/D}.

Proof.

Let r := (2k / (c_0 p_0 v_D n))^{1/D}. By Assumptions 1 and 2, we have n · P_X(B(x, r)) ≥ n · c_0 p_0 v_D r^D = 2k for every x ∈ X. By Lemma 7 of Chaudhuri and Dasgupta (2010) and the condition on k, it follows that with probability at least 1 − δ, uniformly in x ∈ X, the ball B(x, r) contains at least k sample points. Hence r_k(x) ≤ r, and the result follows immediately. ∎

The next result bounds the number of distinct k-NN sets over R^D.

Lemma 3.

Let m be the number of distinct k-NN sets over R^D, that is, m := |{N_k(x) : x ∈ R^D}|. Then m ≤ (D + 1) · n^{2D}.

Proof.

First, let A be the partitioning of R^D induced by the hyperplanes defined as the perpendicular bisectors of each pair of sample points x_i, x_j with i ≠ j. Let us denote this set of hyperplanes as H. We have that if x, x' are in the same cell of A, then N_k(x) = N_k(x'). If not, then any path from x to x' must cross some perpendicular bisector in H, which would be a contradiction. Thus, m ≤ |A|.

Now we will bound |A|. Since H is finite, choose vectors v_1, ..., v_D such that they form an orthogonal basis of R^D and none of these vectors is perpendicular to any hyperplane in H. Let v_1, ..., v_D induce hyperplanes h_1, ..., h_D, respectively (i.e. h_j being the orthogonal complement of v_j). Without loss of generality, orient the space such that v_1 is the vertical direction (so that we can use descriptions such as 'above' and 'below'). For each region in A that is bounded below, associate such a region with its lowest point. Then it follows that there are at most C(|H|, D) of these regions, since each such lowest point is the intersection of D hyperplanes in H.

We next count the regions unbounded below. Place h_1 below the lowest point of every region of A that is bounded below. Then the regions of A unbounded below correspond to the regions of A_1, the partitioning of h_1 induced by the intersections of the hyperplanes in H with h_1. It thus remains to count |A_1|.

We now orient the space so that v_2 corresponds to the vertical direction within h_1. Then we can repeat the same procedure, associating each region in A_1 that is bounded below with its lowest point. There are at most C(|H|, D − 1) of these, since each such lowest point is an intersection of D − 1 hyperplanes of H together with h_1; then, placing h_2 sufficiently low, the remaining regions correspond to the regions of A_2, the partitioning of h_1 ∩ h_2 induced by H.

Continuing this process, it follows that when we orient v_j to be the vertical direction in order to count |A_{j−1}|, the number of regions in A_{j−1} that are bounded below is at most C(|H|, D − j + 1), and the remaining ones correspond to the regions of A_j.

It thus follows that m ≤ |A| ≤ Σ_{j=0}^{D} C(|H|, j) ≤ (D + 1) · n^{2D}, as desired. ∎
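As a quick numerical sanity check of Lemma 3 (not part of the proof), one can count the distinct k-NN sets observed over a fine grid of query points; the grid only lower-bounds the true count, and the printed bound is the order stated above.

```python
# Count distinct k-NN sets over a grid of query points in D = 2 and compare
# with the polynomial-in-n bound of Lemma 3 (the grid only lower-bounds the count).
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, D, k = 30, 2, 3
X = rng.uniform(size=(n, D))

grid_1d = np.linspace(0, 1, 200)
distinct = set()
for q in product(grid_1d, repeat=D):
    dists = np.linalg.norm(X - np.array(q), axis=1)
    distinct.add(frozenset(np.argsort(dists)[:k]))

print(len(distinct), "distinct k-NN sets observed;",
      "Lemma 3 bound is of order n^(2D) =", n ** (2 * D))
```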

Proof of Theorem 1.

We have, for any x ∈ R^D,

|f_k(x) − f(x)| ≤ (1/k) | Σ_{x_i ∈ N_k(x)} (f(x_i) − f(x)) | + (1/k) | Σ_{x_i ∈ N_k(x)} ε_i |.

The first term can be viewed as the bias term and the second as the variance term.

By Lemma 2 (applied with confidence δ/2), we can bound the first term, with probability at least 1 − δ/2 uniformly in x ∈ X, by u_x((2k / (c_0 p_0 v_D n))^{1/D}), since every point of N_k(x) lies within distance r_k(x) of x. For the variance term, Hoeffding's inequality for sub-Gaussian random variables gives, for any fixed k-NN set and any t > 0,

P( (1/k) | Σ_{x_i ∈ N_k(x)} ε_i | > t ) ≤ 2 exp( −k t² / (2σ²) ).

Taking t := σ √( 2 (log m + log(4/δ)) / k ), this probability is at most δ/(2m).

By Lemma 3 and a union bound over the at most m distinct k-NN sets, it follows that with probability at least 1 − δ/2, the variance term is at most σ √(2 (log m + log(4/δ)) / k) ≤ C σ √((log n)/k) for a constant C depending only on D and δ. Hence, we have with probability at least 1 − δ,

|f_k(x) − f(x)| ≤ u_x( (2k / (c_0 p_0 v_D n))^{1/D} ) + C σ √((log n)/k)

uniformly in x ∈ X. ∎
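The rate of Theorem 1 can also be probed empirically; the following illustrative script (our own, with an arbitrary Lipschitz f in D = 1) tracks the sup-norm error over a grid as n grows, with k of order n^{2/3} as suggested by Remark 1 for α = 1.

```python
# Empirically track sup_x |f_k(x) - f(x)| over a grid for growing n (illustrative).
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.sin(2 * np.pi * x)
sigma = 0.3
grid = np.linspace(0, 1, 1000)

for n in [500, 2000, 8000]:
    X = rng.uniform(size=n)
    y = f(X) + sigma * rng.normal(size=n)
    k = int(np.ceil(n ** (2 / 3)))     # 2*alpha/(2*alpha + D) = 2/3 for alpha = D = 1
    f_hat = np.array([y[np.argsort(np.abs(X - g))[:k]].mean() for g in grid])
    print(n, k, np.max(np.abs(f_hat - f(grid))))
```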

It is easy to see that a simple modification to the proof of Theorem 1 will yield the following.

Corollary 3 (k-NN Regression Upper and Lower Bounds).

Let r̃_k := (2k / (c_0 p_0 v_D n))^{1/D} and β̃_k := C σ √((log n)/k) denote the bias radius and the variance term appearing in Theorem 1.

Suppose that Assumptions 1, 2, and 3 hold and that k satisfies the condition of Theorem 1. Then with probability at least 1 − δ, the following holds uniformly in x ∈ X:

inf_{x' ∈ B(x, r̃_k) ∩ X} f(x') − β̃_k ≤ f_k(x) ≤ sup_{x' ∈ B(x, r̃_k) ∩ X} f(x') + β̃_k.

Proof of Theorem 2

We need the following guarantee on the volume of the intersection of a Euclidean ball and M; this is required to get a handle on the true mass of the ball under p_X in later arguments. The proof can be found in Jiang (2017).

Lemma 4 (Ball Volume).

If x ∈ M and 0 < r ≤ τ/2, then

vol_M(B(x, r) ∩ M) ≥ (1 − C_M r²/τ²) · v_d r^d

for a constant C_M ≥ 0, where vol_M is the volume w.r.t. the uniform measure on M and v_d is the volume of the d-dimensional unit ball.

The next is the manifold analogue of Lemma 2.

Lemma 5.

Suppose that Assumptions 2, 3, and 4 hold. The following holds with probability at least 1 − δ. If k satisfies the conditions of Theorem 2, then for all x ∈ M,

r_k(x) ≤ ( 4k / (p_0 v_d n) )^{1/d}.

Proof.

Let r := (4k / (p_0 v_d n))^{1/d}. Using Lemma 4 and the fact that r is sufficiently small relative to τ (by the condition that k/n is small), we have

n · P_X(B(x, r)) ≥ n · p_0 · vol_M(B(x, r) ∩ M) ≥ (1/2) n p_0 v_d r^d = 2k.

By Lemma 7 of Chaudhuri and Dasgupta (2010) and the condition on k, it follows that with probability at least 1 − δ, uniformly in x ∈ M, the ball B(x, r) contains at least k sample points. Hence r_k(x) ≤ r, and the result follows immediately. ∎

Theorem 2 now follows by replacing the usage of Lemma 2 with Lemma 5 in the proof of Theorem 1. We also note that an analogous result to Corollary 3 can be established.

It is easy to see that a simple modification to the proof of Theorem 2 will yield the following.

Corollary 4 (k-NN Regression Upper and Lower Bounds on Manifolds).

Let r̃_k := (4k / (p_0 v_d n))^{1/d} and let β̃_k := C σ √((log n)/k) be as in Corollary 3.

Suppose that Assumptions 2, 3, and 4 hold and that k satisfies the conditions of Theorem 2. Then with probability at least 1 − δ, the following holds uniformly in x ∈ M:

inf_{x' ∈ B(x, r̃_k) ∩ M} f(x') − β̃_k ≤ f_k(x) ≤ sup_{x' ∈ B(x, r̃_k) ∩ M} f(x') + β̃_k.

Proofs of Theorems 3 and 4

Proof of Theorem 3.

We have that E[σ̂²] = E[y²] = σ_Y². Thus, when n is sufficiently large depending on σ_Y, δ, and the distribution of y, we have by Bernstein-type concentration inequalities that, with probability at least 1 − δ/2, σ ≤ σ̂ ≤ 2 σ_Y.

Let r̃_k and β̃_k be as in Corollary 3, and define ε_n := r̃_k + ((β_n + β̃_k)/ĉ)^{1/β}. It suffices to show that (1) sup_{x ∈ L̂(λ)} d(x, L_f(λ)) ≤ ε_n and (2) sup_{x ∈ L_f(λ)} d(x, L̂(λ)) ≤ ε_n. We begin with (1). For any x_i ∈ L̂(λ) we have

sup_{x' ∈ B(x_i, r̃_k) ∩ X} f(x') ≥ f_k(x_i) − β̃_k ≥ λ − β_n − β̃_k,

where the first inequality holds by Corollary 3 and the second by the definition of L̂(λ). By β-regularity and the fact that σ̂ ≥ σ (which, together with the conditions on k and n, ensures that the relevant points lie in the regularity neighborhood), any such x' with f(x') ≥ λ − β_n − β̃_k lies within distance ((β_n + β̃_k)/ĉ)^{1/β} of L_f(λ). Thus, if x_i ∈ L̂(λ), then d(x_i, L_f(λ)) ≤ r̃_k + ((β_n + β̃_k)/ĉ)^{1/β} = ε_n. Therefore, sup_{x ∈ L̂(λ)} d(x, L_f(λ)) ≤ ε_n, which establishes (1).

We now show (2). Let x ∈ L_f(λ). Since L̂(λ) consists of sample points, it suffices to show that some sample point within distance ε_n of x belongs to L̂(λ). For any such x, we have

n · P_X(B(x, r̃_k)) ≥ n c_0 p_0 v_D r̃_k^D ≥ 2k,

where the last inequality holds by the definition of r̃_k and the conditions on k. Hence, by Lemma 7 of Chaudhuri and Dasgupta (2010), we have that B(x, r̃_k) contains at least k sample points. Thus, for any x ∈ L_f(λ), there exists a sample point in