Statistical Optimality of Interpolated Nearest Neighbor Algorithms

10/05/2018
by Yue Xing, et al.

In the era of deep learning, understanding the over-fitting phenomenon has become increasingly important. It is observed that carefully designed deep neural networks achieve small testing error even when the training error is close to zero. One possible explanation is that, for many modern machine learning algorithms, over-fitting can greatly reduce the estimation bias while not increasing the estimation variance too much. To illustrate this idea, we prove that an interpolated nearest neighbor algorithm achieves the minimax optimal rate in both the regression and classification regimes, and we observe that it is empirically better than the traditional k-nearest-neighbor method in some cases.
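The abstract does not spell out the estimator, but one standard way to make a k-nearest-neighbor regressor interpolate its training data is to use singular (inverse-distance) weights, so the weight of a neighbor diverges as its distance goes to zero. The sketch below illustrates that idea; the function name `interpolated_knn_predict` and the weight exponent `gamma` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def interpolated_knn_predict(X_train, y_train, X_test, k=5, gamma=2.0):
    """Weighted k-NN regression with singular (inverse-distance) weights.

    Each neighbor i gets weight w_i = d_i**(-gamma), which diverges as
    d_i -> 0, so the fit passes exactly through every training point
    (zero training error) while still averaging over k neighbors at
    ordinary test points. This is a sketch, not the paper's estimator.
    """
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        idx = np.argsort(d)[:k]                   # indices of the k nearest
        d_k = d[idx]
        if d_k[0] == 0.0:
            # Query coincides with a training point: interpolate it exactly.
            preds.append(y_train[idx[0]])
        else:
            w = d_k ** (-gamma)                   # singular weights
            preds.append(np.dot(w, y_train[idx]) / w.sum())
    return np.array(preds)
```

Evaluating this estimator on its own training inputs returns the training labels exactly, which is the interpolation ("zero training error") property the abstract refers to.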
