An upper bound on prototype set size for condensed nearest neighbor

09/29/2013
by Eric Christiansen, et al.

The condensed nearest neighbor (CNN) algorithm is a heuristic for reducing the number of prototypical points stored by a nearest neighbor classifier, while keeping the classification rule given by the reduced prototype set consistent with the full training set. I present an upper bound on the number of prototypical points accumulated by CNN. The bound derives from a bound on the number of decision-rule updates made during training by the multiclass perceptron algorithm, and is therefore independent of training set size.
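For readers unfamiliar with the procedure, the CNN heuristic itself is simple to state: greedily add a training point to the prototype set whenever the current prototypes misclassify it under the 1-NN rule, sweeping over the data until a full pass adds nothing. A minimal sketch (assuming Euclidean distance and a seed prototype taken as the first training point; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def condensed_nearest_neighbor(X, y):
    """Hart's CNN heuristic: build a 1-NN-consistent prototype subset.

    A point is appended to the prototype set whenever the current
    prototypes misclassify it under the 1-NN rule; sweeps repeat until
    one full pass makes no additions, so the returned prototypes
    classify every training point the same way the full set does.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    proto_idx = [0]  # seed with the first training point (a common choice)
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            P = X[proto_idx]
            # index (into X) of the nearest current prototype
            nearest = proto_idx[int(np.argmin(((P - X[i]) ** 2).sum(axis=1)))]
            if y[nearest] != y[i]:
                proto_idx.append(i)  # misclassified: promote to prototype
                changed = True
    return proto_idx

# Two well-separated 1-D clusters: CNN keeps one prototype per cluster.
idx = condensed_nearest_neighbor(
    [[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]], [0, 0, 0, 1, 1, 1]
)
print(idx)  # → [0, 3]
```

Note that the size of `proto_idx` at termination is exactly the quantity the abstract bounds: each append corresponds to one mistake-driven update, which is what links the analysis to the perceptron mistake bound.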


