A Two-Stage Active Learning Algorithm for k-Nearest Neighbors

11/19/2022
by Nick Rittler, et al.

We introduce a simple and intuitive two-stage active learning algorithm for training k-nearest neighbor classifiers. We provide consistency guarantees for a modified k-nearest neighbor classifier trained on samples acquired via our scheme, and show that when the conditional probability function ℙ(Y=y|X=x) is sufficiently smooth and the Tsybakov noise condition holds, our actively trained classifiers converge to the Bayes optimal classifier at a faster asymptotic rate than passively trained k-nearest neighbor classifiers.
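The abstract does not spell out the two stages, but a common pattern that such schemes follow can be sketched as follows: spend part of the label budget on a uniformly sampled exploration set, then spend the remainder on pool points whose estimated conditional probability is closest to 1/2, i.e., near the decision boundary where kNN votes are least reliable. The function below is a hypothetical illustration of that pattern, not the paper's actual algorithm; the names `oracle`, `stage1_frac`, and `knn_prob` are illustrative assumptions.

```python
import numpy as np

def two_stage_active_knn(X_pool, oracle, budget, k=5, stage1_frac=0.5, rng=None):
    """Hypothetical two-stage active labeling sketch for binary kNN.

    Stage 1: label a uniform random subset of the pool (exploration).
    Stage 2: label the pool points whose stage-1 kNN vote is closest
    to 1/2, i.e., points near the estimated decision boundary.
    """
    rng = np.random.default_rng(rng)
    n = len(X_pool)
    b1 = int(budget * stage1_frac)

    # Stage 1: uniform exploration over the unlabeled pool.
    idx1 = rng.choice(n, size=b1, replace=False)
    y1 = np.array([oracle(X_pool[i]) for i in idx1])

    # Estimate P(Y=1 | x) with a kNN vote over the stage-1 labels.
    def knn_prob(x):
        d = np.linalg.norm(X_pool[idx1] - x, axis=1)
        nearest = np.argsort(d)[:k]
        return y1[nearest].mean()

    # Stage 2: query the remaining points with votes closest to 1/2.
    remaining = np.setdiff1d(np.arange(n), idx1)
    margin = np.array([abs(knn_prob(X_pool[i]) - 0.5) for i in remaining])
    idx2 = remaining[np.argsort(margin)[: budget - b1]]
    y2 = np.array([oracle(X_pool[i]) for i in idx2])

    labeled_idx = np.concatenate([idx1, idx2])
    labels = np.concatenate([y1, y2])
    return labeled_idx, labels
```

Under smoothness and Tsybakov noise assumptions, concentrating the second-stage budget near the boundary is what drives the faster convergence rates the abstract claims, since errors of a kNN classifier are dominated by the region where ℙ(Y=1|X=x) is close to 1/2.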


Related research

Nearest Neighbor Classifier with Margin Penalty for Active Learning (03/17/2022)
As deep learning becomes the mainstream in the field of natural language...

Stabilized Nearest Neighbor Classifier and Its Statistical Properties (05/26/2014)
The stability of statistical analysis is an important indicator for repr...

Instance-based learning using the Half-Space Proximal Graph (02/04/2021)
The primary example of instance-based learning is the k-nearest neighbor...

On Convergence of Nearest Neighbor Classifiers over Feature Transformations (10/15/2020)
The k-Nearest Neighbors (kNN) classifier is a fundamental non-parametric...

Not All are Made Equal: Consistency of Weighted Averaging Estimators Under Active Learning (10/11/2019)
Active learning seeks to build the best possible model with a budget of ...

Multi-hypothesis classifier (08/20/2019)
Accuracy is the most important parameter among few others which defines ...

Certified Robustness of Nearest Neighbors against Data Poisoning Attacks (12/07/2020)
Data poisoning attacks aim to corrupt a machine learning model via modif...
