
Nearest Empirical Distribution: An Asymptotically Optimal Algorithm For Supervised Classification of Data Vectors with Independent Non-Identically Distributed Elements

08/01/2020
by Farzad Shahrivari, et al.

In this paper, we propose a classifier for supervised classification of data vectors with mutually independent but non-identically distributed elements. For the proposed classifier, we derive an upper bound on the error probability and show that the error probability goes to zero as the length of the data vectors grows, even when only one training data vector per label is available. As a result, the proposed classifier is asymptotically optimal for this type of data vector. Our numerical examples show that the proposed classifier outperforms conventional classification algorithms when the number of training samples is small and the length of the data vectors is sufficiently large.
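The abstract does not spell out the construction, but the title suggests a type-based rule: represent each vector by its empirical distribution and assign a test vector the label of the training vector whose empirical distribution is nearest. Below is a minimal Python sketch of that idea, assuming a finite alphabet and total-variation distance; the function names, the distance choice, and the handling of the non-identically distributed elements (here simply pooled into a single empirical distribution) are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def empirical_distribution(x, alphabet_size):
    """Empirical distribution (type) of a discrete-valued vector x."""
    counts = np.bincount(x, minlength=alphabet_size)
    return counts / len(x)

def nearest_empirical_distribution(test_x, train_vectors, labels, alphabet_size):
    """Label test_x with the label of the training vector whose empirical
    distribution is closest in total-variation distance (an assumed metric;
    the paper may use a different divergence)."""
    q = empirical_distribution(test_x, alphabet_size)
    dists = [0.5 * np.abs(empirical_distribution(v, alphabet_size) - q).sum()
             for v in train_vectors]
    return labels[int(np.argmin(dists))]
```

A toy run with one training vector per label, in the regime the abstract describes (few training samples, long vectors):

```python
rng = np.random.default_rng(0)
n = 10_000  # long vectors: the claimed error bound vanishes as n grows
train = [rng.integers(0, 4, n),                     # label 0: uniform source
         rng.choice(4, n, p=[0.7, 0.1, 0.1, 0.1])]  # label 1: skewed source
test = rng.choice(4, n, p=[0.7, 0.1, 0.1, 0.1])
print(nearest_empirical_distribution(test, train, labels=[0, 1], alphabet_size=4))  # -> 1
```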


Related research:

06/23/2022 · Universal Neyman-Pearson Classification with a Known Hypothesis
We propose a universal classifier for binary Neyman-Pearson classificati...

03/16/2015 · High-dimensional quadratic classifiers in non-sparse settings
We consider high-dimensional quadratic classifiers in non-sparse setting...

10/31/2020 · Optimal 1-NN Prototypes for Pathological Geometries
Using prototype methods to reduce the size of training datasets can dras...

10/09/2015 · Some Theory For Practical Classifier Validation
We compare and contrast two approaches to validating a trained classifie...

02/22/2021 · Probabilistic Learning on Manifolds (PLoM) with Partition
The probabilistic learning on manifolds (PLoM) introduced in 2016 has so...

10/27/2017 · Probability Series Expansion Classifier that is Interpretable by Design
This work presents a new classifier that is specifically designed to be ...