
Nearest Empirical Distribution: An Asymptotically Optimal Algorithm For Supervised Classification of Data Vectors with Independent Non-Identically Distributed Elements

by Farzad Shahrivari et al.

In this paper, we propose a classifier for supervised classification of data vectors with mutually independent but non-identically distributed elements. For the proposed classifier, we derive an upper bound on the error probability and show that the error probability goes to zero as the length of the data vectors grows, even when only one training data vector per label is available. As a result, the proposed classifier is asymptotically optimal for this type of data vector. Our numerical examples show that the proposed classifier outperforms conventional classification algorithms when the number of training samples is small and the length of the data vectors is sufficiently large.
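The abstract does not spell out the classifier's construction, but the name suggests assigning a test vector to the label whose training vector has the nearest empirical distribution. The following is a minimal illustrative sketch under strong simplifying assumptions: a finite alphabet, and all elements pooled into a single empirical distribution (which matches only the i.i.d. special case, not the non-identically distributed setting the paper actually treats). Function names and the choice of total-variation distance are our own, not the paper's.

```python
import numpy as np

def empirical_distribution(x, alphabet_size):
    # Normalized histogram of symbol counts over a finite alphabet.
    counts = np.bincount(x, minlength=alphabet_size)
    return counts / counts.sum()

def classify(test_vec, train_vecs, labels, alphabet_size):
    # Assign the label whose (single) training vector has the nearest
    # empirical distribution to the test vector, measured here in
    # total-variation (L1/2) distance -- an assumed metric, for illustration.
    p_test = empirical_distribution(test_vec, alphabet_size)
    dists = [
        0.5 * np.abs(p_test - empirical_distribution(t, alphabet_size)).sum()
        for t in train_vecs
    ]
    return labels[int(np.argmin(dists))]
```

With one training vector per label, e.g. one dominated by 0s and one dominated by 1s, a test vector dominated by 0s is assigned the first label. As the vector length grows, the empirical distributions concentrate around the true ones, which is the intuition behind the vanishing error probability claimed in the abstract.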

