
Statistical Classification via Robust Hypothesis Testing

by   Hüseyin Afşer, et al.

In this letter, we consider the multiple statistical classification problem, in which a sequence of n independent and identically distributed observations, generated by one of M discrete sources, needs to be classified. The source distributions are not known; however, one has access to labeled training sequences of length N from each source. We consider the case where the unknown source distributions are estimated from the training sequences, and the estimates are then used as nominal distributions in a robust hypothesis test. Specifically, we consider the robust DGL test due to Devroye et al. and provide non-asymptotic exponential bounds, as functions of N/n, on the error probability of classification.
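The plug-in scheme described above can be sketched in a few lines: estimate each source's distribution empirically from its training sequence, then classify the test sequence by pairwise robust comparisons. The code below is a minimal illustration, not the authors' exact procedure; the function names are hypothetical, and the pairwise decision rule is a DGL-style Scheffé-set comparison (each pair of nominal distributions is compared only through the set where one assigns more mass than the other).

```python
import numpy as np

def empirical_pmf(seq, alphabet_size):
    """Empirical distribution of a discrete sequence over {0, ..., K-1}."""
    counts = np.bincount(seq, minlength=alphabet_size)
    return counts / len(seq)

def dgl_classify(test_seq, train_seqs, alphabet_size):
    """Plug-in robust classification sketch (pairwise Scheffe-set tests).

    The nominal distributions are empirical estimates from the labeled
    training sequences; the test sequence is assigned to the class that
    wins the most pairwise comparisons.
    """
    mu = empirical_pmf(test_seq, alphabet_size)            # test empirical pmf
    p_hat = [empirical_pmf(s, alphabet_size) for s in train_seqs]
    M = len(p_hat)
    wins = np.zeros(M, dtype=int)
    for i in range(M):
        for j in range(i + 1, M):
            # Scheffe set A_ij = {x : p_i(x) > p_j(x)}
            A = p_hat[i] > p_hat[j]
            # Compare how far the test pmf sits from each nominal on A_ij
            d_i = abs(mu[A].sum() - p_hat[i][A].sum())
            d_j = abs(mu[A].sum() - p_hat[j][A].sum())
            if d_i <= d_j:
                wins[i] += 1
            else:
                wins[j] += 1
    return int(np.argmax(wins))
```

For well-separated sources and moderate N and n, the winning class matches the true source with high probability, consistent with the exponential error bounds discussed in the letter.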




Statistical Classification via Robust Hypothesis Testing: Non-Asymptotic and Simple Bounds

We consider Bayesian multiple statistical classification problem in the ...

Some Remarks on Bayesian Multiple Hypothesis Testing

We consider Bayesian multiple hypothesis problem with independent and id...

Asymptotics for Outlier Hypothesis Testing

We revisit the outlier hypothesis testing framework of Li et al. (TIT 20...

Do Random and Chaotic Sequences Really Cause Different PSO Performance?

Our topic is performance differences between using random and chaos for ...

Evaluation of Error Probability of Classification Based on the Analysis of the Bayes Code

Suppose that we have two training sequences generated by parametrized di...

Adversarial Source Identification Game with Corrupted Training

We study a variant of the source identification game with training data ...

The Perturbed Variation

We introduce a new discrepancy score between two distributions that give...