
Statistical Classification via Robust Hypothesis Testing

06/09/2021
by Hüseyin Afşer, et al.

In this letter, we consider the multiple statistical classification problem, in which a sequence of n independent and identically distributed observations, generated by one of M discrete sources, must be classified. The source distributions are unknown; however, one has access to a labeled training sequence of length N from each source. We consider the case where the unknown source distributions are first estimated from the training sequences, and the estimates are then used as nominal distributions in a robust hypothesis test. Specifically, we consider the robust DGL test due to Devroye et al. and provide non-asymptotic exponential bounds, as functions of N/n, on the error probability of classification.
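As an illustration of the approach described above, the following sketch estimates each source's distribution from its training sequence and then classifies a test sequence with a pairwise Scheffé-set test in the style of the DGL (Devroye–Győrfi–Lugosi) robust test: for each pair of nominals (p, q) with Scheffé set A = {x : p(x) > q(x)}, the source whose mass on A is closer to the empirical mass of the test sequence wins the pairwise comparison. This is only a minimal sketch under simplifying assumptions (finite alphabet, majority vote over pairwise tests); the function names, the example alphabet, and the parameters are illustrative, not taken from the letter.

```python
import numpy as np

def empirical_pmf(seq, alphabet_size):
    # Type (empirical distribution) of a discrete sequence over {0, ..., K-1}.
    counts = np.bincount(np.asarray(seq), minlength=alphabet_size)
    return counts / len(seq)

def dgl_classify(test_seq, nominals):
    """Pairwise Scheffe-set (DGL-style) robust test.

    nominals: list of M estimated pmfs (one per source, from training data).
    Returns the index of the source winning the most pairwise tests.
    """
    K = len(nominals[0])
    mu = empirical_pmf(test_seq, K)          # empirical pmf of the test sequence
    M = len(nominals)
    wins = np.zeros(M, dtype=int)
    for i in range(M):
        for j in range(i + 1, M):
            p, q = nominals[i], nominals[j]
            A = p > q                        # Scheffe set A_ij = {x : p(x) > q(x)}
            mu_A, p_A, q_A = mu[A].sum(), p[A].sum(), q[A].sum()
            # Decide i iff the empirical mass of A is closer to p(A) than to q(A).
            if abs(mu_A - p_A) < abs(mu_A - q_A):
                wins[i] += 1
            else:
                wins[j] += 1
    return int(np.argmax(wins))

# Usage: length-N labeled training sequences give the nominal distributions,
# which are then plugged into the robust test for a length-n observation.
rng = np.random.default_rng(0)
sources = [np.array([0.6, 0.3, 0.1]), np.array([0.1, 0.3, 0.6])]
train = [rng.choice(3, size=200, p=p) for p in sources]   # N = 200 per source
nominals = [empirical_pmf(t, 3) for t in train]
test = rng.choice(3, size=100, p=sources[1])              # n = 100, from source 1
print(dgl_classify(test, nominals))
```

The bounds in the letter quantify how the classification error of such a plug-in robust test decays as both the training length N and the observation length n grow.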

