
On Hypothesis Testing via a Tunable Loss

by   Akira Kamatsuka, et al.
Shonan Institute of Technology

We consider the problem of simple hypothesis testing with a randomized test, via the tunable loss function proposed by Liao et al. For this problem, we derive results that correspond to the Neyman–Pearson lemma, the Chernoff–Stein lemma, and the Chernoff information in the classical hypothesis testing problem. Specifically, we prove that the optimal error exponent of our problem in the Neyman–Pearson setting coincides with the classical result. Moreover, we provide lower bounds on the optimal Bayesian error exponent.
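The abstract compares the paper's exponents with classical quantities. As background, the classical Chernoff information, which gives the optimal Bayesian error exponent for binary hypothesis testing between two i.i.d. sources, can be sketched as follows. This is not from the paper itself; it is a minimal illustration of the standard definition, with the example distributions chosen arbitrarily.

```python
import math

def chernoff_information(p, q, grid=1000):
    """Classical Chernoff information between two finite distributions:
        C(p, q) = max_{0 < s < 1} -log( sum_x p(x)^s * q(x)^(1-s) ).
    It equals the optimal Bayesian error exponent in classical binary
    hypothesis testing with i.i.d. samples (grid search over s here)."""
    best = 0.0
    for k in range(1, grid):
        s = k / grid
        val = -math.log(sum(pi**s * qi**(1 - s) for pi, qi in zip(p, q)))
        best = max(best, val)
    return best

# Hypothetical example: two biased coins under H0 and H1.
p = [0.3, 0.7]
q = [0.6, 0.4]
C = chernoff_information(p, q)
```

The Chernoff information is always nonnegative and never exceeds either Kullback–Leibler divergence between the two hypotheses, which matches the "lower bounds on the optimal Bayesian error exponent" framing in the abstract.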
