On Hypothesis Testing via a Tunable Loss

08/28/2022
by   Akira Kamatsuka, et al.

We consider a simple hypothesis testing problem with a randomized test based on the tunable loss function proposed by Liao et al. For this problem, we derive counterparts of the Neyman–Pearson lemma, the Chernoff–Stein lemma, and the Chernoff information from classical hypothesis testing. Specifically, we prove that the optimal error exponent of our problem in the Neyman–Pearson setting coincides with the classical result. Moreover, we provide lower bounds on the optimal Bayesian error exponent.
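The Chernoff information mentioned above, C(P, Q) = -min over 0 ≤ λ ≤ 1 of log Σ_x P(x)^λ Q(x)^(1-λ), can be evaluated numerically for finite alphabets. The sketch below is illustrative only; the function name and the grid search over λ are assumptions for this example, not the paper's method.

```python
import math

def chernoff_information(p, q, grid=1000):
    """Numerically approximate the Chernoff information
    C(P, Q) = -min_{0 <= lam <= 1} log sum_x P(x)^lam * Q(x)^(1 - lam)
    for two pmfs p, q given as lists over the same finite alphabet.
    A plain grid search over lam suffices for illustration."""
    best = float("inf")
    for i in range(grid + 1):
        lam = i / grid
        s = sum((pi ** lam) * (qi ** (1 - lam)) for pi, qi in zip(p, q))
        best = min(best, math.log(s))
    return -best

# Example: two Bernoulli distributions
P = [0.2, 0.8]
Q = [0.6, 0.4]
print(chernoff_information(P, Q))
```

Note that C(P, P) = 0 (the inner sum is 1 for every λ) and the quantity is symmetric in its arguments, which makes convenient sanity checks.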


Related research

06/26/2023 · Improved Random-Binning Exponent for Distributed Hypothesis Testing
Shimokawa, Han, and Amari proposed a "quantization and binning" scheme f...

10/29/2021 · Some Remarks on Bayesian Multiple Hypothesis Testing
We consider a Bayesian multiple hypothesis problem with independent and id...

11/29/2021 · Hypothesis Testing of Mixture Distributions using Compressed Data
In this paper we revisit the binary hypothesis testing problem with one-...

12/31/2017 · On Binary Distributed Hypothesis Testing
We consider the problem of distributed binary hypothesis testing of two ...

06/15/2023 · Optimal Hypothesis Testing Based on Information Theory
There is a major problem in the current theory of hypothesis testing in...

06/16/2019 · Designing Test Information and Test Information in Design
DeGroot (1962) developed a general framework for constructing Bayesian m...

03/22/2020 · Hypothesis Testing Approach to Detecting Collusion in Competitive Environments
There is growing concern about the possibility for tacit collusion using...
