
Sub-Gaussian Error Bounds for Hypothesis Testing

01/01/2021
by Yan Wang, et al.

We interpret likelihood-based test functions from a geometric perspective in which the Kullback-Leibler (KL) divergence quantifies the distance from one distribution to another. Such a test function can be viewed as a sub-Gaussian random variable, and we propose a principled way to compute its sub-Gaussian norm. An error bound for binary hypothesis testing then follows in terms of the sub-Gaussian norm and the KL divergence; this bound is more informative than Pinsker's bound when the significance level is prescribed. For M-ary hypothesis testing, we also derive an error bound that complements Fano's inequality, being more informative when the number of hypotheses or the sample size is not large.
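As background, the quantities named in the abstract can be stated with their standard textbook definitions; the following sketch uses common conventions and is not taken from the paper's own notation or results.

% KL divergence of P from Q (assuming P is absolutely continuous w.r.t. Q):
D_{\mathrm{KL}}(P \,\|\, Q) = \mathbb{E}_{P}\!\left[\log \frac{\mathrm{d}P}{\mathrm{d}Q}\right]

% Sub-Gaussian (Orlicz \psi_2) norm of a random variable X:
\|X\|_{\psi_2} = \inf\{\, t > 0 : \mathbb{E}\, e^{X^{2}/t^{2}} \le 2 \,\}

% Pinsker's inequality, bounding total variation by KL divergence:
\mathrm{TV}(P, Q) \le \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(P \,\|\, Q)}

% Fano's inequality for M-ary testing with a uniform prior, where P_e is the
% probability of error and I(X;Y) the mutual information between the hypothesis
% index X and the observation Y:
P_e \ge 1 - \frac{I(X;Y) + \log 2}{\log M}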

