
Mismatched Binary Hypothesis Testing: Error Exponent Sensitivity

by   Parham Boroumand, et al.

We study the problem of mismatched binary hypothesis testing between i.i.d. distributions. We analyze the tradeoff between the pairwise error probability exponents when the actual distributions generating the observations differ from the distributions used in the likelihood ratio test, the sequential probability ratio test, and Hoeffding's generalized likelihood ratio test in the composite setting. When the real distributions lie within a small divergence ball around the test distributions, we characterize the deviation of each test's worst-case error exponent from the matched error exponent. In addition, we consider the case where an adversary tampers with the observations, again within a divergence ball of the observation type. We show that the tests are more sensitive to distribution mismatch than to adversarial observation tampering.
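The worst-case pairwise error exponent of a mismatched likelihood ratio test admits a Sanov-type characterization: minimize the KL divergence to the true distribution over all types that the mismatched test misclassifies. The following is a minimal numerical sketch of that optimization for the type-I exponent on a finite alphabet; the function name `mismatched_exponent` and the choice of threshold `gamma = 0` are illustrative assumptions, not the paper's notation.

```python
import numpy as np
from scipy.optimize import minimize

def kl(q, p):
    """KL divergence D(q || p) for distributions on a finite alphabet."""
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(q * np.log(q / p)))

def mismatched_exponent(p0, hat_p0, hat_p1, gamma=0.0):
    """Sketch of the type-I error exponent of the LRT built from the
    (possibly mismatched) test distributions hat_p0, hat_p1, when the
    data are really generated i.i.d. from p0:
        min_q D(q || p0)  s.t.  E_q[log(hat_p1 / hat_p0)] >= gamma.
    The constraint describes the types q that the test sends to H1."""
    llr = np.log(hat_p1 / hat_p0)  # per-symbol log-likelihood ratio of the test
    k = len(p0)

    def softmax(z):  # parameterize the probability simplex
        e = np.exp(z - z.max())
        return e / e.sum()

    cons = {"type": "ineq", "fun": lambda z: softmax(z) @ llr - gamma}
    res = minimize(lambda z: kl(softmax(z), p0),
                   np.zeros(k), constraints=cons)
    return res.fun

# Matched case: the test uses the true pair (p0, p1).
p0 = np.array([0.7, 0.3])
p1 = np.array([0.3, 0.7])
print(mismatched_exponent(p0, p0, p1))          # classical matched exponent
# Mismatched case: test distributions perturbed inside a small divergence ball.
hat_p0 = np.array([0.72, 0.28])
hat_p1 = np.array([0.28, 0.72])
print(mismatched_exponent(p0, hat_p0, hat_p1))  # worst-case exponent degrades
```

For this symmetric binary example with `gamma = 0`, the matched minimizer is the midpoint type `q = (0.5, 0.5)`, so the exponent equals `D((0.5, 0.5) || (0.7, 0.3)) ≈ 0.0872`; comparing the two printed values gives a direct numerical view of the sensitivity to mismatch that the paper quantifies analytically.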

