Confidence intervals with maximal average power

05/10/2019
by Christian Bartels, et al.

In this paper, we propose a frequentist testing procedure that maintains a defined coverage and is optimal in the sense that it gives maximal power to distinguish between hypotheses sampled from a pre-specified distribution (the prior distribution). Selecting a prior distribution allows the decision rule to be tuned: power increases if the true data-generating distribution is compatible with the prior, and, similarly, the resulting confidence intervals are more precise if the observed data are compatible with the prior. This comes at the cost of reduced power and wider confidence intervals if the data-generating distribution or the observed data are incompatible with the prior. The testing procedure is constructed from the Bayesian posterior probability distribution. The proposed approach is simple to implement and does not rely on minimax optimization. We illustrate the approach for a binomial experiment, which is simple enough that the decision sets can be displayed in figures, facilitating an intuitive understanding.
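
As a rough illustration of the kind of construction described in the abstract, the sketch below builds acceptance regions for a Binomial(n, θ) experiment on a grid of θ values and inverts them into confidence intervals. The function name binomial_confidence_intervals, the Beta(prior_a, prior_b) prior, and the choice to rank outcomes by the posterior density at θ are illustrative assumptions, not the paper's exact average-power-optimal criterion; the sketch only shows the general recipe of ordering outcomes, accumulating probability until the 1 − α coverage constraint is met, and collecting the θ values whose acceptance region contains the observed outcome.

```python
import numpy as np
from scipy import stats


def binomial_confidence_intervals(n, alpha=0.05, grid_size=501,
                                  prior_a=1.0, prior_b=1.0):
    """Sketch: coverage-preserving acceptance regions for Binomial(n, theta).

    For each theta on a grid, outcomes x = 0..n are added to the acceptance
    region in decreasing order of the Beta(prior_a, prior_b) posterior density
    at theta (an assumed ordering criterion, for illustration only), until the
    region has probability >= 1 - alpha under theta.  The confidence set for
    an observed x is then every theta whose region contains x.
    """
    thetas = np.linspace(1e-6, 1 - 1e-6, grid_size)
    xs = np.arange(n + 1)
    accept = np.zeros((grid_size, n + 1), dtype=bool)

    for i, th in enumerate(thetas):
        pmf = stats.binom.pmf(xs, n, th)                           # P(x | theta)
        post = stats.beta.pdf(th, prior_a + xs, prior_b + n - xs)  # posterior density at theta for each x
        order = np.argsort(-post)                                  # most "supported" outcomes first
        cum = np.cumsum(pmf[order])
        k = np.searchsorted(cum, 1 - alpha) + 1                    # smallest region reaching 1 - alpha
        accept[i, order[:k]] = True

    # Confidence interval for each observed x: range of accepted theta values.
    return {x: (thetas[accept[:, x]].min(), thetas[accept[:, x]].max())
            for x in xs if accept[:, x].any()}


if __name__ == "__main__":
    cis = binomial_confidence_intervals(n=10)
    print(cis[3])  # interval for 3 successes out of 10
```

Swapping in a different ranking statistic changes which outcomes are accepted first, and hence the power profile, while the coverage constraint keeps the intervals valid by construction (up to the grid resolution).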
