Minimax optimal testing by classification

06/19/2023, by Patrik Róbert Gerber, et al.

This paper considers an ML-inspired approach to hypothesis testing known as classifier, or classification-accuracy, testing (CAT). In CAT, one first trains a classifier by feeding it labeled synthetic samples generated from the null and alternative distributions, and then uses this classifier to predict the labels of the actual data samples. The method is widely used in practice when the null and alternative are specified only via simulators (as in many scientific experiments). We study goodness-of-fit, two-sample (TS), and likelihood-free hypothesis testing (LFHT), and show that CAT achieves (near-)minimax optimal sample complexity, in both the dependence on the total-variation (TV) separation ε and the probability of error δ, across a variety of non-parametric settings: discrete distributions, d-dimensional distributions with a smooth density, and the Gaussian sequence model. In particular, we settle the high-probability sample complexity of LFHT for each of these classes. As another highlight, we recover the minimax optimal complexity of TS over discrete distributions, recently established by Diakonikolas et al. (2021). The corresponding CAT simply compares empirical frequencies on the first half of the data and rejects the null when the classification accuracy on the second half is better than random.
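For intuition, here is a minimal Python sketch (not from the paper) of the discrete two-sample CAT described in the last sentence: empirical frequencies estimated on the first half of each sample define a plug-in classifier, and the null is rejected when that classifier beats random guessing on the held-out second half. The function name cat_two_sample_test and the rejection margin `margin` are illustrative assumptions; in practice the threshold would be calibrated, e.g., via a concentration bound or a permutation procedure.

```python
import numpy as np

def cat_two_sample_test(x, y, num_bins, margin):
    """Discrete two-sample test by classification accuracy (sketch).

    x, y   : 1-D integer arrays of samples from P and Q, supported
             on {0, ..., num_bins - 1}.
    margin : how far above 1/2 the held-out accuracy must be to
             reject the null P == Q (calibration left to the user,
             e.g., a concentration bound of order 1/sqrt(n)).
    """
    n, m = len(x) // 2, len(y) // 2

    # "Train": empirical frequencies on the first half of each sample.
    p_hat = np.bincount(x[:n], minlength=num_bins) / n
    q_hat = np.bincount(y[:m], minlength=num_bins) / m

    # Plug-in classifier: label a bin "from P" iff p_hat >= q_hat there.
    predict_p = p_hat >= q_hat

    # "Test": classification accuracy on the held-out second halves.
    correct = predict_p[x[n:]].sum() + (~predict_p[y[m:]]).sum()
    total = (len(x) - n) + (len(y) - m)

    # Under the null, the accuracy concentrates around 1/2.
    return correct / total > 0.5 + margin

# Example: P uniform on 10 bins, Q concentrated on the first 5 bins.
rng = np.random.default_rng(0)
x = rng.integers(0, 10, 2000)
y = rng.integers(0, 5, 2000)
print(cat_two_sample_test(x, y, num_bins=10, margin=0.05))  # True
```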
