Kernel Stein Tests for Multiple Model Comparison

10/27/2019 ∙ by Jen Ning Lim et al.

We address the problem of non-parametric multiple model comparison: given l candidate models, decide whether each candidate is as good as the best one(s) or worse. We propose two statistical tests, each controlling a different notion of decision error. The first test, building on the post-selection inference framework, provably controls the number of best models that are wrongly declared worse (false positive rate). The second test is based on multiple-testing correction and controls the proportion of models that are declared worse but are in fact as good as the best (false discovery rate). We prove that, under appropriate conditions, the first test can yield a higher true positive rate than the second. Experimental results on toy and real problems (CelebA, Chicago Crime data) show that both tests achieve high true positive rates with well-controlled error rates. By contrast, the naive approach of choosing the model with the lowest score, without correction, leads to more false positives.
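To make the setup concrete, below is a minimal sketch (not the authors' released implementation) of the core idea behind the second test: estimate each candidate model's squared kernel Stein discrepancy (KSD) on the same sample via a U-statistic, pick the empirically best model, and test every other model against it with one-sided z-tests under a multiple-testing correction. The Bonferroni correction, median-heuristic bandwidth, Gaussian (RBF) kernel, and all function names here are illustrative assumptions; the paper's actual procedures (in particular the post-selection inference of the first test) are more involved.

```python
# Illustrative sketch only: KSD-based multiple model comparison with a
# Bonferroni correction standing in for the paper's correction procedure.
import numpy as np
from scipy.stats import norm

def stein_kernel(X, score, sigma2):
    # Gram matrix of the Langevin Stein kernel h_p(x_i, x_j) induced by an
    # RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma2)).
    n, d = X.shape
    S = score(X)                                 # (n, d) values of grad log p
    diff = X[:, None, :] - X[None, :, :]         # x_i - x_j, shape (n, n, d)
    sq = np.sum(diff ** 2, axis=2)
    K = np.exp(-sq / (2.0 * sigma2))
    T1 = S @ S.T                                 # s(x_i)^T s(x_j)
    T2 = np.einsum('id,ijd->ij', S, diff) / sigma2   # s(x_i)^T (x_i - x_j)
    T3 = np.einsum('jd,ijd->ij', S, -diff) / sigma2  # s(x_j)^T (x_j - x_i)
    T4 = d / sigma2 - sq / sigma2 ** 2           # trace term of the RBF kernel
    return K * (T1 + T2 + T3 + T4)

def ksd_multiple_comparison(X, scores, alpha=0.05):
    # Rank l candidate models by their KSD^2 U-statistics on one sample, then
    # test each model against the empirical best (one-sided z-tests), with a
    # Bonferroni correction over the l - 1 comparisons.
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    sigma2 = np.median(sq[sq > 0])               # median-heuristic bandwidth
    Hs = [stein_kernel(X, s, sigma2) for s in scores]
    ests = [(H.sum() - np.trace(H)) / (n * (n - 1)) for H in Hs]
    best = int(np.argmin(ests))
    worse = []
    for j in range(len(scores)):
        if j == best:
            continue
        D = Hs[j] - Hs[best]                     # paired difference of Stein kernels
        np.fill_diagonal(D, 0.0)
        u = D.sum() / (n * (n - 1))              # estimate of KSD_j^2 - KSD_best^2
        w = D.sum(axis=1) / (n - 1)              # witness values g(x_i) = E_j[D_ij]
        var = 4.0 * np.var(w, ddof=1) + 1e-12    # asymptotic variance of sqrt(n) u
        p = norm.sf(np.sqrt(n) * u / np.sqrt(var))   # H1: model j is worse
        if p < alpha / (len(scores) - 1):
            worse.append(j)
    return best, worse

# Toy usage: data from N(0, I); Gaussian candidates N(m, I), whose score
# function is s(x) = -(x - m). The mean-2 model should be declared worse.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
scores = [lambda x, m=m: -(x - m) for m in (0.0, 0.5, 2.0)]
print(ksd_multiple_comparison(X, scores))
```

Note that because the "best" model is itself selected from the data, a naive test of this form does not account for the selection step; correcting for it is precisely what the paper's first (post-selection inference) test addresses.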

Code Repositories

multiscale-features

AISTATS 2020. More Powerful Selective Kernel Tests for Feature Selection.