
Improving Power by Conditioning on Less in Post-selection Inference for Changepoints

by Rachel Carrington, et al.

Post-selection inference has recently been proposed as a way of quantifying uncertainty about detected changepoints. The idea is to run a changepoint detection algorithm, and then re-use the same data to perform a test for a change near each of the detected changes. By defining the p-value for the test appropriately, so that it is conditional on the information used to choose the test, this approach produces valid p-values. We show how to improve the power of these procedures by conditioning on less information. This gives rise to an ideal selective p-value that is intractable but can be approximated by Monte Carlo. We show that this procedure produces valid p-values for any Monte Carlo sample size, and empirically that a noticeable increase in power is possible even with very modest Monte Carlo sample sizes. Our procedure is easy to implement given existing post-selection inference methods: we simply generate perturbations of the data set and re-apply the post-selection method to each of them. On genomic data consisting of human GC content, our procedure increases the number of significant changepoints detected (e.g., from 17 to 27) compared to existing methods.
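To make the general idea concrete, the sketch below illustrates a Monte Carlo approximation of a selective p-value for a single changepoint. This is a schematic illustration only, not the authors' exact procedure: it uses a simple CUSUM-style detector, samples pseudo-datasets under an assumed no-change null, and conditions on the detector selecting (approximately) the same changepoint. The function names, the `window` tolerance, and the use of the overall sample mean as the null mean are all assumptions for illustration. The `+1` correction in the final ratio is what keeps the estimated p-value valid for any Monte Carlo sample size.

```python
import numpy as np

def detect_changepoint(y):
    """Single-changepoint detector: maximize a CUSUM-style statistic
    for a difference in means over all candidate split points."""
    n = len(y)
    best_tau, best_stat = None, -np.inf
    for tau in range(1, n):
        left, right = y[:tau], y[tau:]
        stat = abs(left.mean() - right.mean()) * np.sqrt(tau * (n - tau) / n)
        if stat > best_stat:
            best_tau, best_stat = tau, stat
    return best_tau, best_stat

def monte_carlo_selective_pvalue(y, sigma, n_mc=500, window=2, seed=0):
    """Monte Carlo approximation of a selective p-value (illustrative).
    Perturbed/null datasets are generated, the detector is re-applied to
    each, and only samples on which the same changepoint (up to `window`)
    is selected contribute -- this is the conditioning event."""
    rng = np.random.default_rng(seed)
    n = len(y)
    tau_obs, stat_obs = detect_changepoint(y)
    exceed, kept = 0, 0
    for _ in range(n_mc):
        # Null sample: constant mean plus Gaussian noise (an assumption).
        y_star = y.mean() + sigma * rng.standard_normal(n)
        tau, stat = detect_changepoint(y_star)
        if abs(tau - tau_obs) <= window:  # same selection event
            kept += 1
            exceed += int(stat >= stat_obs)
    # The +1 in numerator and denominator keeps the p-value valid
    # for any finite Monte Carlo sample size.
    return (1 + exceed) / (1 + kept)
```

Conditioning on less information corresponds, in this sketch, to widening the acceptance event (e.g., a larger `window`, or conditioning only on a change being detected nearby rather than on the full selection history), which retains more Monte Carlo samples and so tends to increase power.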

