A Bayesian Multiple Testing Paradigm for Model Selection in Inverse Regression Problems

07/15/2020
by Debashis Chatterjee, et al.

In this article, we propose a novel Bayesian multiple testing formulation for model and variable selection in inverse setups, judiciously embedding the idea of inverse reference distributions proposed by Bhattacharya (2013) in a mixture framework consisting of the competing models. We develop the theory and methods in a general context encompassing parametric and nonparametric competing models, dependent data, and misspecification. Our investigation shows that, asymptotically, the multiple testing procedure almost surely selects the best possible inverse model, namely the one that minimizes the minimum Kullback-Leibler divergence from the true model. We also show that the error rates, namely versions of the false discovery rate and the false non-discovery rate, converge to zero almost surely as the sample size goes to infinity. Asymptotic α-control of the false discovery rate versions, and its impact on the convergence of the false non-discovery rate versions, are also investigated. Our simulation experiments involve small-sample selection among inverse Poisson log regression and inverse geometric logit and probit regression, where the regressions are either linear or based on Gaussian processes; variable selection is also considered. Our multiple testing results are very encouraging, in that the best models are selected in all the non-misspecified and misspecified cases.
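To fix ideas, the flavor of a Bayesian multiple testing decision rule with FDR control can be sketched in a toy example. This is not the paper's procedure: the posterior probabilities below are purely illustrative stand-ins for the mixture-model posteriors the authors derive, and the selection rule shown is a generic posterior-expected-FDR criterion, hedged as an assumption.

```python
import numpy as np

# Hypothetical posterior probabilities that each of 5 competing models is
# "closest" (in Kullback-Leibler divergence) to the true model.  In the
# paper these would arise from the mixture of competing models; here they
# are made-up numbers for illustration only.
post = np.array([0.02, 0.05, 0.90, 0.02, 0.01])

def bayesian_fdr_selection(post, alpha=0.10):
    """Greedily select models while the posterior expected false
    discovery rate of the selection stays below alpha.  This is a
    generic Bayesian FDR rule, not the paper's exact criterion."""
    order = np.argsort(post)[::-1]       # most probable models first
    selected = []
    for i in order:
        candidate = selected + [i]
        # posterior expected FDR: average posterior probability of a
        # false discovery among the selected models
        efdr = np.mean(1.0 - post[candidate])
        if efdr <= alpha:
            selected = candidate
        else:
            break
    return sorted(selected)

print(bayesian_fdr_selection(post))  # only model index 2 survives: [2]
```

With these illustrative posteriors, only the dominant model clears the α = 0.10 threshold; adding any second model would push the posterior expected FDR far above α, which mirrors how such rules discard weakly supported competitors.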

