A Robust Bayesian Copas Selection Model for Quantifying and Correcting Publication Bias

05/06/2020
by Ray Bai, et al.

The validity of conclusions from meta-analysis is potentially threatened by publication bias. Most existing procedures for correcting publication bias assume normality of the between-study heterogeneity. However, this assumption may not hold, and the performance of these procedures can be highly sensitive to departures from normality. Further, there exist few measures to quantify the magnitude of publication bias based on selection models. In this paper, we address both of these issues. First, we introduce the robust Bayesian Copas (RBC) selection model. This model serves as a default prior that requires minimal problem-specific tuning, relaxes strong distributional assumptions on the between-study heterogeneity, and facilitates automatic inference of the unknown parameters. Second, we develop a new measure to quantify the magnitude of publication bias. Our measure is easy to interpret, and it takes advantage of the natural estimation uncertainty afforded by the posterior distribution. We illustrate our proposed approach through simulation studies and analyses of real data sets. Our methods are implemented in the publicly available R package RobustBayesianCopas.
