Model averaging for robust extrapolation in evidence synthesis

by Christian Röver et al.

Extrapolation from a source to a target, e.g., from adults to children, is a promising approach to utilizing external information when data are sparse. In the context of meta-analysis, one is commonly faced with a small number of studies, while potentially relevant additional information may also be available. Here we describe a simple extrapolation strategy using heavy-tailed mixture priors for effect estimation in meta-analysis, which effectively results in a model-averaging technique. The described method is robust in the sense that a potential prior-data conflict, i.e., a discrepancy between source and target data, is explicitly anticipated. The aim of this paper is to develop a solution for this particular application, to showcase the ease of implementation by providing R code, and to demonstrate the robustness of the general approach in simulations.
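The core mechanism behind such mixture priors can be illustrated with a minimal sketch: the informative prior component (derived from the source data) is mixed with a vague "robust" component, and the posterior mixture weight adapts to how well the target data agree with the source. The sketch below is not the authors' implementation (the paper provides R code; the heavy-tailed robust component is approximated here by a wide normal for conjugacy), and all names and parameter values are illustrative.

```python
import math

def norm_pdf(x, mu, sd):
    """Density of a normal distribution N(mu, sd^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def mixture_posterior(y, se, w, mu_info, sd_info, mu_vague, sd_vague):
    """Posterior for a normal mean under a two-component normal mixture prior.

    y, se          : target estimate and its standard error
    w              : prior weight of the informative (source-based) component
    mu_info, sd_info   : informative prior component (from source data)
    mu_vague, sd_vague : vague/robust prior component

    Returns the posterior weight of the informative component and the
    conjugately updated (mean, sd) of each component.
    """
    def update(mu_p, sd_p):
        # Conjugate normal-normal update: precision-weighted average.
        prec = 1.0 / sd_p**2 + 1.0 / se**2
        return (mu_p / sd_p**2 + y / se**2) / prec, math.sqrt(1.0 / prec)

    post_info = update(mu_info, sd_info)
    post_vague = update(mu_vague, sd_vague)

    # Marginal likelihood of y under each component: N(mu_p, sd_p^2 + se^2).
    ml_info = norm_pdf(y, mu_info, math.sqrt(sd_info**2 + se**2))
    ml_vague = norm_pdf(y, mu_vague, math.sqrt(sd_vague**2 + se**2))

    # Posterior mixture weight: Bayes' theorem on the component indicator.
    w_post = w * ml_info / (w * ml_info + (1.0 - w) * ml_vague)
    return w_post, post_info, post_vague
```

When the target estimate agrees with the source-based prior, the informative component gains weight and external information is borrowed; under a prior-data conflict, the weight collapses toward the vague component, which is what makes the averaging robust.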

