Model averaging for robust extrapolation in evidence synthesis

05/28/2018
by   Christian Röver, et al.

Extrapolation from a source to a target, e.g., from adults to children, is a promising approach to utilizing external information when data are sparse. In the context of meta-analysis, one is commonly faced with a small number of studies, while potentially relevant additional information may also be available. Here we describe a simple extrapolation strategy using heavy-tailed mixture priors for effect estimation in meta-analysis, which effectively results in a model-averaging technique. The described method is robust in the sense that a potential prior-data conflict, i.e., a discrepancy between source and target data, is explicitly anticipated. The aim of this paper is to develop a solution for this particular application, to showcase the ease of implementation by providing R code, and to demonstrate the robustness of the general approach in simulations.
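The core mechanism behind such robust mixture priors can be illustrated with a minimal sketch (this is an illustration of the general idea, not the authors' R implementation): the prior on the effect is a two-component mixture of an informative component derived from the source data and a wide "vague" component, and under a normal likelihood the posterior mixture weights adapt automatically, so a prior-data conflict shifts weight away from the informative component. All function names and parameter values below are hypothetical choices for the example.

```python
import math

def norm_pdf(x, mu, sd):
    """Density of a normal distribution N(mu, sd^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def mixture_posterior(y, se, mu_inf, tau_inf, w_inf, mu_vague, tau_vague):
    """Posterior for a two-component normal mixture prior under a
    normal likelihood y ~ N(theta, se^2).

    Component 1: informative prior N(mu_inf, tau_inf^2), e.g. extrapolated
    from source (adult) data, with prior weight w_inf.
    Component 2: vague component N(mu_vague, tau_vague^2) standing in for
    the heavy tails, with weight 1 - w_inf.
    Returns a list of (posterior weight, posterior mean, posterior sd)
    for the two components.
    """
    components = [(w_inf, mu_inf, tau_inf), (1.0 - w_inf, mu_vague, tau_vague)]
    posterior = []
    for w, mu, tau in components:
        # Marginal likelihood of y under this component: N(mu, tau^2 + se^2);
        # this is what re-weights the mixture (Bayes' theorem on components).
        ml = norm_pdf(y, mu, math.sqrt(tau ** 2 + se ** 2))
        # Conjugate normal-normal update within the component.
        prec = 1.0 / tau ** 2 + 1.0 / se ** 2
        mu_post = (mu / tau ** 2 + y / se ** 2) / prec
        posterior.append((w * ml, mu_post, math.sqrt(1.0 / prec)))
    z = sum(w for w, _, _ in posterior)
    return [(w / z, m, s) for w, m, s in posterior]
```

For example, with an informative prior centered at 0 and an observed target estimate close to 0, nearly all posterior weight stays on the informative component; if the target estimate is several standard errors away, the weight collapses onto the vague component, which is the model-averaging behavior that makes the approach robust to prior-data conflict.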
