
Boosting Joint Models for Longitudinal and Time-to-Event Data

by   Elisabeth Waldmann, et al.

Joint models for longitudinal and time-to-event data have gained a lot of attention in recent years, as they are a helpful technique for approaching a common data structure in clinical studies, where longitudinal outcomes are recorded alongside event times. The two processes are often linked, and the outcomes should therefore be modeled jointly to prevent the potential bias introduced by independent modelling. Joint models are commonly estimated by likelihood-based expectation maximization or Bayesian approaches, frameworks in which variable selection is problematic and which do not readily extend to high-dimensional data. In this paper, we propose a boosting algorithm that tackles these challenges: it simultaneously estimates the predictors of a joint model and automatically selects the most influential variables, even in high-dimensional data situations. We analyse the performance of the new algorithm in a simulation study and apply it to the Danish cystic fibrosis registry, which collects longitudinal lung-function data on patients with cystic fibrosis together with data on the onset of pulmonary infections. This is the first approach to combine state-of-the-art algorithms from the field of machine learning with the model class of joint models, providing a fully data-driven mechanism for selecting variables and predictor effects in a unified framework of boosting joint models.
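The variable-selection mechanism the abstract refers to is the hallmark of component-wise (statistical) gradient boosting: in each iteration, a simple base-learner is fitted to every candidate covariate separately, and only the best-fitting one receives a small update, so covariates that are never selected keep a zero coefficient. The following is a minimal illustrative sketch of that generic mechanism for squared-error loss, not the authors' joint-model algorithm; all function and variable names are our own.

```python
import numpy as np

def componentwise_l2_boost(X, y, n_steps=100, nu=0.1):
    """Component-wise L2 gradient boosting (illustrative sketch).

    In each step, a univariate least-squares base-learner is fitted to the
    current residuals for every covariate; only the covariate with the
    smallest residual sum of squares is updated by a small step nu.
    Covariates that are never chosen keep a zero coefficient, which is
    what yields the automatic variable selection described in the paper.
    """
    n, p = X.shape
    beta = np.zeros(p)
    offset = y.mean()            # start from the intercept (offset) model
    fit = np.full(n, offset)
    for _ in range(n_steps):
        resid = y - fit          # negative gradient of the squared-error loss
        # univariate least-squares coefficient for each covariate
        coefs = X.T @ resid / (X ** 2).sum(axis=0)
        # residual sum of squares of each candidate base-learner
        rss = ((resid[:, None] - X * coefs) ** 2).sum(axis=0)
        j = int(np.argmin(rss))  # best base-learner in this iteration
        beta[j] += nu * coefs[j]
        fit += nu * coefs[j] * X[:, j]
    return offset, beta
```

The number of boosting steps (`n_steps`) acts as the main tuning parameter: stopping early yields sparser, shrunken coefficient vectors, which is the behaviour exploited for high-dimensional settings.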



