Longitudinal Gaussian process regression (R package)
Identifying risk factors from longitudinal data requires statistical tools that are not restricted to linear models, yet provide interpretable associations between different types of covariates and a response variable. Here, we present a widely applicable and interpretable probabilistic machine learning method for nonparametric longitudinal data analysis using additive Gaussian process regression. We demonstrate that it outperforms previous longitudinal modeling approaches and provides useful novel features, including the ability to account for uncertainty in disease effect times as well as heterogeneity in their effects.
Biological and medical studies often collect observational longitudinal data, where repeated measurements of the same individuals are made at several time points. This is an indispensable study design for examining disease development and other temporal phenomena, and has been leveraged for example in proteomics, metagenomics [17, 14], other -omics studies and even single-cell transcriptomics [13, 16]. The measured response variable of interest can be continuous (such as the abundance of a protein), discrete (such as the number of methylated reads at a certain position), or binary (such as patient condition). Often also a number of explanatory variables – or covariates – are measured for each subject and measurement time point. These can be categorical (such as sex, location, or whether the subject is diagnosed with a disease or not) or continuous (such as age, time from disease initiation or blood pressure), and inferring which covariates affect the response variable, or are potential risk factors, is a medically motivated question. A large body of literature has focused on statistical analysis of longitudinal data, for the purposes of temporal trend analysis, predictive modeling or studying covariate effects. Observations of one individual are intercorrelated, and specialized statistical techniques are therefore required to draw sensible conclusions. Generalized linear mixed (GLM) models, which contain both fixed and random effects, have become the standard workhorse for longitudinal data analysis due to their general applicability, ease of use, interpretability and software support. The R package lme4 has gained high popularity and become a default choice for fitting GLM models. These models, however, require specifying a parametric (linear) form for the covariate effects, and provide biased inferences when the true effects are nonlinear. Moreover, GLM models cannot capture nonstationary effects that can occur rapidly, e.g. near the disease initiation time. See Supplementary material for more background information and related research.
In this work we propose a Bayesian nonparametric method, lgpr, for modeling longitudinal data using additive Gaussian processes (GPs). It is designed for accurate and interpretable learning of possibly nonlinear or nonstationary effects of individual covariates or their interactions. We demonstrate that our method improves significantly on previous longitudinal analysis methods and provides novel features for the longitudinal modeling toolkit: lgpr supports non-Gaussian observation models, includes kernels for interpretable modeling of various types of covariate effects, offers unique features that can identify heterogeneous biomarkers detectable only in a subset of patients, and provides the possibility to model uncertainty in disease effect times. Furthermore, lgpr can compute covariate relevances for different types of covariates and perform accurate covariate selection. We have implemented lgpr as an R package that can be used as a plug-in replacement for lme4. The new tool is summarized in Figure 1a.
In our approach, we use GPs to model the longitudinal response y in a Bayesian manner by defining a prior directly for the underlying unknown signal f, which is linked to y through a likelihood function motivated by the statistical observation model. Properties of a GP are defined by its kernel function, which encodes the covariance of function values at different input points. Additive GPs assume that the signal (or, equivalently, the kernel) is additive, f = f_1 + ... + f_J, and lgpr specifically assumes that each component f_j depends only on a single covariate or a pair of covariates (Figure 1b-c). For example, for a model with age and sex as covariates, lgpr can model the signal as f = f_1(age) + f_2(age, sex), where the first component represents a shared age effect and the second represents sex-specific deviation from it. This additive model structure allows one to define appropriate kernels for different covariates (discrete, continuous or interaction), and to retrieve interpretable covariate effects after model fitting. For components like f_2(age, sex), we use a zero-sum kernel (Supplementary material: Figure S1), which is similar to a previously proposed kernel and allows GP modeling that separates shared and category-specific effects.
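The additive structure with a zero-sum kernel can be sketched generically as follows. This is a minimal NumPy illustration, not the lgpr implementation; the kernel forms and hyperparameter values are illustrative.

```python
import numpy as np

def eq_kernel(x1, x2, alpha=1.0, ell=1.0):
    """Exponentiated-quadratic kernel for a continuous covariate (e.g. age)."""
    d = x1[:, None] - x2[None, :]
    return alpha**2 * np.exp(-0.5 * (d / ell) ** 2)

def zerosum_kernel(z1, z2, n_categories):
    """Zero-sum kernel for a categorical covariate with M categories:
    1 when the categories match, -1/(M - 1) otherwise, which constrains
    the category-specific effects to sum to zero across categories."""
    same = z1[:, None] == z2[None, :]
    return np.where(same, 1.0, -1.0 / (n_categories - 1))

# f = f_1(age) + f_2(age, sex): a shared age effect plus a sex-specific
# deviation; the second component's kernel is the elementwise product of
# the zero-sum kernel (sex) and a kernel in age.
age = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
sex = np.array([0, 0, 0, 1, 1, 1])
K = eq_kernel(age, age) + zerosum_kernel(sex, sex, 2) * eq_kernel(age, age, ell=2.0)
```

Both summands are positive semidefinite (an elementwise product of PSD matrices is PSD by the Schur product theorem), so K is a valid covariance, and the posterior of each additive component can be inspected separately after fitting.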
Parameters of an lgpr model include the observation model parameters and the kernel (hyper)parameters, among others, and they are assigned robust priors that regularize model fitting (Supplementary material: Figure S2). Bayesian model inference is carried out using the dynamic Hamiltonian Monte Carlo sampler [6, 2], as implemented in the high-performance statistical computation framework Stan. We develop a rigorous and interpretable probabilistic covariate selection method, which is based on fitting only one model with all covariates and estimating the proportion of variance explained by each signal component and noise. A detailed description of the modeling and inference methodology is given in Supplementary material. Using simulated data, we show that lgpr results in better covariate selection accuracy than linear mixed modeling with significance testing (Figure 2a, Supplementary material: Figure S3a) and an earlier Gaussian process regression method, LonGP (Supplementary material: Figure S3b).
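The variance-based selection idea can be sketched as follows. This is a hypothetical simplification assuming posterior component values are available at the observed points; lgpr's actual computation, including how posterior draws are handled, is detailed in its Supplementary material, and the threshold below is illustrative.

```python
import numpy as np

def component_relevances(components, noise_var):
    """Sketch of variance-based relevance: each signal component's share
    of the total variance (all components plus noise) at the data points.
    components: dict mapping component name -> values at observed points."""
    var = {name: np.var(f) for name, f in components.items()}
    total = sum(var.values()) + noise_var
    rel = {name: v / total for name, v in var.items()}
    rel["noise"] = noise_var / total
    return rel

rng = np.random.default_rng(0)
f_age = np.sin(np.linspace(0, 3, 50))       # strong, clearly varying component
f_sex = 0.05 * rng.standard_normal(50)      # weak component, mostly noise-level
rel = component_relevances({"age": f_age, "sex": f_sex}, noise_var=0.1)
# covariates whose relevance exceeds a threshold are selected
selected = [k for k, v in rel.items() if k != "noise" and v > 0.05]
```

Because all relevances sum to one, a single fitted model yields a full decomposition of the observed variation, avoiding the combinatorial model comparison that forward-selection approaches require.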
For computational convenience, GP regression usually assumes a continuous Gaussian observation model, which is not statistically appropriate when modeling the discrete count data commonly produced in -omics studies. A common workaround is to use the Gaussian observation model after first applying a variance-stabilizing transform, such as the log-transform, to the response variable, but this is not statistically justified and can lead to biased inferences. As a notable extension to the existing literature, our package implements additive GP modeling and covariate selection also in the case of a non-Gaussian observation model (Supplementary material). We have implemented exact inference under the Poisson, Bernoulli, binomial and negative binomial (NB) observation models, and extending the lgpr tool with other observation models is straightforward. We use simulated longitudinal count data to show that a negative binomial observation model gives better covariate selection accuracy than a Gaussian observation model with or without log-transforming the counts (Figure 2b).
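To make the count likelihood concrete: the negative binomial model handles overdispersed counts directly, with no transformation of the response. Below is one common mean-dispersion parameterization (an illustrative sketch; the latent signal f and the dispersion value are assumptions, not values from the paper).

```python
import numpy as np
from math import lgamma

def nb_logpmf(y, mu, phi):
    """Negative binomial log-probability with mean mu and dispersion phi;
    the variance is mu + mu**2 / phi, so smaller phi means more
    overdispersion relative to the Poisson."""
    return (lgamma(y + phi) - lgamma(phi) - lgamma(y + 1)
            + phi * np.log(phi / (phi + mu))
            + y * np.log(mu / (phi + mu)))

# log-likelihood of observed counts given a latent GP signal f:
# mu_i = exp(f_i) maps the unconstrained signal to a positive mean
f = np.array([0.5, 1.0, 1.5])
counts = [2, 3, 6]
loglik = sum(nb_logpmf(y, np.exp(fi), phi=5.0) for y, fi in zip(counts, f))
```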
Longitudinal studies often comprise a case and a control group, and it is common that a clinically determined time of disease initiation for each case individual is marked in the data. In order to reveal phenomena related to disease progression or to identify biomarkers, statistical modeling can utilize the disease-related age, i.e. the time from disease initiation or onset, as one covariate that can explain changes in the response variable. Disease effects can be rapid compared to other effects and can be expected to occur near the time of disease initiation, which is why a nonstationary GP kernel is justified for the disease-related age; this approach has been used in earlier work. We propose a new variance-masking kernel for more interpretable modeling of nonstationary disease effects (Supplementary material: Figure S4). The proposed approach separates the effect caused by the disease initiation from a possible baseline difference between cases and controls.
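The variance-masking idea can be illustrated generically: a stationary kernel in the disease-related age is scaled by a mask that is near zero long before initiation and near one after it. The sigmoid mask below is an assumed form for illustration; the exact mask and input warping used by lgpr are specified in its Supplementary material.

```python
import numpy as np

def eq_kernel(t1, t2, alpha=1.0, ell=1.0):
    d = t1[:, None] - t2[None, :]
    return alpha**2 * np.exp(-0.5 * (d / ell) ** 2)

def mask(t, steepness=2.0):
    """Illustrative variance mask: near 0 well before disease initiation
    (disease-related age t < 0) and near 1 after it."""
    return 1.0 / (1.0 + np.exp(-steepness * t))

def masked_kernel(t1, t2):
    # k(t, t') = m(t) k_eq(t, t') m(t'): scaling a valid kernel by a
    # deterministic function of each input keeps it positive semidefinite,
    # but makes the prior variance of the component nonstationary.
    return mask(t1)[:, None] * eq_kernel(t1, t2) * mask(t2)[None, :]

t = np.linspace(-5.0, 5.0, 11)   # disease-related age (0 = initiation)
K = masked_kernel(t, t)
```

Because the disease component has essentially zero prior variance before initiation, any baseline difference between cases and controls must be captured by a separate component, which is what makes the decomposition interpretable.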
An inherent problem that can confound the analysis of disease effects is that the disease initiation (or onset) time is difficult to determine exactly. For example, in type 1 diabetes (T1D), the presence of islet cell autoantibodies in the blood is the earliest known marker of disease initiation, but they can only be measured when the subject visits a doctor. In general, the detected disease initiation time can differ from the true initiation time, and the magnitude of this difference can vary across individuals and response variables. Our tool can account for uncertainty in the disease effect time, and provides an option for the user to set a prior for the effect times either globally or relative to the clinically determined onset or initiation time. This important feature could even reveal previously unknown biomarkers that predict the risk of developing a disease at a younger age than the known markers. Using simulated data, we show that modeling the effect time uncertainty improves the accuracy of detecting the disease effect (Figure 2c, Supplementary material: Figures S5-S6).
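One way to picture the effect-time uncertainty is to treat the true effect time as a latent variable with a prior anchored to the clinically observed onset. The exponential-lag prior below is a hypothetical choice for illustration only; in lgpr the user selects the prior, and the onset ages here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
observed_onset = np.array([4.0, 6.5, 5.2])   # clinically determined onset age per case
# assumed prior: the true effect time precedes detection by an
# exponentially distributed lag (a biomarker may change before diagnosis)
n_draws = 1000
lag = rng.exponential(scale=1.0, size=(n_draws, 3))
effect_time = observed_onset - lag           # draws of the uncertain effect time
# the disease-related age covariate is recomputed per draw, so inference
# for the nonstationary disease component averages over this uncertainty
age = 5.0
disease_age = age - effect_time
```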
Another challenge in biomedical studies is that many diseases, such as T1D, are heterogeneous, and disease-specific biomarkers are likely to be detectable in only a subset of the diagnosed individuals. Our package also includes unique features for modeling heterogeneous disease effects. The disease effect size is allowed to vary between individuals, yet the statistical power from all diagnosed subjects is utilized (Supplementary material: Figure S4). Using simulated data, we show that if the real effect is not present in all diagnosed individuals, the heterogeneous modeling approach improves covariate selection accuracy (Figure 2d, Supplementary material: Figures S7-S8). Furthermore, the inference results provide information about which case subjects are affected by the disease (Figure 2e,g,l, Supplementary material: Figure S7).
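The heterogeneity mechanism can be sketched as scaling the shared disease component by an individual-specific magnitude. This is a generic illustration, not lgpr's exact parameterization; in a Bayesian fit the magnitudes would receive a prior and be inferred jointly with the other parameters.

```python
import numpy as np

def eq_kernel(t1, t2, alpha=1.0, ell=1.0):
    d = t1[:, None] - t2[None, :]
    return alpha**2 * np.exp(-0.5 * (d / ell) ** 2)

def heterogeneous_kernel(subject, t, beta):
    """k((i,t),(j,t')) = beta_i * beta_j * k_base(t, t'): each diagnosed
    individual i has a magnitude beta_i in [0, 1]. beta_i ~ 0 means the
    disease effect is absent for that individual and beta_i ~ 1 a full
    effect, while all individuals still share one base kernel, so the
    statistical power of the whole case group is pooled."""
    b = beta[subject]
    return b[:, None] * b[None, :] * eq_kernel(t, t)

t = np.array([0.0, 1.0, 0.0, 1.0])   # disease-related age per observation
subject = np.array([0, 0, 1, 1])     # subject index per observation
beta = np.array([1.0, 0.0])          # subject 1 shows no disease effect
K = heterogeneous_kernel(subject, t, beta)
```

The posterior over the magnitudes then indicates which case subjects actually exhibit the effect, mirroring the individual-level results reported in Figure 2l.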
We used lgpr to analyze a longitudinal data set from a recent T1D study, where the longitudinal profiles of plasma proteins were measured from 11 cases and 10 controls at nine time points that span the initiation of the disease pathogenesis. We performed the analysis for 1538 proteins separately, using both the homogeneous and the heterogeneous disease effect modeling approach. In total, the homogeneous model finds 38 and the heterogeneous model finds 66 proteins associated with the disease-related age covariate, with an intersection of 20 proteins. Covariate selection results for all proteins are included in Supplementary Tables S1-S2. Figure 2f shows the normalized measurements for protein Q8WZA1 (O-linked-mannose beta-1,2-N-acetylglucosaminyltransferase 1) and Figures 2h-i show the inferred covariate effects using the two different disease effect modeling approaches. The new heterogeneous modeling approach is seen to detect a stronger average disease effect, because it allows the effect sizes to vary between individuals. Moreover, the posterior distributions of the individual-specific disease effect magnitude parameters (Figure 2l) reveal four individuals (Figure 2g) that experience a strong disease effect near the seroconversion time.
The lgpr tool provides several important novel features for modeling longitudinal data, and offers a good balance between flexibility and interpretability. We have shown that the interpretable kernels, heterogeneous disease modeling, uncertainty modeling of effect times, and covariate selection strategy of lgpr improve significantly on previous longitudinal modeling methods. The tool has an intuitive syntax, and thus provides an easy transition from the standard tools to Bayesian nonparametric longitudinal regression. It is generally applicable, as the data can involve irregular sampling intervals or different numbers of measurement points across individuals, and it enjoys the state-of-the-art posterior sampling efficiency and diagnostics offered by Stan. Moreover, many types of response variables that are common in postgenomic studies (continuous, discrete, binary, proportion) can be modeled with a proper observation model. The user can choose from the numerous presented modeling options and set various parameter priors (which have widely applicable defaults). Overall, lgpr has the potential to become a standard tool for statistical analysis of longitudinal data.
See Supplementary material for a full description of the methodology used.
HL conceived the study. JT and HL designed the models and experiments with the help of AV. HM composed and derived the zero-sum kernel. JT developed the lgpr software and carried out the experiments. JT and HL wrote the manuscript with contributions from all authors. All authors read and approved the final manuscript.
This work has been supported by the Academy of Finland grant no. 292660.
The authors acknowledge the computational resources provided by the Aalto Science-IT project.
The longitudinal proteomics data analyzed in this study can be found online at https://doi.org/10.1016/j.jprot.2017.10.004 (raw data) and https://github.com/jtimonen/lgpr-usage (preprocessed data). The home page of the lgpr software is at https://jtimonen.github.io/lgpr-usage/index.html, and it contains installation instructions, tutorials, package manual, and code for reproducing the experiments of this manuscript. A static version of the R package source code is available at http://doi.org/10.5281/zenodo.3563476. The software supports all major operating systems, requires R version 3.4.0 or later and is licensed under the GPL 3 license.
Detailed description of methods and experiments. Overview of related research. Supplementary Figures S1-S8. Supplementary Tables S1-S2.