Functional Principal Component Analysis for Extrapolating Multi-stream Longitudinal Data

03/09/2019 · by Seokhyun Chung, et al. · University of Michigan

The advance of modern sensor technologies enables collection of multi-stream longitudinal data where multiple signals from different units are collected in real-time. In this article, we present a non-parametric approach to predict the evolution of multi-stream longitudinal data for an in-service unit through borrowing strength from other historical units. Our approach first decomposes each stream into a linear combination of eigenfunctions and their corresponding functional principal component (FPC) scores. A Gaussian process prior for the FPC scores is then established based on a functional semi-metric that measures similarities between streams of historical units and the in-service unit. Finally, an empirical Bayesian updating strategy is derived to update the established prior using real-time stream data obtained from the in-service unit. Experiments on synthetic and real-world data show that the proposed framework outperforms state-of-the-art approaches and can effectively account for heterogeneity as well as achieve high predictive accuracy.




1 Introduction

Among various environments where longitudinal data is gathered, the environment covered in this study is a multi-stream and real-time environment. Recent progress in sensor and data storage technologies has facilitated data collection from multiple sensors in real-time as well as the accumulation of historical signals from multiple similar units during their operational lifetime. This data structure where multiple signals across different units are collected is referred to as multi-stream longitudinal data. Examples include: vital health signals from patients collected through wearable devices Caldara et al. (2014); Magno et al. (2016), battery degradation signals from cars on the road Meeker & Hong (2014); Salamati et al. (2018) and energy usage patterns from different smart home appliances Hsu et al. (2017).

In this article, we propose an efficient approach to extrapolate multi-stream data for an in-service unit through borrowing strength from other historical units. An illustrative example is provided in Figure 1. In this figure, there are historical units and an in-service unit whose index is denoted by . Each unit has identical sensors from which each respective signal forms a stream. Multi-stream data from the in-service unit is partially observed up to the current time instance . Our goal is to extrapolate stream data from the in-service unit over a future period where is the time domain of interest.

Figure 1: Extrapolation of multi-stream longitudinal data for an in-service unit.

In mathematical notation, let and be the respective index sets for all available units including the in-service unit and the units in our historical dataset. For each unit , we have streams of data where . For the th unit, the history of observed data for a specific stream is denoted as , where represents the observation time points and represents the number of observations for signal of unit . The underlying principle of our model is borrowing strength from a sample of curves to predict individual trajectories over a future time period . Without loss of generality, throughout the article we focus on predicting stream , which we refer to as the target stream. Note that the target stream in Figure 1 is stream . To achieve this goal we exploit functional principal component analysis (FPCA), which is a non-parametric tool for functional data analysis Ramsay & Silverman (2005). Indeed, FPCA has recently drawn increased attention due to its flexibility, uncertainty quantification capabilities and the ability to handle sparse and irregular data Peng & Paul (2009); Di et al. (2009); Wang et al. (2016); Xiao et al. (2018). However, advances in FPCA fall short of handling multi-stream data and real-time predictions.

Our overall framework is summarized at the bottom of Figure 1. Specifically, historical signals , , from the target stream are decomposed into a linear combination of orthonormal eigenfunctions that form their functional space. The coefficients of this linear combination are called functional principal component (FPC) scores. Under the assumption that the target signal of the in-service unit lies in the same functional space, a proper estimate of the FPC scores associated with is required. Here, we propose to establish a prior on these FPC scores using information from streams . Specifically, a Gaussian process (GP) prior for the FPC scores of is built using a functional semi-metric that measures similarities of streams between historical units and the in-service unit . The underlying principle is that unit will exhibit more commonalities with historical units that exhibit similar trends in streams . For example, if stream denotes degradation trajectories and denotes external factors such as temperature, then will share more commonalities with the subset of historical signals , , degrading under similar external factors (similar temperatures). This approach allows us to address heterogeneity in the data. Lastly, an empirical Bayesian updating strategy is derived to update the established prior using real-time stream data obtained from the in-service unit.

2 Literature Review

There is extensive literature on the extrapolation of longitudinal signals under a single-stream setting. However, this literature has mainly focused on parametric models due to their computational efficiency and ease of implementation Gebraeel et al. (2005); Gebraeel & Pan (2008); Si et al. (2012, 2013); Kontar et al. (2017). Such models have been applied in healthcare, manufacturing and mobility applications, specifically to predict the remaining useful life of operational units. Unfortunately, in real-world applications, parametric modeling is vulnerable to model misspecification, and if the specified form is far from the truth, predictive results will be misleading. For instance, parametric representations are especially challenging when data is sparse or when the underlying physical and chemical theories guiding the process are unknown.

To address this issue, recent attempts at non-parametric approaches have been based on FPCA Zhou et al. (2011, 2012); Fang et al. (2015) or multivariate Gaussian processes Álvarez & Lawrence (2011); Saul et al. (2016); Kontar et al. (2018, 2019). These studies show that such non-parametric approaches outperform parametric models in cases where functional forms are complex and exhibit heterogeneity. Nevertheless, the foregoing works have dealt only with single-stream cases.

On the other hand, the few works that address multi-stream settings have focused on data fusion approaches. Data fusion in this case refers to aggregating all streams into a single stream using fusion mechanisms. In health-related applications, this fused stream is termed a health index, which is often derived through a weighted combination of the data streams Liu et al. (2013); Song & Liu (2018). Such methods require regularly sampled observations and enforce strong parametric assumptions. An alternative data fusion approach is multivariate FPCA Fang et al. (2017a, b). However, since data fusion methods operate by aggregating multi-streams into a single or a smaller group of streams, they are not capable of predicting individual stream trajectories and thus have limited applications.

Compared to current literature, our contribution can be summarized as follows. We propose an FPCA-based model that provides individualized predictions in a multi-stream environment. Our model is able to automatically account for heterogeneity in the data and screen the sharing of information between the in-service unit and units in our historical dataset. We then derive a computationally efficient Bayesian updating strategy to update predictions when data is collected in real-time. We demonstrate the advantageous features of our approach compared to state-of-the-art methods using both synthetic and real-world data.

The rest of this paper is structured as follows. In section 3, we briefly revisit FPCA. In section 4, we discuss our proposed model. Numerical experiments using synthetic data and real-world data are provided in section 5. Finally, section 6 discusses the computational complexity of our model. Technical proofs, detailed code and additional numerical results are available in the supplementary materials.

3 Brief Review of FPCA

From an FPCA perspective, longitudinal signals observed in a given time domain can be decomposed into a linear combination of orthonormal basis functions with corresponding FPC scores as coefficients. Therefore, FPCA can be regarded as a dimensionality-reduction method in which a signal corresponds to a vector in a functional space defined by the basis functions. These basis functions are referred to as eigenfunctions. Let us assume that the longitudinal signals, over a given time domain , are generated from a square-integrable stochastic process with mean and covariance defined by a positive semi-definite kernel for . Applying Mercer's theorem to , we have

where denotes the th eigenfunction of the linear Hilbert-Schmidt operator , ordered by the corresponding eigenvalues , . The eigenfunctions form an orthonormal basis of the Hilbert space . Following the Karhunen-Loève decomposition, the centered stochastic process can then be expressed as

where denotes the FPC score associated with . The scores are uncorrelated normal random variables with zero mean and variance ; that is, and , where denotes the Kronecker delta. Also, is additive Gaussian noise.

This idea of projecting signals onto a functional space spanned by eigenfunctions was first introduced by Rao (1958), in particular for growth curves. Basic principles Castro et al. (1986); Besse & Ramsay (1986) and theoretical characteristics Silverman (1996); Boente & Fraiman (2000); Kneip & Utikal (2001) were then developed. These ideas were extended to longitudinal data settings in the seminal work of Yao et al. (2005). Since then, FPCA has been applied and extended in a wide variety of applications, with multiple works tackling fast and efficient estimation of the underlying covariance surface Huang et al. (2008); Di et al. (2009); Peng & Paul (2009); Goldsmith et al. (2013); Xiao et al. (2016).

4 Extrapolation of Multivariate Longitudinal Data

4.1 FPCA for Signal Approximation

Now we discuss our proposed non-parametric approach for extrapolation of multi-stream longitudinal data. Hereon, unless there is ambiguity, we suppress subscripting the target stream with . Using historical signals , , we decompose the target stream as


where represents random effects characterizing stochastic deviations across different historical signals in stream and denotes additive noise. We assume and are independent. Through an FPCA decomposition, we have that . This decomposition is an infinite sum; in practice, however, only a small number of eigenvalues are significantly non-zero, and the FPC scores corresponding to the remaining near-zero eigenvalues are themselves approximately zero. Therefore, we approximate the decomposition as , where is the number of significantly non-zero eigenvalues.


Here we follow the standard estimation procedures in Di et al. (2009) and Goldsmith et al. (2018) to estimate the model parameters, where is obtained by local linear smoothers Fan & Gijbels (1996), while is selected to minimize a modified Akaike criterion. Now, given that the in-service unit lies in the functional space spanned by , our task is to find the individual distribution of using the partially observed multi-stream data from unit . Specifically, we aim to find .
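The paper selects the truncation level via a modified Akaike criterion; as a simpler stand-in, the sketch below retains enough components to explain a fixed fraction of total variance. The 99% threshold and the toy eigenvalue sequence are illustrative assumptions:

```python
import numpy as np

# Choose the truncation level K by a cumulative explained-variance
# threshold; an illustrative stand-in for the modified Akaike
# criterion used in the paper.
def choose_K(eigvals, threshold=0.99):
    eigvals = np.clip(np.asarray(eigvals, dtype=float), 0.0, None)
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, threshold) + 1)

K = choose_K([5.0, 2.0, 0.5, 0.05, 0.01])  # toy eigenvalue sequence
```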

4.2 Estimation for Prior Distribution of FPC scores via GP

Next, we estimate the prior distribution of based on the key premise that will behave more similarly to for some units whose signals for are similar to the corresponding signals of the in-service unit . To this end, for , we model a functional relationship between and for as


where for , , and .

The idea here is to model as a GP with a covariance function defined by a similarity measure between the observed signals, i.e., a functional similarity measure. Specifically, for any , the vector of FPC scores will follow a multivariate Gaussian distribution


where is constructed such that its th element is for , , , and denotes a covariance function defined as

in which is a semi-metric providing a similarity measure across functions, and and are hyperparameters for streams . For notational simplicity, we introduce .

To show the validity of the GP (4), we provide the following lemma.

Lemma 1.

The matrix corresponding to the covariance function is a valid covariance matrix.


See Section A.1 in the supplementary document. ∎
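Lemma 1 guarantees that a covariance built from the per-stream semi-metric distances is valid. A minimal sketch of such a construction, in which the squared-exponential form, the toy distance matrices and the hyperparameter values are illustrative assumptions:

```python
import numpy as np

# Sketch: build a GP covariance matrix over units from per-stream
# functional semi-metric distances. The squared-exponential form, the
# toy distance matrices and the hyperparameters are illustrative
# assumptions consistent with a semi-metric-based kernel.
def semi_metric_cov(D_list, sigma2, theta):
    K = np.zeros_like(D_list[0], dtype=float)
    for D, s2, th in zip(D_list, sigma2, theta):
        K += s2 * np.exp(-(D ** 2) / th)   # one additive term per stream
    return K

# Toy pairwise distances between 3 units for 2 auxiliary streams.
D1 = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.5], [2.0, 1.5, 0.0]])
D2 = 0.5 * D1
K = semi_metric_cov([D1, D2], sigma2=[1.0, 0.5], theta=[1.0, 1.0])
```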

One possible semi-metric that represents the similarity between two signals can be derived based on FPCA. Let denote the time domain for observations up to . Note that we define since the signals of the in-service unit are available only for . For , , and , the semi-metric based on FPCA for two signals and can be represented as


where is the th eigenfunction derived by FPCA on for and , and is the number of eigenfunctions. We point out that is the difference between the FPC scores of and associated with , which implies that this semi-metric measures the Euclidean distance between the two vectors composed of the corresponding FPC scores.
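In code, this FPCA-based semi-metric amounts to projecting both signals on a common set of estimated eigenfunctions and taking the Euclidean distance between the resulting score vectors. A sketch assuming a shared dense grid; the sine/cosine eigenfunctions are illustrative stand-ins for eigenfunctions estimated from data:

```python
import numpy as np

# Sketch: FPCA-based semi-metric between two signals on a shared dense
# grid; the distance is the Euclidean norm of the difference of their
# FPC score vectors. The sine/cosine eigenfunctions are illustrative
# stand-ins for eigenfunctions estimated from data.
def fpca_semi_metric(x, y, phi, dt):
    scores_x = (x @ phi) * dt              # FPC scores of signal x
    scores_y = (y @ phi) * dt              # FPC scores of signal y
    return float(np.linalg.norm(scores_x - scores_y))

t = np.linspace(0, 1, 200)
dt = t[1] - t[0]
phi = np.sqrt(2) * np.column_stack([np.sin(2 * np.pi * t),
                                    np.cos(2 * np.pi * t)])
d_same = fpca_semi_metric(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t), phi, dt)
d_diff = fpca_semi_metric(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), phi, dt)
```

Identical signals yield distance zero, while signals dominated by different modes are pushed far apart.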

In order to optimize the hyperparameter for the multivariate Gaussian distribution (4), we maximize the marginal log-likelihood function of given . Let denote the observations of signals for units , that is where . Also, let denote the true underlying latent values corresponding to the FPC scores and let . Then the marginal likelihood is given as

where and . The second equality follows from the fact that the error is additive Gaussian noise. Thus, and the log-likelihood of is

where and . As a consequence, the optimized hyperparameters denoted by are found by maximizing the marginal log-likelihood. More formally, we have
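The maximization above is the standard GP empirical-Bayes step. A minimal sketch under illustrative assumptions (a toy one-dimensional semi-metric, a squared-exponential kernel and a log-parameterization that keeps the hyperparameters positive):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: empirical-Bayes fitting of GP hyperparameters for one set of
# FPC scores by maximizing the log marginal likelihood. The toy
# semi-metric distances, kernel form and log-parameterization are
# illustrative assumptions.
rng = np.random.default_rng(1)
n = 30
u = rng.uniform(0, 2, size=n)               # latent unit "positions"
D = np.abs(u[:, None] - u[None, :])         # toy semi-metric matrix
z = np.linalg.cholesky(np.exp(-D**2 / 0.5) + 1e-8 * np.eye(n)) @ rng.normal(size=n)

def neg_log_marginal(log_params):
    s2, th, noise = np.exp(log_params)      # log transform keeps them positive
    K = s2 * np.exp(-D**2 / th) + (noise + 1e-6) * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    return 0.5 * z @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)

x0 = np.log([1.0, 1.0, 0.1])
res = minimize(neg_log_marginal, x0=x0, method="L-BFGS-B")
sigma2_hat, theta_hat, noise_hat = np.exp(res.x)
```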

Following multivariate normal theory, the posterior predictive distribution of , given (4) and , is derived as



Here we note that for each , we can derive and using an independent GP, since the FPC scores associated with different orthonormal basis functions are uncorrelated. This facilitates computational scalability, as for different we can derive and in parallel. As shown in the computational complexity derivations in section 6, this aspect is especially important in a real-time environment where predictions need to be continuously updated.
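For a single FPC score, the predictive equations above are the standard GP regression formulas. A minimal sketch with illustrative kernel values; the noise term and all numbers are assumptions:

```python
import numpy as np

# Sketch: prior mean and variance of one FPC score of the in-service
# unit via standard GP regression on the historical units' scores.
# K_hist, k_star, the noise term and the score values are illustrative.
def gp_score_prior(K_hist, k_star, k_ss, z_hist, noise):
    A = K_hist + noise * np.eye(len(z_hist))
    w = np.linalg.solve(A, k_star)
    mean = w @ z_hist                                  # predictive mean
    var = k_ss - k_star @ np.linalg.solve(A, k_star)   # predictive variance
    return mean, var

K_hist = np.array([[1.0, 0.5], [0.5, 1.0]])  # covariance among 2 historical units
k_star = np.array([0.9, 0.4])                # similarity to the in-service unit
mean, var = gp_score_prior(K_hist, k_star, 1.0, np.array([2.0, -1.0]), 0.01)
```

Because the scores for different eigenfunctions are uncorrelated, one such GP can be run independently, and in parallel, for each score.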

Now combining equations (2) and (6), we obtain the predictive mean and variance of as follows


Note that , and are model parameters corresponding to the estimated FPCA model in (2), where denotes the estimated variance of . Recall that since the index is dropped for the target stream.

4.3 Empirical Bayesian Updating with On-line Data

In the previous section, we derived a prior for and using data observed from streams . Here, we develop an empirical Bayesian approach to update and given observations of the target stream () from the in-service unit . Specifically, given the prior distributions for each and given the observations at , the posterior is given in Proposition 2.

Proposition 2.

Suppose that , where the prior distribution of is , and that the FPC scores are pairwise independent. Then the posterior distribution of the FPC scores, such that , is given as




See Section A.2 in the supplementary document. ∎
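Since the prior on the FPC scores is Gaussian and the observation model is linear-Gaussian, the update in Proposition 2 is the standard conjugate posterior of Bayesian linear regression. A self-contained sketch in which the eigenfunction matrix, the prior and the noise level are illustrative assumptions:

```python
import numpy as np

# Sketch of the empirical Bayesian update: target-stream observations
# follow y = mu + Phi @ xi + eps with a Gaussian prior on the FPC
# scores xi, so the posterior is the conjugate Bayesian linear
# regression update. Phi, the prior and the noise level are illustrative.
def update_scores(y, mu, Phi, m0, S0, sigma2):
    S0_inv = np.linalg.inv(S0)
    S_post = np.linalg.inv(S0_inv + Phi.T @ Phi / sigma2)         # posterior covariance
    m_post = S_post @ (S0_inv @ m0 + Phi.T @ (y - mu) / sigma2)   # posterior mean
    return m_post, S_post

t_obs = np.linspace(0, 0.25, 20)             # partially observed period
Phi = np.sqrt(2) * np.column_stack([np.sin(2 * np.pi * t_obs),
                                    np.cos(2 * np.pi * t_obs)])
xi_true = np.array([1.5, -0.5])
y = Phi @ xi_true                            # noise-free toy observations
m_post, S_post = update_scores(y, 0.0, Phi, np.zeros(2), np.eye(2), 0.01)
```

With informative observations the posterior mean concentrates near the true scores and the posterior covariance shrinks relative to the prior.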

Based on the updated FPC scores for the in-service unit , the posterior predictive mean of for any future time point , where , is given as

Similarly, the posterior variance can be computed as

where indicates the th element of the covariance matrix .

Despite our focus on the target stream , we note that our framework can predict every individual stream of the in-service unit . This ability to provide individualized predictions is a key feature of the proposed methodology compared to the data fusion literature, which predicts a single aggregated signal. Further, one differentiating factor is that we allow irregularly sampled data from each stream, where the time points of each signal need not be identical or regularly spaced across streams. Indeed, such situations are quite common in practice because most multi-stream data is gathered from different types of sensors. Therefore, the proposed approach is applicable to a wide array of practical situations.

5 Numerical Case Study

5.1 General Settings

In this section, we discuss the general settings used to assess the proposed model, denoted as FPCA-GP. We evaluate the model through experiments with both synthetic and real-world data. We report the prediction accuracy at varying time points for the partially observed unit . Specifically, for the time domain , we assume that the on-line signals from the in-service unit are partially observed in the range of , referred to as -observation. We set , and for every case study. In the extrapolation interval , we use the mean absolute error (MAE) between the true signal value and its predicted value at evenly spaced test points (denoted as for ) as the criterion to evaluate prediction accuracy.
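The evaluation criterion above can be written directly; the test values below are illustrative:

```python
import numpy as np

# The evaluation criterion: mean absolute error between the true and
# predicted signal values at evenly spaced test points in the
# extrapolation interval. The values below are illustrative.
def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

err = mae([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```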


We report the distribution of the errors across repetitions using a group of boxplots representing the MAE for the testing unit at different -observation percentiles. Further, we benchmark our method against two reference methods for comparison: (i) the FPCA approach for single-stream settings, denoted as FPCA-B. In this method, we only consider the target stream Zhou et al. (2011); Kontar et al. (2018). Note that we incorporate our Bayesian updating procedure to update predictions as new data is observed. (ii) The Bayesian mixed-effects model with a general polynomial function whose degree is determined through the Akaike information criterion (AIC) Rizopoulos (2011); Son et al. (2013); Kontar et al. (2017). We denote this method as ME. The ME model intrinsically applies a Bayesian updating scheme as more data is obtained from the in-service unit. Detailed code for both reference methods is included in our supplementary materials.

5.2 Numerical Study with Synthetic Data

First, we present the numerical results of the proposed model on synthetic data. For this experiment, we assume that two streams of data are observed from two different sensors embedded in each unit. The target stream of interest is . To generate signals possessing heterogeneity, we suppose there are two separate environments, denoted environment I and II. We generate signals for each unit using different underlying functions depending on which environment the unit is in. This is illustrated in Figure 2. As shown in the figure, the underlying trend of the target stream () varies under different profiles of stream . To relate this setting to a real-world application, consider as the degradation level and as the temperature profile. Then, from Figure 2, units operating under different temperature profiles will exhibit different trends.

Figure 2: Illustration of generated true curves (90% heterogeneity case).

Figure 3: Illustration of the FPCA-GP and FPCA-B prediction (90% heterogeneity case). The first column illustrates the respective prior mean of FPCA-GP and FPCA-B before updating in the case of 25%-observation.

We generate a training set of units and one testing unit whose signals are partially observed. We also repeat the experiment times. Historical units operate in either environment I or II, whereas the in-service unit operates in environment II. The population of historical units is created according to three levels of heterogeneity: (i) 0% heterogeneity, where all units in the historical database operate under environment II (similar to the testing unit); (ii) 50% heterogeneity, where 25 units are assigned to each environment; (iii) 90% heterogeneity, where only 5 units are assigned to environment II. Conducting the experiments across a homogeneous setting and a heterogeneous setting, where the in-service unit belongs to a minority group with only a 10% share, allows us to investigate the robustness of our approach.

For units in environment I, the signals from the respective streams are generated according to and , where and , where denotes the uniform distribution. For units in environment II, we generate the signals as

and , where . Measurement error is assumed similar across both streams.

Figure 2 illustrates training signals in the case of 90% heterogeneity. It is crucial to note that at early stages (e.g., ), it is hard to distinguish between the two different trends in stream . We design the data this way on purpose, to check whether our model can leverage information from stream to uncover the underlying heterogeneity at early stages. This is in fact a common feature in many health-related applications, as many diseases remain dormant at early stages, and it is only through measuring other factors that we can predict their evolution early on.

Figure 4: Box plots of MAEs for comparative models for synthetic data.

The results are illustrated in Figures 3 and 4. Based on these figures we can draw some important insights. First, the FPCA-GP clearly outperforms the FPCA-B. This is especially evident at early stages () and when the data exhibits heterogeneity (90% and 50% heterogeneity). This confirms the ability of our model to borrow strength from information across different streams to discern the heterogeneity and enhance predictive accuracy at early stages. This result is very motivating, particularly since at the data from the in-service unit is sparse and all signals in stream have similar behaviour, which makes it hard to uncover future heterogeneity. It further implies that the FPC scores of the testing unit are appropriately estimated by the proposed approach, as shown in the first column of Figure 3. From the figure, we observe that the estimated prior mean from the FPCA-GP appropriately follows the signals in environment II, whereas the prior mean from the FPCA-B follows the signals in environment I, which is the majority. Second, as expected, prediction errors significantly decrease as the percentiles increase. Thus, our Bayesian updating framework is able to efficiently utilize newly collected data and provide more accurate predictions as increases. Third, the results show that ME performed the worst and its prediction error barely decreases at later stages. This result illustrates the vulnerability of parametric modeling and demonstrates the ability of our non-parametric model to avoid model misspecification. Fourth, the results confirm that even in the case where other streams have no effect on the target stream (0% heterogeneity), the FPCA-GP is competitive with the FPCA-B. This highlights the robustness of the FPCA-GP.

5.3 Numerical Study with Real-world Data

In this section, we discuss the numerical study using real-world data provided by the National Aeronautics and Space Administration (NASA). The dataset contains degradation signals collected from multiple sensors on an aircraft turbofan engine. It was generated from a simulation model, developed in Matlab Simulink, called the commercial modular aero-propulsion system simulation (C-MAPSS). This system simulates degradation signals from multiple sensors, installed in several components of an aero turbofan engine, under a variety of environmental conditions. The components include the Fan, LPC, HPC, and LPT, illustrated in Figure 5. Refer to Saxena & Simon (2008) for more details about the turbofan engine data. The dataset is available at Saxena & Goebel (2008). It is composed of 21 sensor streams from 100 training and 100 testing units. Following the analysis of Liu et al. (2013), we select the 11 most crucial streams. Some signals from these streams are shown in Figure 6. We provide the detailed list and description of sensors in the supplementary materials. In our analysis we truncate the time range and predict the testing signal over the time range .

Figure 5: A schematic diagram of turbofan engine Liu et al. (2012).

Figure 6: Selective examples for degradation signals from turbofan engine data.

Table 1 reports the MAE results for streams 4 and 15. Note that these two streams have been shown to have the largest impact on failure Fang et al. (2017b), and therefore, due to space limitations, we focus only on their predictive results. Results for the other signals are provided in the supplementary materials. Note that we include the standard deviation of the MAE across the testing units.

The results clearly show that our approach is superior to the benchmarks on the real-world data. For all provided cases, the mean MAE of the FPCA-GP is less than that of the FPCA-B. Once again this highlights the importance of leveraging information from all streams of the data. Another important insight from this study is that our model was able to outperform the ME even though the curves in Figure 6 seem to exhibit a clear parametric trend. This further highlights the robustness of our method and its ability to safeguard against parametric misspecification.

Model     Sensor 4               Sensor 15 ()
          25%    50%    75%      25%    50%    75%
F-GP      3.26   3.21   3.19     1.62   1.62   1.57
(std.)    (0.36) (0.42) (0.48)   (0.18) (0.19) (0.33)
F-B       3.49   3.37   3.31     1.76   1.75   1.63
(std.)    (0.45) (0.56) (0.54)   (0.28) (0.31) (0.36)
ME        3.51   3.38   3.34     1.79   1.77   1.65
(std.)    (0.45) (0.59) (0.55)   (0.28) (0.32) (0.37)

Table 1: Mean and standard deviation (std.) of the comparative models on the NASA dataset. The values for sensor 15 are scaled by . F-GP and F-B indicate FPCA-GP and FPCA-B, respectively. The best result in each case is boldfaced.

6 Discussion

In this study, we developed a non-parametric statistical model that extrapolates individual signals in a multi-stream data setting. Using both synthetic and real-world data, we demonstrated our model's ability to borrow strength across all streams of data, predict individual streams, account for heterogeneity and provide accurate real-time predictions, where an empirical Bayesian approach updates our predictor as new data is observed in real time. Since we work in the regime of streaming data, the frequency with which we receive data is very high. Consequently, our model needs to be efficient in terms of the time taken to make each update. Under the assumption that all signals from streams have observations, the complexity of multivariate FPCA for multi-stream data is Fang et al. (2017b). In our model, the computationally expensive steps are the FPCA for the target stream (Section 4.1) and the implementation of the GP for estimating the FPC scores (Section 4.2). Following Xiao et al. (2016), the complexity of the former is , while the complexity of a GP with an covariance matrix is Rasmussen & Williams (2005). Given that we implement independent GP models, the complexity of estimating the FPC scores is . Combining the above observations, we conclude that the complexity of our procedure is . Typically, we have that ; also, in real time is increasing. Thus, our model is clearly more efficient than the multivariate FPCA and applicable in practice in a real-time streaming environment.

7 Software and Data

Technical proofs, the dataset used, detailed code and additional numerical results are available in the supplementary materials.


  • Álvarez & Lawrence (2011) Álvarez, M. A. and Lawrence, N. D. Computationally efficient convolved multiple output gaussian processes. Journal of Machine Learning Research, 12(May):1459–1500, 2011.
  • Besse & Ramsay (1986) Besse, P. and Ramsay, J. Principal components analysis of sampled functions. Psychometrika, 51(2):285–311, 1986.
  • Boente & Fraiman (2000) Boente, G. and Fraiman, R. Kernel-based functional principal components. Statistics & Probability Letters, 48(4):335–345, 2000.
  • Caldara et al. (2014) Caldara, M., Colleoni, C., Guido, E., Re, V., Rosace, G., and Vitali, A. A wearable sweat pH and body temperature sensor platform for health, fitness, and wellness applications. In Sensors and Microsystems, volume 268, pp. 431–434. Springer, 2014.
  • Castro et al. (1986) Castro, P. E., Lawton, W. H., and Sylvestre, E. A. Principal modes of variation for processes with continuous sample curves. Technometrics, 28(4):329–337, 1986.
  • Di et al. (2009) Di, C.-Z., Crainiceanu, C. M., Caffo, B., and Punjabi, N. M. Multilevel functional principal component analysis. The Annals of Applied Statistics, 3(1):458–488, 2009.
  • Fan & Gijbels (1996) Fan, J. and Gijbels, I. Local Polynomial Modelling and Its Applications. Chapman and Hall, London, 1996.
  • Fang et al. (2015) Fang, X., Zhou, R., and Gebraeel, N. Z. An adaptive functional regression-based prognostic model for applications with missing data. Reliability Engineering & System Safety, 133:266–274, 2015.
  • Fang et al. (2017a) Fang, X., Gebraeel, N. Z., and Paynabar, K. Scalable prognostic models for large-scale condition monitoring applications. IISE Transactions, 49(7):698–710, 2017a.
  • Fang et al. (2017b) Fang, X., Paynabar, K., and Gebraeel, N. Z. Multistream sensor fusion-based prognostics model for systems with single failure modes. Reliability Engineering & System Safety, 159:322–331, 2017b.
  • Gebraeel & Pan (2008) Gebraeel, N. Z. and Pan, J. Prognostic degradation models for computing and updating residual life distributions in a time-varying environment. IEEE Transactions on Reliability, 57(4):539–550, 2008.
  • Gebraeel et al. (2005) Gebraeel, N. Z., Lawley, M. A., Li, R., and Ryan, J. K. Residual-life distributions from component degradation signals: A bayesian approach. IIE Transactions, 37(6):543–557, 2005.
  • Goldsmith et al. (2013) Goldsmith, J., Greven, S., and Crainiceanu, C. Corrected confidence bands for functional data using principal components. Biometrics, 69(1):41–51, 2013.
  • Goldsmith et al. (2018) Goldsmith, J., Scheipl, F., Huang, L., Wrobel, J., Gellar, J., Harezlak, J., McLean, M. W., Swihart, B., Xiao, L., Crainiceanu, C., and Reiss, P. T. refund: Regression with Functional Data, 2018. URL R package version 0.1-17.
  • Hsu et al. (2017) Hsu, Y.-L., Chou, P.-H., Chang, H.-C., Lin, S.-L., Yang, S.-C., Su, H.-Y., Chang, C.-C., Cheng, Y.-S., and Kuo, Y.-C. Design and implementation of a smart home system using multisensor data fusion technology. Sensors, 17(7):1631, 2017.
  • Huang et al. (2008) Huang, J., Shen, H., and Buja, A. Functional principal components analysis via penalized rank one approximation. Electronic Journal of Statistics, 2:678–695, 2008.
  • Kneip & Utikal (2001) Kneip, A. and Utikal, K. Inference for density families using functional principal component analysis. Journal of the American Statistical Association, 96(454):519–532, 2001.
  • Kontar et al. (2017) Kontar, R., Son, J., Zhou, S., Sankavaram, C., Zhang, Y., and Du, X. Remaining useful life prediction based on the mixed effects model with mixture prior distribution. IISE Transactions, 49(7):682–697, 2017.
  • Kontar et al. (2018) Kontar, R., Zhou, S., Sankavaram, C., Du, X., and Zhang, Y. Nonparametric modeling and prognosis of condition monitoring signals using multivariate gaussian convolution processes. Technometrics, 60(4):484–496, 2018.
  • Kontar et al. (2019) Kontar, R., Raskutti, G., and Zhou, S. Minimizing negative transfer of knowledge in multivariate gaussian processes: A scalable and regularized approach, 2019.
  • Liu et al. (2013) Liu, K., Gebraeel, N. Z., and Shi, J. A data-level fusion model for developing composite health indices for degradation modeling and prognostic analysis. IEEE Transactions on Automation Science and Engineering, 10(3):652–664, 2013.
  • Liu et al. (2012) Liu, Y., Frederick, D. K., DeCastro, J. A., Litt, J. S., and Chan, W. W. User’s guide for the commercial modular aero-propulsion system simulation (C-MAPSS). Technical report, National Aeronautics and Space Administration (NASA), Cleveland, OH, 2012.
  • Magno et al. (2016) Magno, M., Brunelli, D., Sigrist, L., Andri, R., Cavigelli, L., Gomez, A., and Benini, L. Infinitime: Multi-sensor wearable bracelet with human body harvesting. Sustainable Computing: Informatics and Systems, 11:38–49, 2016.
  • Meeker & Hong (2014) Meeker, W. Q. and Hong, Y. Reliability meets big data: opportunities and challenges. Quality Engineering, 26(1):102–116, 2014.
  • Peng & Paul (2009) Peng, J. and Paul, D. A geometric approach to maximum likelihood estimation of the functional principal components from sparse longitudinal data. Journal of Computational and Graphical Statistics, 18(4):995–1015, 2009.
  • Ramsay & Silverman (2005) Ramsay, J. O. and Silverman, B. W. Functional Data Analysis. Springer, NY, 2nd edition, 2005.
  • Rao (1958) Rao, C. R. Some statistical methods for comparison of growth curves. Biometrics, 14:1–17, 1958.
  • Rasmussen & Williams (2005) Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. The MIT Press, MA, 2005.
  • Rizopoulos (2011) Rizopoulos, D. Dynamic predictions and prospective accuracy in joint models for longitudinal and time-to-event data. Biometrics, 67(3):819–829, 2011.
  • Salamati et al. (2018) Salamati, S. M., Huang, C. S., Balagopal, B., and Chow, M.-Y. Experimental battery monitoring system design for electric vehicle applications. In 2018 IEEE International Conference on Industrial Electronics for Sustainable Energy Systems (IESES), pp. 38–43. IEEE, 2018.
  • Saul et al. (2016) Saul, A. D., Hensman, J., Vehtari, A., and Lawrence, N. D. Chained Gaussian processes. In Artificial Intelligence and Statistics, pp. 1431–1440, 2016.
  • Saxena & Goebel (2008) Saxena, A. and Goebel, K. PHM08 challenge data set, 2008.
  • Saxena & Simon (2008) Saxena, A. and Simon, D. Damage propagation modeling for aircraft engine run-to-failure simulation. In International Conference on Prognostics and Health Management, pp. 1–9, Denver, CO, 2008.
  • Si et al. (2012) Si, X.-S., Wang, W., Hu, C.-H., Zhou, D.-H., and Pecht, M. G. Remaining useful life estimation based on a nonlinear diffusion degradation process. IEEE Transactions on Reliability, 61(1):50–67, 2012.
  • Si et al. (2013) Si, X.-S., Wang, W., Hu, C.-H., Chen, M.-Y., and Pecht, M. G. A Wiener-process-based degradation model with a recursive filter algorithm for remaining useful life estimation. Mechanical Systems and Signal Processing, 35(1):219–237, 2013.
  • Silverman (1996) Silverman, B. Smoothed functional principal components analysis by choice of norm. The Annals of Statistics, 24(1):1–24, 1996.
  • Son et al. (2013) Son, J., Zhou, Q., Zhou, S., Mao, X., and Salman, M. Evaluation and comparison of mixed effects model based prognosis for hard failure. IEEE Transactions on Reliability, 62(2):379–394, 2013.
  • Song & Liu (2018) Song, C. and Liu, K. Statistical degradation modeling and prognostics of multiple sensor signals via data fusion: A composite health index approach. IISE Transactions, 50(10):853–867, 2018.
  • Wang et al. (2016) Wang, J.-L., Chiou, J. M., and Müller, H.-G. Functional data analysis. Annual Review of Statistics and Its Application, 3:257–295, 2016.
  • Xiao et al. (2016) Xiao, L., Zipunnikov, V., Ruppert, D., and Crainiceanu, C. Fast covariance estimation for high-dimensional functional data. Statistics and Computing, 26(1-2):409–421, 2016.
  • Xiao et al. (2018) Xiao, L., Cai, L., Checkley, W., and Crainiceanu, C. Fast covariance estimation for sparse functional data. Statistics and Computing, 28(3):511–522, 2018.
  • Yao et al. (2005) Yao, F., Müller, H.-G., and Wang, J.-L. Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association, 100(470):577–590, 2005.
  • Zhou et al. (2012) Zhou, R., Gebraeel, N., and Serban, N. Degradation modeling and monitoring of truncated degradation signals. IIE Transactions, 44(9):793–803, 2012.
  • Zhou et al. (2011) Zhou, R. R., Serban, N., and Gebraeel, N. Degradation modeling applied to residual lifetime prediction using functional data analysis. The Annals of Applied Statistics, 5(2B):1586–1610, 2011.