Health Assessment and Prognostics Based on Higher Order Hidden Semi-Markov Models

02/12/2020 · by Ying Liao, et al.

This paper presents a new and flexible prognostics framework based on a higher order hidden semi-Markov model (HOHSMM) for systems or components with unobservable health states and complex transition dynamics. The HOHSMM extends the basic hidden Markov model (HMM) by allowing the hidden state to depend on its more distant history and assuming generally distributed state duration. An effective Gibbs sampling algorithm is designed for statistical inference of an HOHSMM. The performance of the proposed HOHSMM sampler is evaluated by conducting a simulation experiment. We further design a decoding algorithm to estimate the hidden health states using the learned model. Remaining useful life (RUL) is predicted using a simulation approach given the decoded hidden states. The practical utility of the proposed prognostics framework is demonstrated by a case study on NASA turbofan engines. The results show that the HOHSMM-based prognostics framework provides good hidden health state assessment and RUL estimation for complex systems.


1 Introduction

In the past decade, prognostics has emerged as one of the key enablers for industrial systems to become more reliable, operationally available, and economically maintained [1]. Prognostics technologies aim to monitor the performance of a system (or a component), assess its health status, and predict the remaining useful life (RUL). Based on the predicted future performance, informed asset management strategies can be planned to reduce operational risks and costs. Prognostics has been used for various engineering systems, such as engines [2, 3, 4], batteries [5, 6], electronics [7, 8], and bearings [9, 10, 11, 12]. In this paper, we propose a new and flexible prognostics framework based on a higher order hidden semi-Markov model (HOHSMM) to assess the health state and estimate the RUL of a system using condition monitoring data. We are particularly motivated by applications where the health state is not directly observable and the transition dynamics of the hidden health state are complex (e.g., dependence on the distant past, non-geometric sojourn times). For example, turbofan engines typically have complex failure mechanisms and unobservable health conditions, which can only be inferred from sensor measurements, and their health state transitions are often history-dependent, violating the first order Markovian assumption. Novel techniques that can model and predict such complex transition behaviors are needed.

Prognostics approaches can generally be classified into two categories: model-based approaches [12, 13, 14, 15, 11, 16, 17, 18] and data-driven approaches [10, 19, 20, 21, 22, 23, 24, 25, 26, 27]. There are also hybrid models [28, 29, 5, 30] that attempt to combine the strengths of model-based and data-driven approaches by fusing the results from both. Model-based approaches require a good understanding of the system's physics-of-failure mechanisms. Most model-based approaches deal with crack, wear, and corrosion phenomena. For example, Li et al. [12] use the Paris–Erdogan model to predict a bearing's crack propagation and estimate the crack size. Similarly, the Paris–Erdogan equation is used to model fatigue crack growth, with a stochastic filtering technique applied for real-time RUL estimation [13]. Daigle and Goebel [14] develop a model-based prognostics framework for a centrifugal pump that includes models of the most significant damage progression processes, such as impeller wear and bearing wear. Damage processes are characterized by a set of parameterized functions (e.g., an erosive wear equation, a friction coefficient equation) that describe how damage variables evolve in time. More model-based prognostics methods can be found in [11, 16, 15, 17, 18]. Model-based approaches are built on knowledge of the processes and failure mechanisms occurring in the system of concern, and therefore allow for identification of the nature and extent of a fault. However, their limitations are that (1) the specific domain knowledge needed to develop physical models may not always be available, and (2) it is usually challenging to create dynamic models representing the multiple physical processes occurring in complex systems.

With the rapid development of sensor technologies, it has become much easier and less costly to obtain condition monitoring data, including operational and environmental loads as well as performance conditions of the monitored system (e.g., temperature, vibration, pressure, voltage, current) [31]. Advancements in modern sensor instruments have greatly facilitated data-driven prognostics. The monitoring data provide useful information for building a behavior model to characterize the evolution of system performance. Data-driven approaches use a variety of tools, most of which originate from the machine learning and statistics domains [32]. Among machine learning techniques, neural networks and neuro-fuzzy networks are the most commonly used. Huang et al. [10] propose a prognostics framework for ball bearings based on the self-organizing map (SOM) and neural networks: the degradation indicator is extracted using the SOM and the residual life is predicted with back-propagation neural networks. Gebraeel and Lawley [19] develop a degradation model based on a dynamic wavelet neural network to compute and continuously update residual life distributions of a partially degraded component using condition-based sensory signals. A neuro-fuzzy network is used to predict the future health state of a gear in [20]. Furthermore, Wang [21] develops an adaptive predictor based on a neuro-fuzzy approach to forecast the behavior of dynamic systems, where the forecasting is performed by fuzzy logic and the fuzzy system parameters are trained by neural networks. Various statistical tools have also been used for prognostics, including time series models, Kalman and particle filters, and Markov chains. Yan et al. [22] employ a logistic regression model to compute the failure probability given condition variables and then use an autoregressive moving average model to predict future performance and estimate the RUL. Swanson et al. [23] use a Kalman filter to track the time evolution of a crack in a tensioned steel band. More general than the Kalman filter, the particle filter can perform nonlinear projection of features, which is exploited for RUL estimation of a mechanical component subject to fatigue crack growth [24].

Most existing strategies estimate RUL by predicting the degradation level (e.g., crack size), which is either directly observable or can be quantified from sensor signals. In practice, the damage level of many engineering systems (e.g., turbofan engines) cannot be easily quantified due to complex failure mechanisms. Hidden Markov models (HMMs) are commonly used to infer the hidden health state directly from the observed data (e.g., sensor measurements) and predict the RUL. An HMM is a statistical model used to represent stochastic processes where the states are not directly observed but emit corresponding observations [33]. Bunks et al. [25] illustrate the applications of HMMs using the Westland helicopter gearbox data set and show that HMMs provide a natural framework for both health diagnostics and prognostics. Tobon-Mejia et al. [26] develop a mixture-of-Gaussians hidden Markov model (MoG-HMM) to predict the RUL of bearings. They use the wavelet packet decomposition technique to extract continuous features from the monitoring signals and then use the features as observations to train MoG-HMMs. The learned MoG-HMMs are then exploited to assess the current condition of a bearing and estimate its RUL. However, standard HMMs have two inherent limitations. One is the assumption of first order Markovian dynamics for the hidden state process. The other is that the state duration (i.e., sojourn time) implicitly follows a geometric distribution. The first order assumption can be restrictive, as the health state of a complex system usually evolves depending on its more distant history, not just the current state. Moreover, the duration in one state does not always follow a geometric distribution. To provide a more adequate representation of temporal structure, the hidden semi-Markov model (HSMM) extends the HMM by assuming that the state duration is generally distributed. Moghaddass and Zuo [27] propose an HSMM-based prognostic framework to assess the health conditions and estimate the RULs of gradually degraded systems. They demonstrate the proposed model with a case study on NASA turbofan engines, where principal component analysis (PCA) is used to extract features from multiple sensor measurements. However, the HSMM in [27] still makes the first order Markovian assumption.

In this paper, we propose a new prognostics framework based on HOHSMMs for systems with unobservable health states and complex transition dynamics. In the HOHSMM-based framework, the important features extracted from the monitoring data are used as observations, and the underlying health status of the concerned system is represented in the form of hidden states, which evolve depending not only on the current state but also on its more distant history. The sojourn time in each state is generally distributed and is assumed to follow an explicit distribution. We design an effective Gibbs sampling algorithm for model inference and conduct a simulation experiment to evaluate the performance of the proposed HOHSMM sampler. The learned HOHSMM is then exploited to assess the current health state of a functioning system in operation and predict its RUL. A decoding algorithm is developed for health state assessment using the learned model. The RUL is estimated using a simulation approach by generating paths from the current health state to the failure state. Furthermore, we demonstrate the practical utility of the proposed prognostics framework by conducting a case study on NASA turbofan engines. The main contributions of this paper are two-fold.

  1. Develop a new and advanced HOHSMM-based prognostics framework to assess hidden health state and predict the RUL for complex systems. The proposed HOHSMM includes the HMM and HSMM as two special cases.

  2. Design efficient algorithms for HOHSMM inference, hidden state decoding, and RUL prediction. A Gibbs sampling algorithm is developed for HOHSMM inference and the simulation experiment shows that the designed HOHSMM sampler is effective for learning model parameters from observations. Based on the learned model, a decoding algorithm is developed for hidden health state assessment and an RUL estimation algorithm is developed for prognostics. The case study on NASA turbofan engines shows that the HOHSMM-based prognostics framework provides satisfactory hidden health state assessment and RUL estimation for complex systems.

The remainder of this paper is organized as follows. Section 2 provides preliminaries on higher order hidden Markov model (HOHMM). In Section 3, we develop an HOHSMM and design an effective sampling algorithm for statistical inference. Section 4 presents the hidden state decoding procedure using the learned model. The RUL is predicted using a simulation approach in Section 5. We conduct a simulation experiment to evaluate the performance of the proposed HOHSMM sampler in Section 6. A case study on NASA turbofan engines is demonstrated in Section 7. Section 8 discusses the concluding remarks and future work.

2 Preliminaries on Higher Order Hidden Markov Model

This section provides a brief overview of the Higher Order Hidden Markov Model (HOHMM) by summarizing the main results of [34] and [35]. Based on the HOHMM in [34], we develop the HOHSMM.

An HOHMM consists of two processes: a hidden process $\{y_t\}$, which evolves according to a higher order Markov chain with discrete state space $\{1, \dots, C\}$, and a potentially multivariate observed process $\{X_t\}$ observed sequentially over a set of discrete time points $t = 1, \dots, T$. HOHMMs extend the idea of basic HMMs by allowing the hidden state sequence to depend on its more distant past. An HOHMM of maximal order $q$ makes the following set of conditional independence assumptions:

$p(y_t \mid y_{1:(t-1)}, X_{1:(t-1)}) = p(y_t \mid y_{t-1}, \dots, y_{t-q})$,   (1)
$p(X_t \mid y_{1:t}, X_{1:(t-1)}) = p(X_t \mid y_t)$.   (2)

Note that an HOHMM is said to be of maximal order $q$ if the distribution of $y_t$ depends only on a subset of $(y_{t-1}, \dots, y_{t-q})$. If the distribution of $y_t$ actually varies with the values at all $q$ previous time points, the HOHMM is considered to be of full order $q$.

While the HOHMM relaxes the restrictive first order assumption of the basic HMM, it also brings a significant dimensionality challenge. For a known state space of size $C$, the transition probabilities are now indexed by the $C^q$ different possible values of the lags and involve a total of $C^q(C-1)$ parameters, which increases exponentially with the order $q$. To address this issue, latent allocation variables $z_{t,j}$ for $j = 1, \dots, q$ and $t = 1, \dots, T$ are introduced to shrink the total number of parameters. The allocation variable $z_{t,j}$, taking values from $\{1, \dots, k_j\}$, is the respective latent class that a particular state of $y_{t-j}$ is allocated into. The total number of latent classes $k_j$ then determines the inclusion of the lag $j$. If $k_j = 1$, it means that $y_{t-j}$ is not an important lag for $y_t$. If $k_j = C$ for all $j$, the HOHMM is of full order $q$. Based on the allocation variables $z_{t,j}$, the hidden states are conditionally independent as shown in Figure 1.

We denote the probability that state $c$ of lag $j$ is allocated into latent class $k$ by $\pi_j^{(c)}(k)$, i.e., $\pi_j^{(c)}(k) = P(z_{t,j} = k \mid y_{t-j} = c)$. Given the combination of allocated latent classes $(z_{t,1}, \dots, z_{t,q}) = (h_1, \dots, h_q)$, the state transition probability is denoted by $\lambda_{h_1, \dots, h_q}(c)$ for $c = 1, \dots, C$,

$\lambda_{h_1, \dots, h_q}(c) = P(y_t = c \mid z_{t,1} = h_1, \dots, z_{t,q} = h_q).$   (3)

Then the transition probability can be structured through the following hierarchical formulation

$(y_t \mid z_{t,1} = h_1, \dots, z_{t,q} = h_q) \sim \mathrm{Mult}(\{1, \dots, C\}, \lambda_{h_1, \dots, h_q}),$   (4)
$(z_{t,j} \mid y_{t-j} = c) \sim \mathrm{Mult}(\{1, \dots, k_j\}, \pi_j^{(c)}).$   (5)

The parameters $\lambda$ and $\pi$ are all non-negative and satisfy the constraints:

  1. $\sum_{c=1}^{C} \lambda_{h_1, \dots, h_q}(c) = 1$, for each combination $(h_1, \dots, h_q)$;

  2. $\sum_{k=1}^{k_j} \pi_j^{(c)}(k) = 1$, for each pair $(j, c)$.

(a) without allocation variables
(b) with allocation variables
Figure 1: Dependence structure of a second order hidden Markov model

In such a factorization, the number of parameters is reduced to $(C-1)\prod_{j=1}^{q} k_j + C \sum_{j=1}^{q} (k_j - 1)$, which is much smaller than $C^q(C-1)$ if $k_j \ll C$.

Marginalizing out the latent class indicators $z_{t,j}$, the transition probability has an equivalent form as

$P(y_t = c \mid y_{t-1} = c_1, \dots, y_{t-q} = c_q) = \sum_{h_1=1}^{k_1} \cdots \sum_{h_q=1}^{k_q} \lambda_{h_1, \dots, h_q}(c) \prod_{j=1}^{q} \pi_j^{(c_j)}(h_j),$   (6)

where $c, c_1, \dots, c_q \in \{1, \dots, C\}$ for all $t$. Thus, the $m$-step transition probability can be obtained by chaining Equation (6) and summing over the intermediate states,

$P(y_{t+m} = c \mid y_t, \dots, y_{t-q+1}) = \sum_{y_{t+1}} \cdots \sum_{y_{t+m-1}} \prod_{r=1}^{m} P(y_{t+r} \mid y_{t+r-1}, \dots, y_{t+r-q}).$   (7)
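To make the factorization concrete, the following sketch (a minimal illustration with toy dimensions; the names and values are chosen for this example, not taken from [34]) evaluates the marginalized transition probability of Equation (6) for a second order model and checks that it defines a valid distribution over the next state.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 4          # number of hidden states (toy value)
ks = [2, 2]    # latent classes k_j per lag; order q = len(ks) = 2

# pi[j-1][c, k] = P(z_{t,j} = k | y_{t-j} = c): soft allocation probabilities.
pi = [rng.dirichlet(np.ones(k), size=C) for k in ks]

# lam[h1, h2, c] = P(y_t = c | z_{t,1} = h1, z_{t,2} = h2): transition kernels.
lam = rng.dirichlet(np.ones(C), size=(ks[0], ks[1]))

def transition_prob(c, lag1, lag2):
    """Equation (6): P(y_t = c | y_{t-1} = lag1, y_{t-2} = lag2)."""
    return sum(lam[h1, h2, c] * pi[0][lag1, h1] * pi[1][lag2, h2]
               for h1 in range(ks[0]) for h2 in range(ks[1]))

probs = [transition_prob(c, 1, 3) for c in range(C)]
print(probs, sum(probs))   # a valid distribution: sums to one
```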

Efficient two-stage Gibbs sampling algorithms have been designed in [34] for HOHMM inference. First, a hierarchical Dirichlet prior is assigned on the $\lambda$'s,

$(\lambda_{h_1, \dots, h_q} \mid \alpha, \lambda_0) \sim \mathrm{Dir}(\alpha \lambda_0(1), \dots, \alpha \lambda_0(C)),$   (8)
$\lambda_0 \sim \mathrm{Dir}(\gamma/C, \dots, \gamma/C).$   (9)

The dimension of $\pi_j^{(c)}$ varies with $k_j$. Independent priors on the $\pi_j^{(c)}$'s are assigned as

$(\pi_j^{(c)} \mid k_j) \sim \mathrm{Dir}(\gamma_j, \dots, \gamma_j).$   (10)

Finally, the following independent priors are assigned on the $k_j$'s

$P(k_j = k) \propto \exp(-\varphi j k), \quad k = 1, \dots, C,$   (11)

where $\varphi > 0$. The prior assigns increasing probabilities to smaller values of $k_j$ as the lag $j$ becomes more distant, reflecting the natural belief that increasing lags have diminishing influence on the distribution of $y_t$. The generic form of the emission distribution is expressed as $f(X_t \mid \theta_{y_t})$, where $\theta$ represents parameters indexed by the hidden states.
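As a quick illustration of this lag-penalizing behavior (assuming the exponential form reconstructed in Equation (11); $\varphi$ and $C$ below are arbitrary example values), the sketch normalizes $\exp(-\varphi j k)$ over $k$ for each lag and prints the prior probability of excluding the lag ($k_j = 1$), which grows as $j$ increases.

```python
import numpy as np

C, q, phi = 4, 3, 1.0   # example values; phi > 0 controls the decay

for j in range(1, q + 1):
    w = np.exp(-phi * j * np.arange(1, C + 1))   # unnormalized P(k_j = k)
    p = w / w.sum()
    print(f"lag {j}: P(k_j = 1) = {p[0]:.3f}")   # approaches 1 as j grows
```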

The joint distribution of $y_{1:T}$, $z_{1:T,1:q}$, and $X_{1:T}$ admits the following factorization

$p(y_{1:T}, z_{1:T,1:q}, X_{1:T}) = \prod_{t=1}^{T} \lambda_{z_{t,1}, \dots, z_{t,q}}(y_t) \Big\{\prod_{j=1}^{q} \pi_j^{(y_{t-j})}(z_{t,j})\Big\} f(X_t \mid \theta_{y_t}),$   (12)

where $y_0, y_{-1}, \dots, y_{1-q}$ denote fixed initial values, representing the history states of $y_1$. The conditional independence relationships encoded in the factorization are used in deriving MCMC algorithms to draw samples from the posteriors. Detailed sampling algorithms can be found in [34].

3 Higher Order Hidden Semi-Markov Model

In this paper, we extend an HOHMM to a higher order hidden semi-Markov model (HOHSMM), where the hidden state sequence is governed by a semi-Markov chain. The HOHSMM is more flexible since it incorporates additional temporal structure by allowing the state duration to be generally distributed, rather than implicitly following a geometric distribution as in an HOHMM.

3.1 Model development

We first give a brief description of the HSMM and then develop the HOHSMM. There exist several specific HSMM variants that make different assumptions regarding the dependence between state transition and duration, for example, residential time HMMs and explicit duration HMMs [36]. A residential time HMM assumes that the current state and its duration are determined by the previous state, independent of the previous state's duration. An explicit duration HMM assumes that a transition to the current state is independent of the duration of the previous state and that the duration is conditional only on the current state. We consider the explicit duration setting in our HOHSMM. Both HSMMs and HOHSMMs with explicit duration exclude state self-transitions because the duration distribution cannot fully capture a state's possible duration time if self-transitions are allowed.

An explicit duration HMM assumes that the underlying stochastic process is governed by a semi-Markov chain [36]. Each state has a variable duration that follows an explicit state-specific distribution and a number of corresponding observations are produced while in the state (illustrated in Figure 2).

Figure 2: An explicit duration HMM

The observation sequence $X_{1:T}$ is produced segmentally from the emission distribution indexed by the hidden super-state sequence $(s_1, \dots, s_N)$, where $N$ is the number of segments. Observations are assumed to be collected discretely by a unit time, and therefore the number of observations produced in each super-state represents the state duration. For the $n$-th segment, the state duration is denoted by $d_n$ and $X^{(n)}$ denotes the produced observations, where $n = 1, \dots, N$. In the last segment, the observations may be truncated, and we have $\sum_{n=1}^{N-1} d_n < T \le \sum_{n=1}^{N} d_n$.

In the proposed HOHSMM with the explicit duration setting, the hidden super-state sequence is assumed to be governed by a higher order Markov chain and the state duration follows an explicit distribution (e.g., a Poisson distribution), denoted by $g(d \mid \omega_c)$, with the parameters $\omega_c$ indexed by the specific hidden super-state $c$. An explicit-duration HOHSMM of maximal order $q$ is constructed by equipping the higher order transition structure of Section 2 with this duration model and excluding self-transitions, as formalized below.

Figure 3 illustrates a second order HSMM. In this example, the distribution of the hidden super-state $s_n$ depends on its previous two super-states $s_{n-1}$ and $s_{n-2}$, and the duration time in each super-state is generally distributed, following an explicit state-specific distribution.

Figure 3: A second order HSMM
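The generative process of Figure 3 can be sketched as below (illustrative only: the transition tensor, Poisson durations, and Gaussian emissions are example choices, not the paper's fitted model). Each super-state is drawn given the previous two super-states with self-transitions excluded, and each visit emits a duration-distributed number of observations.

```python
import numpy as np

rng = np.random.default_rng(1)

C = 3                                    # number of hidden super-states
# trans[c1, c2] = distribution of s_n given s_{n-1} = c1, s_{n-2} = c2,
# with the self-transition s_n = c1 excluded and the row renormalized.
trans = np.zeros((C, C, C))
for c1 in range(C):
    for c2 in range(C):
        p = rng.dirichlet(np.ones(C))
        p[c1] = 0.0
        trans[c1, c2] = p / p.sum()

dur_mean = np.array([4.0, 8.0, 3.0])     # Poisson duration mean per super-state
emit_mean = np.array([0.0, 5.0, 10.0])   # Gaussian emission mean per super-state

def simulate(n_segments):
    prev2, prev1 = 0, 1                  # arbitrary initial history
    states, durations, obs = [], [], []
    for _ in range(n_segments):
        s = rng.choice(C, p=trans[prev1, prev2])   # second order transition
        d = 1 + rng.poisson(dur_mean[s])           # explicit duration, at least 1
        obs.extend(rng.normal(emit_mean[s], 1.0, size=d))
        states.append(int(s))
        durations.append(int(d))
        prev2, prev1 = prev1, s
    return states, durations, np.array(obs)

states, durations, obs = simulate(6)
print(states, durations, len(obs))       # no two consecutive super-states are equal
```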

The hierarchical Dirichlet prior assigned for the transition distribution parameters $\lambda$ in Equations (8) and (9) does not exclude self-transitions. In order to exclude self-transitions in the super-state sequence for an HOHSMM, a modified hierarchical Dirichlet prior is assigned as [37]

$(\lambda_{h_1, \dots, h_q} \mid \alpha, \lambda_0) \sim \mathrm{Dir}(\alpha \lambda_0(1), \dots, \alpha \lambda_0(C)),$   (13)
$\lambda_0 \sim \mathrm{Dir}(\gamma/C, \dots, \gamma/C),$   (14)
$\bar{\lambda}_{h_1, \dots, h_q}(c) = \dfrac{\lambda_{h_1, \dots, h_q}(c)\,\mathbb{1}(c \neq h_1)}{1 - \lambda_{h_1, \dots, h_q}(h_1)}.$   (15)

Equation (15) ensures that the self-transition probabilities are zeros. Note that $h_1$ is the latent class the hidden super-state $s_{n-1}$ (the immediate precedent super-state of $s_n$) is allocated into, and $c$ is the state of $s_n$. Therefore, to have a valid comparison between $c$ and $h_1$ and exclude self-transitions, each possible state of $s_{n-1}$ must be allocated to a distinct latent class. In other words, each state of $s_{n-1}$ has its own latent class. To do so, we let $k_1 = C$ and $\pi_1^{(c)}(k) = \mathbb{1}(k = c)$, where $c, k = 1, \dots, C$. For the remaining lags, the independent priors on the allocation distributions are assigned as

$(\pi_j^{(c)} \mid k_j) \sim \mathrm{Dir}(\gamma_j, \dots, \gamma_j), \quad j = 2, \dots, q.$   (16)

The transition probability is modeled as

$P(s_n = c \mid s_{n-1} = c_1, \dots, s_{n-q} = c_q) = \sum_{h_2=1}^{k_2} \cdots \sum_{h_q=1}^{k_q} \bar{\lambda}_{c_1, h_2, \dots, h_q}(c) \prod_{j=2}^{q} \pi_j^{(c_j)}(h_j).$   (17)

Similarly, by introducing latent allocation variables $z_{n,j}$ for $j = 1, \dots, q$ and $n = 1, \dots, N$ with $z_{n,1} = s_{n-1}$, the hidden super-states are conditionally independent and the model can be represented through the following hierarchical formulation

$(s_n \mid z_{n,1} = h_1, \dots, z_{n,q} = h_q) \sim \mathrm{Mult}(\{1, \dots, C\}, \bar{\lambda}_{h_1, \dots, h_q}),$   (18)
$(z_{n,j} \mid s_{n-j} = c) \sim \mathrm{Mult}(\{1, \dots, k_j\}, \pi_j^{(c)}).$   (19)
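A minimal sketch of this construction (toy dimensions, second order): Equation (15) zeroes the kernel entry matching the lag-1 latent class, which by the identity allocation for lag 1 equals the previous super-state, and renormalizes; Equation (17) then marginalizes over the remaining lag's latent classes.

```python
import numpy as np

rng = np.random.default_rng(2)

C, k2 = 3, 2                          # states; latent classes for lag 2 (lag 1 uses k1 = C)
lam = rng.dirichlet(np.ones(C), size=(C, k2))   # lam[h1, h2] is a distribution over s_n

# Equation (15): zero out the self-transition entry (c = h1) and renormalize.
lam_bar = lam.copy()
for h1 in range(C):
    lam_bar[h1, :, h1] = 0.0
    lam_bar[h1] /= lam_bar[h1].sum(axis=-1, keepdims=True)

# pi2[c, h2] = P(z_{n,2} = h2 | s_{n-2} = c): lag-2 allocation probabilities.
pi2 = rng.dirichlet(np.ones(k2), size=C)

def transition(c, c1, c2):
    """Equation (17): P(s_n = c | s_{n-1} = c1, s_{n-2} = c2)."""
    return sum(lam_bar[c1, h2, c] * pi2[c2, h2] for h2 in range(k2))

probs = [transition(c, 0, 1) for c in range(C)]
print(probs)   # the c = 0 entry is exactly zero; the rest sum to one
```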

3.2 Model inference

We use the MCMC sampling method for explicit-duration HOHSMM inference. The sampler is designed based on the two-stage Gibbs sampling algorithm for the HOHMM [34]. There are additional challenges due to the explicit temporal structure, the exclusion of self-transitions, and the multiple observed trajectories in the training data.

The first challenge is brought by incorporating the explicit temporal structure (i.e., the duration distribution), which requires additional sampling to determine the number of segments (i.e., the number of hidden super-states) and the duration time in each state. Existing sampling inference methods for HSMMs often use a message-backwards, sample-forwards technique to address this problem [37]. However, this is not applicable for an HOHSMM since the backwards messages are extremely difficult to define and compute when higher order transitions are present. Reversible jump MCMC provides a statistical inference strategy for Bayesian model determination where the dimensionality of the parameter vector is not fixed (e.g., the multiple change-point problem for Poisson processes) [38]. However, it cannot be used to sample change-points of a sequence in an HOHSMM since there is no appropriate mechanism to update the hidden super-states affected by the moves of change-points (e.g., birth of a change-point, death of a change-point). The second challenge is brought by excluding self-transitions. A Dirichlet distribution is assigned as the conjugate prior for the transition probability parameters. However, the conjugacy no longer holds after setting the self-transition probabilities to zeros, so a mechanism to recover conjugacy for updating the transition probability parameters is needed. In addition, in many real-world applications, several identical units are typically monitored at the same time to collect sensor data. How to leverage all the information provided by multiple observed trajectories (i.e., observation sequences), instead of using just one sequence, is the third challenge. We address these difficulties in the following two sections.

3.2.1 Update segmentation

We denote the $M$ run-to-failure observation sequences by $X^{(1)}, \dots, X^{(M)}$ and the $m$-th observation sequence by $X^{(m)} = (X^{(m)}_1, \dots, X^{(m)}_{T_m})$, where $T_m$ is the observed length and $m = 1, \dots, M$. These sequences are assumed to be independent. To address the first challenge, we introduce a jump size threshold ($\tau$) to identify change-points. For each observation sequence, if the Euclidean distance between a point and its immediate previous point is greater than $\tau$, this point is identified as a change-point. The prior of $\tau$ is assigned to be a uniform distribution with support $[\tau_L, \tau_U]$, where $\tau_L$ and $\tau_U$ are lower and upper percentile values, respectively, obtained from the distances between adjacent observed data points in all observation sequences. We then update the segmentation of the observation sequences and initialize the hidden super-state sequences iteratively by sampling the jump size threshold $\tau$.

(a) Identify change-points given jump size threshold $\tau$. A red cross indicates a change-point, detected when the difference (absolute value) between an observation and its previous observation is larger than $\tau$. Nine change-points are identified and the observation sequence is segmented accordingly, as shown by the vertical dashed black lines. Clustering labels are derived by clustering the mean values of the observations in these ten segments.
(b) Segmentation and hidden super-state initialization after the clustering and merging processes. If two adjacent segments have the same clustering label, the two segments are merged and the clustering label is used as the initialized hidden super-state.
Figure 4: Illustration of updating segmentation and initializing hidden super-states given jump size threshold $\tau$ (using a one-dimensional observation sequence as an example)

In each iteration of the HOHSMM sampler, we propose a new threshold $\tau'$ from $\mathrm{Uniform}(\tau_L, \tau_U)$. For each observation sequence, we mark change-points based on the computed distances (illustrated in Figure 4(a)) and the sequence is segmented accordingly. After the initial segmentation, we compute the center of the observed data points for each segment and label the segments by clustering the centers. The hidden super-states are initialized using the clustering labels. To exclude self-transitions, if two adjacent segments have the same clustering label, we merge these two segments. For example, the first two segments in Figure 4(a) share the same label 1, and these two segments are merged into one. After the clustering and merging processes, we obtain the final segmentation and the initialized hidden super-states of an observation sequence for a given jump size threshold (illustrated in Figure 4(b)). Based on the segmentation results, we also obtain the number of segments and the state durations for each observation sequence, denoted as $N_m$ and $d^{(m)} = (d^{(m)}_1, \dots, d^{(m)}_{N_m})$, respectively.
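The segmentation step can be prototyped as follows. This is a sketch under simplifying assumptions: observations are one-dimensional, and k-means is used to cluster the segment centers (the paper says only that the centers are clustered, without prescribing a method); `segment_and_initialize` is a hypothetical helper name.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_and_initialize(x, tau, n_states):
    """Segment a 1-D sequence at jumps larger than tau, label segments by
    clustering segment means, and merge adjacent segments sharing a label."""
    cps = np.flatnonzero(np.abs(np.diff(x)) > tau) + 1   # change-point positions
    bounds = np.concatenate(([0], cps, [len(x)]))
    segs = [x[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
    centers = np.array([seg.mean() for seg in segs]).reshape(-1, 1)
    labels = KMeans(n_clusters=n_states, n_init=10).fit_predict(centers)
    states, durations = [], []
    for lab, seg in zip(labels, segs):
        if states and states[-1] == lab:     # merge to exclude self-transitions
            durations[-1] += len(seg)
        else:
            states.append(int(lab))
            durations.append(len(seg))
    return states, durations

x = np.concatenate([np.random.normal(0, 0.1, 30),
                    np.random.normal(3, 0.1, 40),
                    np.random.normal(6, 0.1, 20)])
print(segment_and_initialize(x, tau=1.0, n_states=3))
```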

The hidden super-state sequences $s$, latent allocation variables $z$, and the other parameters $k$, $\lambda$, $\bar{\lambda}$, $\lambda_0$, $\pi$, $\theta$ are updated using the two-stage Gibbs sampling algorithm for the HOHSMM. The first stage is to identify the important lags by sampling $k$ from the posterior. Given the determined $k$, we collect the samples of the other parameters in the second stage. The obtained samples are used to compute the acceptance probability for updating the jump size threshold $\tau$. In general MCMC sampling, the acceptance probability can be computed as

$a(\tau', \tau) = \min\left\{1,\ \frac{p(X \mid \tau')}{p(X \mid \tau)} \times \frac{p(\tau')}{p(\tau)} \times \frac{q(\tau \mid \tau')}{q(\tau' \mid \tau)}\right\}.$   (20)

Since the prior of $\tau$ is a uniform distribution and $\tau'$ is also proposed from the same uniform distribution, the prior ratio and the proposal ratio are both equal to 1. The posterior mean of the likelihood can be approximated using the obtained samples [39], which is provided as

$p(X \mid \tau) \approx \frac{1}{R} \sum_{r=1}^{R} p(X \mid \tau, \psi^{(r)}),$   (21)

where $\{\psi^{(1)}, \dots, \psi^{(R)}\}$ is a set of posterior samples of the model parameters generated from their posterior distributions.

Given all the collected samples of $\tau$, the most likely jump size threshold $\hat{\tau}$ is determined by computing the average value of the samples after burn-in. We then use $\hat{\tau}$ to update the segmentation and repeat the two-stage Gibbs sampling process to obtain the final segmentation and samples of all parameters. Given an explicit distribution for each super-state's duration, the MLEs for its parameters can be easily obtained using the final segmentation result.
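The resulting Metropolis step for $\tau$ is sketched below. Because the proposal equals the uniform prior, the prior and proposal ratios cancel and acceptance reduces to a likelihood ratio; `estimate_loglik` is a placeholder for the sample-based approximation of Equation (21), which the surrounding Gibbs sampler must supply.

```python
import numpy as np

rng = np.random.default_rng(3)

def mh_update_tau(tau, loglik, estimate_loglik, lo, hi):
    """One Metropolis step for the jump size threshold tau; estimate_loglik
    stands in for the sample-based approximation of Equation (21)."""
    tau_new = rng.uniform(lo, hi)            # proposal = prior = Uniform(lo, hi)
    loglik_new = estimate_loglik(tau_new)
    # Prior and proposal ratios cancel, so acceptance is the likelihood ratio.
    if np.log(rng.uniform()) < min(0.0, loglik_new - loglik):
        return tau_new, loglik_new           # accept
    return tau, loglik                       # reject

fake_loglik = lambda t: -(t - 1.0) ** 2      # toy surrogate peaked near tau = 1
tau, ll = 0.5, fake_loglik(0.5)
for _ in range(200):
    tau, ll = mh_update_tau(tau, ll, fake_loglik, lo=0.0, hi=2.0)
print(round(tau, 2))                         # hovers around 1
```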

3.2.2 The two-stage Gibbs sampling algorithm for HOHSMMs

Given the segmentation, we modify the two-stage Gibbs sampling algorithm in [34] to determine the values of $k$, $z$, $\lambda$, $\bar{\lambda}$, $\lambda_0$, $\pi$, $\theta$, and $s$ in the HOHSMM. The first stage determines the values of $k$, the important lag indicators. Given the determined values of $k$, we update the other model parameters $\lambda$, $\bar{\lambda}$, $\lambda_0$, $\pi$, $\theta$, the latent allocation variables $z$, and the hidden super-state sequence $s$ in the second stage. The joint distribution of $s^{(m)}$, $z^{(m)}$, $d^{(m)}$, and $X^{(m)}$ for the $m$-th sequence can be presented as

$p(s^{(m)}, z^{(m)}, d^{(m)}, X^{(m)}) = \prod_{n=1}^{N_m} \bar{\lambda}_{z_{n,1}, \dots, z_{n,q}}(s_n) \Big\{\prod_{j=1}^{q} \pi_j^{(s_{n-j})}(z_{n,j})\Big\} g(d_n \mid \omega_{s_n}) \prod_{t \in \mathcal{T}_n} f(X^{(m)}_t \mid \theta_{s_n}),$   (22)

where $\mathcal{T}_n$ denotes the time indices of the $n$-th segment and $s_0, s_{-1}, \dots, s_{1-q}$ are the history states of $s_1$. To address the third challenge of multiple trajectories, we use the joint distribution of all observation sequences. Based on the assumption that all sequences are independent, the joint distribution can be obtained as follows

$p\big(\{s^{(m)}, z^{(m)}, d^{(m)}, X^{(m)}\}_{m=1}^{M}\big) = \prod_{m=1}^{M} p(s^{(m)}, z^{(m)}, d^{(m)}, X^{(m)}).$   (23)

The conditional independence relationships encoded in the joint distribution are used in deriving the two-stage Gibbs sampling algorithm for HOHSMMs.

Specifically, in the first stage, we identify important lags and the corresponding numbers of latent classes by sampling $k$. In this stage, we use an approximated model which forces hard allocation of the $z$'s instead of soft allocation. Hard allocation means that the state space is partitioned into $k_j$ clusters for the $j$-th lag, and each cluster corresponds to its own latent class; in other words, each state is allocated into one class with probability 1. For example, if the states $\{1, \dots, 6\}$ are partitioned into $k_j = 2$ clusters $\{1, 2, 3\}$ and $\{4, 5, 6\}$ for the $j$-th lag, hard allocation means that states 1, 2, and 3 will be allocated to the first latent class and states 4, 5, and 6 will be allocated to the second one with probability 1. In soft allocation, one state can be allocated into several possible classes with specific probabilities. The mixture probabilities in the approximated model are denoted by $\tilde{\pi}$, with $\tilde{\pi}_j^{(c)}(k) \in \{0, 1\}$ indicating hard clustering, while $\pi_j^{(c)}(k) \in [0, 1]$ indicates soft allocation.
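Written out for the example above, hard allocation replaces each row of the allocation matrix with a one-hot vector (a toy sketch; the partition is the one used in the text):

```python
import numpy as np

C, k_j = 6, 2
cluster_of = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}  # states 1-3 -> class 1, 4-6 -> class 2

pi_hard = np.zeros((C, k_j))        # rows: states, columns: latent classes
for state, cls in cluster_of.items():
    pi_hard[state - 1, cls] = 1.0   # each state belongs to one class with probability 1

print(pi_hard)                      # every row is one-hot: hard clustering
```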

Based on the approximated model, samples of the parameters are drawn from their respective conditional posteriors in a pre-specified order. We first examine the posteriors of the transition distributions $\lambda$ and $\bar{\lambda}$. There exist computational machineries for sampling from the posteriors in hierarchical Dirichlet process (HDP) models [40]. In the HOHMM, the Dirichlet distribution is the conjugate prior of the transition probability parameters, so it is straightforward to update the $\lambda$'s. However, in our HOHSMM, the method used to exclude self-transitions makes the model not fully conjugate. Specifically, let $n_{\mathbf{h},c}$ count the number of transitions from the latent allocation class combination $\mathbf{h} = (h_1, \dots, h_q)$ to state $c$ among all observation sequences, where $c = 1, \dots, C$ and $h_j = 1, \dots, k_j$ for $j = 1, \dots, q$. Because there are no self-transitions, we have $n_{\mathbf{h},c} = 0$ whenever $c = h_1$. We consider the posterior distribution of $\lambda_{\mathbf{h}}$,

$p(\lambda_{\mathbf{h}} \mid s, z) \propto \mathrm{Dir}(\lambda_{\mathbf{h}}; \alpha\lambda_0) \prod_{c \neq h_1} \left[\frac{\lambda_{\mathbf{h}}(c)}{1 - \lambda_{\mathbf{h}}(h_1)}\right]^{n_{\mathbf{h},c}}.$   (24)

Because of the extra terms $(1 - \lambda_{\mathbf{h}}(h_1))^{-n_{\mathbf{h},\cdot}}$ in the likelihood induced by excluding self-transitions, we cannot reduce this expression to a Dirichlet form over the components of $\lambda_{\mathbf{h}}$. Therefore, the model is not fully conjugate and new posteriors need to be derived. To recover conjugacy, we introduce auxiliary variables $\rho_{\mathbf{h},u}$, $u = 1, \dots, n_{\mathbf{h},\cdot}$, where $n_{\mathbf{h},\cdot} = \sum_{c} n_{\mathbf{h},c}$. Each $\rho_{\mathbf{h},u}$ is independently drawn from a geometric distribution with success parameter $1 - \lambda_{\mathbf{h}}(h_1)$ [37]. We adjust the sampling algorithm by updating the transition parameters from the posterior distribution

$(\lambda_{\mathbf{h}} \mid s, z, \rho) \sim \mathrm{Dir}\big(\alpha\lambda_0(1) + \bar{n}_{\mathbf{h},1}, \dots, \alpha\lambda_0(C) + \bar{n}_{\mathbf{h},C}\big),$

where $\bar{n}_{\mathbf{h},c} = n_{\mathbf{h},c}$ for $c \neq h_1$ and $\bar{n}_{\mathbf{h},h_1} = \sum_{u} \rho_{\mathbf{h},u}$ counts the augmented self-transitions. Then we compute $\bar{\lambda}$ from Equation (15) and update $\lambda_0$.

Since the observation sequences are independent, $s$ and $z$ are updated sequence by sequence. For each sequence, $s^{(m)}$ and $z^{(m)}$ are sampled by applying a Metropolis–Hastings step and using simulated annealing to facilitate convergence. The full conditionals of $\theta$ depend on the choice of the emission distribution. Finally, a stochastic search variable selection (SSVS) method [41] is used to sample the $k_j$'s from their posteriors, and the $\tilde{\pi}$'s are updated by the latent allocation cluster mapping. In the first stage, the important lags are determined and the number of latent classes for each important lag is derived based on the samples of $k$.
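The augmentation step can be sketched for a single latent class combination as follows (adapted from the HDP-HSMM data-augmentation idea in [37]; the variable names and toy counts are illustrative, not the paper's exact notation):

```python
import numpy as np

rng = np.random.default_rng(4)

C = 4
alpha_lam0 = np.ones(C)              # Dirichlet base measure alpha * lambda_0
self_idx = 0                         # index forbidden by the no-self-transition rule
n = np.array([0, 5, 2, 3])           # observed transition counts (n[self_idx] = 0)

lam = rng.dirichlet(alpha_lam0)      # current value of the unconstrained kernel

# For each of the n.sum() observed transitions, draw a geometric number of
# imaginary self-transitions with success probability 1 - lam[self_idx].
aux = rng.geometric(1.0 - lam[self_idx], size=n.sum()) - 1   # support {0, 1, ...}
counts = n.copy()
counts[self_idx] += aux.sum()

# Conjugacy is recovered: the augmented counts give a standard Dirichlet update.
lam_new = rng.dirichlet(alpha_lam0 + counts)

# Equation (15): project back to the constrained (no-self-transition) kernel.
lam_bar = lam_new.copy()
lam_bar[self_idx] = 0.0
lam_bar /= lam_bar.sum()
print(lam_bar)
```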

The second stage, given the important lag inclusion results, is to sample the parameters $\lambda$, $\bar{\lambda}$, $\lambda_0$, $\pi$, $\theta$, $z$, and $s$ iteratively. Given the segmentation, the elements of $\pi$, $\lambda$, and $\lambda_0$ have either multinomial or Dirichlet full conditionals and can be straightforwardly updated. Sampling of $z$, $s$, and the emission parameters $\theta$ is the same as described in the first stage. Details of the HOHSMM inference method are summarized in Algorithm 1.

4 Health States Decoding

The ultimate purpose of a prognostics framework is to assess the current condition of a system (or component) and to make inferences regarding the remaining useful life (RUL). In this section, we first present how to use the HOHSMM-based prognostics framework to decode the hidden super-states. For an operating system with observation sequence $X_{1:t}$, Equation (22) provides the joint distribution of $s$, $z$, $d$, and $X$.

Algorithm 1: The two-stage Gibbs sampling algorithm for HOHSMM inference

1: Input: Observation sequences $X^{(1)}, \dots, X^{(M)}$ and sample size $R$.
2: Initialization:
3:  Compute the distances between adjacent data points in all sequences and use their lower and upper percentile values as the bounds $\tau_L$ and $\tau_U$ of the support $[\tau_L, \tau_U]$.
4:  Set the initial likelihood value $L^{(0)}$.
5: for $i = 1$ to $R$ do
6:   Sample $\tau' \sim \mathrm{Uniform}(\tau_L, \tau_U)$.
7:   Update segmentation:
8:    For each $m$, identify change-points given $\tau'$. Compute the center of the observed data points for each segment and initialize the hidden super-state sequence by clustering the centers. Merge adjacent segments that have the same label and derive the number of segments $N_m$ and duration times $d^{(m)} = (d^{(m)}_1, \dots, d^{(m)}_{N_m})$, where $m = 1, \dots, M$.
9:   Stage 1 (Determine $k$):
10:    Update $\lambda$:
11:     Let $n_{\mathbf{h},c}$ be the transition counts and $\bar{n}_{\mathbf{h},c}$ the augmented counts, where $c = 1, \dots, C$ and $h_j = 1, \dots, k_j$ for $j = 1, \dots, q$.
12:     Independently sample the auxiliary variables $\rho_{\mathbf{h},u} \sim \mathrm{Geometric}(1 - \lambda_{\mathbf{h}}(h_1))$.
13:     Sample $\lambda_{\mathbf{h}} \sim \mathrm{Dir}(\alpha\lambda_0(1) + \bar{n}_{\mathbf{h},1}, \dots, \alpha\lambda_0(C) + \bar{n}_{\mathbf{h},C})$.
14:    Update $\bar{\lambda}$: Compute $\bar{\lambda}$ by Equation (15).
15:    Update $\lambda_0$:
16:     For each $(\mathbf{h}, c)$, sample the auxiliary table counts $m_{\mathbf{h},c}$ as in hierarchical Dirichlet sampling.
17:     Let $m_{\cdot,c} = \sum_{\mathbf{h}} m_{\mathbf{h},c}$ for $c = 1, \dots, C$.
18:     Sample $\lambda_0 \sim \mathrm{Dir}(\gamma/C + m_{\cdot,1}, \dots, \gamma/C + m_{\cdot,C})$.
19:    Update $k$, $z$, $\tilde{\pi}$, $\theta$, $s$ as in [34].
20:   Stage 2 (Sample with determined $k$):
21:    Update $\pi$:
22:     Let $m_{j,c,k} = \#\{n : s_{n-j} = c,\ z_{n,j} = k\}$, where $j = 2, \dots, q$.
23:     Sample $\pi_j^{(c)} \sim \mathrm{Dir}(\gamma_j + m_{j,c,1}, \dots, \gamma_j + m_{j,c,k_j})$.
24:    Update $\lambda$, $\bar{\lambda}$, $\lambda_0$ as in Stage 1.
25:    Update $z$: Sample each $z_{n,j}$ from its multinomial full conditional.
26:    Update $s$: Sample each $s_n$ from its full conditional.
27:    Update $\theta$ as in Stage 1.
28:   Update $\tau$: Compute the likelihood value $L'$ from Equation (21).
29:    if $u \le \min\{1, L'/L^{(i-1)}\}$ with $u \sim \mathrm{Uniform}(0, 1)$ then $\tau^{(i)} \leftarrow \tau'$ and $L^{(i)} \leftarrow L'$.
30:    else $\tau^{(i)} \leftarrow \tau^{(i-1)}$ and $L^{(i)} \leftarrow L^{(i-1)}$.
31:    end if
32: end for
33: Determine $\hat{\tau}$:
34:  Use the average value of the sampled $\tau^{(i)}$ after burn-in as the most likely jump size threshold $\hat{\tau}$.
35:  Given $\hat{\tau}$, repeat Update segmentation, Stage 1, and Stage 2, and collect the final samples.
36:  Compute the MLEs for the duration distribution parameters using the samples of $N_m$ and $d^{(m)}$ for all $m$.
37: Output: $\hat{\tau}$, the final parameter samples, and the MLEs of the duration distribution.
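Finally, if the duration family is taken to be Poisson (the example distribution mentioned in Section 3.1), the MLE step at the end of Algorithm 1 reduces to a per-state sample mean of the decoded durations, as in this toy sketch:

```python
import numpy as np

# Decoded super-states and durations pooled over all sequences (toy values).
states    = np.array([0, 1, 0, 2, 1, 2, 0])
durations = np.array([4, 9, 3, 6, 11, 5, 2])

# Poisson MLE: the rate for each state is the sample mean of its durations.
rates = {int(s): durations[states == s].mean() for s in np.unique(states)}
print(rates)   # {0: 3.0, 1: 10.0, 2: 5.5}
```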