# Reliability Estimation in Coherent Systems

Most methods for evaluating system reliability require engineers to quantify the reliability of each system component. For series and parallel systems, several options exist for estimating each component's reliability. We treat the reliability estimation problem for two classes of coherent systems: series-parallel systems (SPS) and parallel-series systems (PSS). In both cases, the component reliabilities may be unknown. We present estimators for the reliability functions at all levels of the system (component and system reliabilities). Nonparametric Bayesian estimators of all sub-distribution and distribution functions are derived, with a multivariate Dirichlet process as the prior distribution. A parametric estimator of component reliability based on the Weibull model is presented for any kind of system. Some ideas for systems with masked data are also discussed.


### 2.1 Probability relations

In this section, important results and properties of the PSS and SPS are presented. We first present these results for series and parallel systems, which are simpler and will facilitate the understanding of the results for the PSS and SPS.

#### 2.1.1 Series and parallel systems

First consider a parallel system with $m$ components. Let $T_j$ be the failure time of the $j$th component, with marginal distribution function (DF) $F_j$, and let $T=\max\{T_1,\dots,T_m\}$ be the system failure time. The indicator of the component whose failure caused the system to fail is $\delta=j$ when $T=T_j$, $j=1,\dots,m$. The $j$th sub-distribution function evaluated at a time $t$ is the probability that the system fails by time $t$ and the last component to fail is the $j$th one, that is, $F^*_j(t)=P(T\le t,\,\delta=j)$.
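The sub-distribution functions above can be estimated directly from failure data. The short Python sketch below simulates a parallel system (Exp(1) component lifetimes are an illustrative assumption, not from the text) and computes the empirical sub-distributions $F^*_j$:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 3, 5000
# Illustrative Exp(1) component lifetimes; any continuous marginals would do
X = rng.exponential(1.0, size=(n, m))
T = X.max(axis=1)             # a parallel system fails when its LAST component fails
delta = X.argmax(axis=1) + 1  # cause indicator: index of the last component to fail

def sub_df(t, j):
    """Empirical sub-distribution F*_j(t) = P(T <= t, delta = j)."""
    return float(np.mean((T <= t) & (delta == j)))

# The sub-distributions partition the system DF: F(t) = F*_1(t) + ... + F*_m(t)
t0 = 1.5
total = sum(sub_df(t0, j) for j in range(1, m + 1))
```

Note that summing the empirical sub-distributions over $j$ recovers the empirical system DF exactly, which mirrors the identity $F(t)=\sum_j F^*_j(t)$.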

Let $F(t_1,\dots,t_m)$ be the joint distribution function of $(T_1,\dots,T_m)$, for which continuous partial derivatives are assumed in all arguments. The following theorem establishes the relation between the joint distribution function and the $j$th sub-distribution $F^*_j$.

###### Theorem 1

The derivative of $F^*_j(t)$, $\frac{d}{dt}F^*_j(t)$, is equal to the partial derivative of $F$ with respect to its $j$th argument, evaluated at $t_1=\dots=t_m=t$.

Because the lifetimes of the components are assumed to be mutually s-independent,

$$F(t_1,\dots,t_m)=\prod_{j=1}^{m}F_j(t_j). \tag{2.1}$$

Using (2.1) and Theorem 1,

$$\frac{d}{dt}F^*_j(t)=u_j(t)\prod_{\ell=1}^{m}F_\ell(t), \tag{2.2}$$

where $u_j$ is the reversed hazard rate (RHR) of the $j$th component:

$$u_j(t)=\frac{f_j(t)}{F_j(t)}=\frac{d}{dt}\ln F_j(t). \tag{2.3}$$

From (2.3) one can write

$$F_j(t)=\exp\left\{-\int_t^\infty u_j(y)\,dy\right\}. \tag{2.4}$$

Letting $u(t)=\sum_{j=1}^m u_j(t)$, (2.2) becomes

$$\frac{d}{dt}F^*_j(t)=u_j(t)\exp\left\{-\int_t^\infty u(y)\,dy\right\}. \tag{2.5}$$

Summing over $j=1,\dots,m$ on both sides of (2.5), we obtain

$$\sum_{j=1}^m\frac{d}{dt}F^*_j(t) = u(t)\exp\left\{-\int_t^\infty u(y)\,dy\right\} = \frac{d}{dt}\exp\left\{-\int_t^\infty u(y)\,dy\right\}. \tag{2.6}$$

Consequently,

$$\sum_{j=1}^m F^*_j(t)=\exp\left\{-\int_t^\infty u(y)\,dy\right\},$$

which combined with (2.5) leads to

$$u_j(t)=\frac{dF^*_j(t)/dt}{\sum_{\ell=1}^m F^*_\ell(t)}. \tag{2.7}$$

Finally, (2.4) implies

$$F_j(t)=\exp\left\{-\int_t^\infty\frac{dF^*_j(y)}{\sum_{\ell=1}^m F^*_\ell(y)}\right\}, \tag{2.8}$$

that is, the relationship of interest between marginal distribution functions and sub-distribution functions.
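Relation (2.8) can be checked numerically. The sketch below assumes a parallel system with three s-independent components and exponential marginals (an illustrative choice); it recovers each $F_j(t)$ from the sub-distributions alone via the right-hand side of (2.8):

```python
import numpy as np

# Numerical check of (2.8) for a parallel system of three s-independent
# components with Exp(lam_j) marginals (an illustrative choice).
lam = np.array([1.0, 2.0, 0.5])

def F(y, j):                       # marginal DF of component j
    return 1.0 - np.exp(-lam[j] * y)

def dFstar(y, j):
    """Sub-distribution density d/dy F*_j(y) = u_j(y) * prod_l F_l(y), eq. (2.2)."""
    u_j = lam[j] * np.exp(-lam[j] * y) / F(y, j)   # reversed hazard rate, eq. (2.3)
    return u_j * np.prod([F(y, l) for l in range(3)], axis=0)

t0 = 0.8
y = np.linspace(t0, 60.0, 200_001)
sum_Fstar = np.prod([F(y, l) for l in range(3)], axis=0)  # = F(y) = sum_j F*_j(y)

def F_from_subs(j):
    """Right-hand side of (2.8): exp{-int_t^inf dF*_j / sum_l F*_l}."""
    vals = dFstar(y, j) / sum_Fstar
    integral = float(np.sum((vals[1:] + vals[:-1]) * np.diff(y)) / 2.0)
    return float(np.exp(-integral))
```

For these marginals the integrand reduces to the reversed hazard rate $u_j$, so the check is simply a numerical version of (2.4).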

Unfortunately, the expression in (2.8) does not hold when the distribution functions have jump points. To obtain a version of (2.8) valid in the presence of jumps, we introduce the following definition and theorem.

###### Definition 1

For simplicity, consider the case $m=2$. The function $\Phi$, based on the sub-distributions $F^*_1$ and $F^*_2$, is

where the integral is taken over disjoint open intervals that do not include the jump points of the sub-distributions, and the product is taken over those jump points.

The next result, although restricted to $m=2$, extends expression (2.8) in the sense that it can accommodate disjoint jump points.

###### Theorem 2

The sub-distribution functions $F^*_1$ and $F^*_2$ determine (uniquely) the distribution functions $F_j$, for $j=1,2$, through $\Phi$.

An analogous development can be carried out for a series system with $m$ components, in which $T=\min\{T_1,\dots,T_m\}$ and $\delta=j$ if $T=T_j$. The version of (2.8) for a series system is given by (Salinas-Torres et al., 2002):

$$F_j(t)=1-\exp\left\{\int_0^t\frac{dR^*_j(y)}{\sum_{\ell=1}^m R^*_\ell(y)}\right\}, \tag{2.9}$$

in which $R^*_j(t)=P(T>t,\,\delta=j)$ is the sub-reliability function of the $j$th component.
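Expression (2.9) can be verified in the same numerical fashion. The sketch below assumes a series system of three s-independent exponential components (illustrative rates) and recovers each marginal DF from the sub-reliability functions:

```python
import numpy as np

# Numerical check of (2.9) for a series system of three s-independent
# components with Exp(lam_j) marginals (an illustrative choice).
lam = np.array([1.0, 2.0, 0.5])

def R(y, j):                       # marginal reliability of component j
    return np.exp(-lam[j] * y)

def dRstar(y, j):
    """d/dy R*_j(y) = -h_j(y) * prod_l R_l(y): component j is the first to fail."""
    return -lam[j] * np.prod([R(y, l) for l in range(3)], axis=0)

t0 = 0.8
y = np.linspace(0.0, t0, 100_001)
sum_Rstar = np.prod([R(y, l) for l in range(3)], axis=0)   # = R(y) = sum_j R*_j(y)

def F_from_subrel(j):
    """Right-hand side of (2.9): 1 - exp{int_0^t dR*_j / sum_l R*_l}."""
    vals = dRstar(y, j) / sum_Rstar                         # = -h_j(y)
    integral = float(np.sum((vals[1:] + vals[:-1]) * np.diff(y)) / 2.0)
    return float(1.0 - np.exp(integral))
```

Here the integrand reduces to minus the hazard rate of component $j$, so (2.9) reproduces $F_j(t)=1-e^{-\lambda_j t}$ up to numerical error.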

Unfortunately, the expression in (2.9) does not hold when the distribution functions have jump points. To obtain a version of (2.9) valid in the presence of jumps, we introduce the following definition and theorem.

###### Definition 2

For simplicity, consider the case $m=2$. The function based on the sub-reliability functions $R^*_1$ and $R^*_2$ is

where the integral is taken over disjoint open intervals that do not include the jump points of the sub-reliability functions, and the product is taken over those jump points.

The next result, although restricted to $m=2$, extends expression (2.9) in the sense that it can accommodate disjoint jump points.

###### Theorem 3

The sub-reliability functions $R^*_1$ and $R^*_2$ determine (uniquely) the distribution functions $F_j$, for $j=1,2$.

More details about the relations among the distribution functions and the sub-distribution (sub-reliability) functions can be found in Salinas-Torres et al. (2002) and Polpo and Sinha (2011) for series systems, and in Polpo and Pereira (2009) for parallel systems.

In the next subsection, the relations among the distribution and sub-distribution functions are presented for a more general class of systems: the SPS and the PSS.

#### 2.1.2 PSS and SPS

We restrict ourselves to a system with three components ($m=3$), as given in Figure 1.3 and Figure 1.4.

Let $T_1$, $T_2$, and $T_3$ be the lifetimes of the three components of a PSS or an SPS, with marginal distribution functions (DF) $F_1$, $F_2$, and $F_3$, respectively. The restriction here is that the three sets of jump points of $F_1$, $F_2$, and $F_3$ must be disjoint. The indicator of the component whose failure caused the system to fail is $\delta=1$ when $T=T_1$, $\delta=2$ when $T=T_2$, and $\delta=3$ when $T=T_3$. Let $F^*_j$ be the sub-distribution function of the $j$th component and $F$ the distribution function of the system. The following properties can be proved.

###### Property 1

The sub-distribution functions (SDF) $F^*_1$, $F^*_2$, and $F^*_3$ determine the DF $F$ of the system,

$$F(t)=F^*_1(t)+F^*_2(t)+F^*_3(t). \tag{2.10}$$
###### Property 2

1. ;

2. ;

3. ;

4. .

###### Property 3

The sets of jump points of $F$ and of $F^*_1+F^*_2+F^*_3$ are the same. Because $F_1$, $F_2$, and $F_3$ have disjoint sets of jump points, so do $F^*_1$, $F^*_2$, and $F^*_3$.

###### Property 4

If $F(t)<1$ for $t<t^*$, and $F(t)=1$ for $t\ge t^*$, then $t^*$ is the largest support point of the system.

The lifetime of the SPS is $T=\min\{T_1,\max\{T_2,T_3\}\}$ and the system reliability for s-independent components is

$$R(t)=[1-F_1(t)][1-F_2(t)F_3(t)]. \tag{2.11}$$

The lifetime of the PSS is $T=\max\{T_1,\min\{T_2,T_3\}\}$ and the system reliability for s-independent components is

$$R(t)=1-F_1(t)\{1-[1-F_2(t)][1-F_3(t)]\}. \tag{2.12}$$
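Formulas (2.11) and (2.12) are easy to confirm by Monte Carlo. The sketch below assumes s-independent exponential lifetimes with illustrative rates and compares the empirical survival probabilities of both systems with the closed forms:

```python
import numpy as np

# Monte Carlo check of (2.11) and (2.12) with s-independent Exp(lam_j)
# component lifetimes (illustrative rates).
rng = np.random.default_rng(11)
n = 200_000
lam = np.array([1.0, 2.0, 0.5])
X = rng.exponential(1.0 / lam, size=(n, 3))
t0 = 0.7
F = 1.0 - np.exp(-lam * t0)                       # marginal DFs at t0

# SPS: component 1 in series with the parallel pair (2, 3)
T_sps = np.minimum(X[:, 0], np.maximum(X[:, 1], X[:, 2]))
R_sps = (1 - F[0]) * (1 - F[1] * F[2])            # eq. (2.11)

# PSS: component 1 in parallel with the series pair (2, 3)
T_pss = np.maximum(X[:, 0], np.minimum(X[:, 1], X[:, 2]))
R_pss = 1 - F[0] * (1 - (1 - F[1]) * (1 - F[2]))  # eq. (2.12)
```

With 200,000 replications the Monte Carlo error is of order $10^{-3}$, so the empirical and analytical reliabilities agree to about two decimal places.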
###### Property 5

The SDF of the SPS can be expressed using the marginal DF of the components by

$$\begin{aligned}
F^*_1(t) &= \int_0^t\left[1-F_2(y)F_3(y)\right]dF_1(y),\\
F^*_2(t) &= \int_0^t\left[1-F_1(y)\right]F_3(y)\,dF_2(y),\\
F^*_3(t) &= \int_0^t\left[1-F_1(y)\right]F_2(y)\,dF_3(y),
\end{aligned} \tag{2.13}$$

and the SDF of the PSS can be expressed using the marginal DF of the components by

$$\begin{aligned}
F^*_1(t) &= \int_0^t\left\{1-\left[1-F_2(y)\right]\left[1-F_3(y)\right]\right\}dF_1(y),\\
F^*_2(t) &= \int_0^t F_1(y)\left[1-F_3(y)\right]dF_2(y),\\
F^*_3(t) &= \int_0^t F_1(y)\left[1-F_2(y)\right]dF_3(y).
\end{aligned} \tag{2.14}$$
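The integral representations in (2.13) can likewise be checked against simulation. The sketch below assumes an SPS with s-independent exponential lifetimes (illustrative rates), classifies the cause of each system failure, and compares the empirical sub-distributions with numerical evaluations of (2.13):

```python
import numpy as np

# Monte Carlo check of the SPS sub-distributions in (2.13), with
# s-independent Exp(lam_j) lifetimes (illustrative rates).
rng = np.random.default_rng(3)
n = 400_000
lam = np.array([1.0, 2.0, 0.5])
X = rng.exponential(1.0 / lam, size=(n, 3))

par23 = np.maximum(X[:, 1], X[:, 2])
T = np.minimum(X[:, 0], par23)                     # SPS failure time
# Cause of system failure: 1 if component 1 fails while (2,3) still works;
# otherwise the later of components 2 and 3
delta = np.where(X[:, 0] <= par23, 1, np.where(X[:, 1] > X[:, 2], 2, 3))

F = lambda y, j: 1.0 - np.exp(-lam[j] * y)
f = lambda y, j: lam[j] * np.exp(-lam[j] * y)

t0 = 0.9
y = np.linspace(0.0, t0, 100_001)
w = np.diff(y)

def trap(v):
    """Trapezoid integral of v along the grid y."""
    return float(np.sum((v[1:] + v[:-1]) * w) / 2.0)

# Right-hand sides of (2.13) (0-based indexing: component j is column j-1)
Fstar = [
    trap((1 - F(y, 1) * F(y, 2)) * f(y, 0)),
    trap((1 - F(y, 0)) * F(y, 2) * f(y, 1)),
    trap((1 - F(y, 0)) * F(y, 1) * f(y, 2)),
]
```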

Our interest is in obtaining the inverses of (2.13) and (2.14); that is, in expressing the DFs $F_1$, $F_2$, and $F_3$ as functions of the SDFs ($F^*_1$, $F^*_2$, $F^*_3$). These inverses are presented in the following definitions and theorems.

###### Definition 3

The functions $\Phi_s$ and $\Phi_p$, based on the sub-distributions $F^*_1$, $F^*_2$, and $F^*_3$, are

$$\Phi_s(F^*_1,F^*_2,F^*_3,t) \equiv 1-\left\{\exp\left[\int_0^t\frac{-\,dF^*_1(v)}{1-\sum_{j=1}^3 F^*_j(v)}\right]\prod_{\substack{v\le t\\ v\in D_{F^*_1}}}\left[\frac{1-\sum_{j=1}^3 F^*_j(v^+)}{1-\sum_{j=1}^3 F^*_j(v^-)}\right]\right\},$$

$$\Phi_p(F^*_1,F^*_2,F^*_3,t) \equiv \exp\left[\int_t^\infty\frac{-\,dF^*_1(v)}{\sum_{j=1}^3 F^*_j(v)}\right]\prod_{\substack{v> t\\ v\in D_{F^*_1}}}\left[\frac{\sum_{j=1}^3 F^*_j(v^-)}{\sum_{j=1}^3 F^*_j(v^+)}\right],$$

where $D_{F^*_1}$ denotes the set of jump points of $F^*_1$ and the integrals are taken over the intervals that exclude these jump points.

The functions $\Phi_s$ (for a series system) and $\Phi_p$ (for a parallel system) are the three-component versions of those presented in Polpo and Sinha (2011) and Polpo and Pereira (2009), respectively. First, Theorem 4 states the relation between $F_1$ and $F^*_1$, $F^*_2$, and $F^*_3$.

###### Theorem 4

The SDF $F^*_1$, $F^*_2$, and $F^*_3$ determine (uniquely) the DF $F_1$ of an SPS by $F_1(t)=\Phi_s(F^*_1,F^*_2,F^*_3,t)$, and the DF $F_1$ of a PSS by $F_1(t)=\Phi_p(F^*_1,F^*_2,F^*_3,t)$.

The next definition gives the functions $\Phi_{sp}$ (for the SPS) and $\Phi_{ps}$ (for the PSS).

###### Definition 4

The functions $\Phi_{sp}$ and $\Phi_{ps}$, based on the sub-distributions $F^*_1$, $F^*_2$, and $F^*_3$, are

$$\Phi_{sp}(F^*_1,F^*_2,F^*_3,t) \equiv \exp\left[\int_t^\infty\frac{-\,dF^*_2(v)}{\sum_{j=1}^3 F^*_j(v)-\Phi_s(F^*_1,F^*_2,F^*_3,v)}\right]\prod_{\substack{v>t\\ v\in D_{F^*_2}}}\left[\frac{\sum_{j=1}^3 F^*_j(v^-)-\Phi_s(F^*_1,F^*_2,F^*_3,v^-)}{\sum_{j=1}^3 F^*_j(v^+)-\Phi_s(F^*_1,F^*_2,F^*_3,v^+)}\right],$$

$$\Phi_{ps}(F^*_1,F^*_2,F^*_3,t) \equiv 1-\exp\left[\int_0^t\frac{-\,dF^*_2(v)}{\Phi_p(F^*_1,F^*_2,F^*_3,v)-\sum_{j=1}^3 F^*_j(v)}\right]\prod_{\substack{v\le t\\ v\in D_{F^*_2}}}\left[\frac{\Phi_p(F^*_1,F^*_2,F^*_3,v)-\sum_{j=1}^3 F^*_j(v^+)}{\Phi_p(F^*_1,F^*_2,F^*_3,v)-\sum_{j=1}^3 F^*_j(v^-)}\right],$$

where $D_{F^*_2}$ denotes the set of jump points of $F^*_2$.
###### Theorem 5

The SDF $F^*_1$, $F^*_2$, and $F^*_3$ determine (uniquely) the DF $F_2$ of an SPS by $F_2(t)=\Phi_{sp}(F^*_1,F^*_2,F^*_3,t)$, and the DF $F_2$ of a PSS by $F_2(t)=\Phi_{ps}(F^*_1,F^*_2,F^*_3,t)$.

Note that Theorem 5 can easily be rewritten to obtain the relation between the DF $F_3$ and the SDF.

The proofs of Theorems 4 and 5 can be seen in Polpo et al. (2013).

Theorem 5 provides an important relation between the SDF and the DF, for both the SPS and the PSS. Using this result, in the next section we develop the nonparametric Bayesian estimator for the DF of the system's components.

### 2.2 Bayesian Analysis

This section describes a Bayesian reliability approach to the SPS and PSS. We derive a nonparametric Bayesian estimator of the distribution function using the multivariate Dirichlet process
(Salinas-Torres et al., 2002). From Property 1, the sub-distribution functions are related to the system distribution function by a sum. Considering that $F(t)=F^*_1(t)+F^*_2(t)+F^*_3(t)$, the four quantities $F^*_1(t)$, $F^*_2(t)$, $F^*_3(t)$, and $1-F(t)$ sum to $1$, so the set of possible points for them is the four-dimensional simplex (or its non-singular form). In this case, for a fixed $t$, a natural prior choice is the Dirichlet distribution, and letting $t$ vary, we have the Dirichlet multivariate process. In this section we present a nonparametric estimator for the distribution functions of the components in an SPS or a PSS; using the Dirichlet process, we have a complete distribution for the set $(F^*_1,F^*_2,F^*_3)$. In this case, our parameters are the functions that we want to estimate, giving us a nonparametric framework.

Consider a sample of size $n$ with observed data $(T_i,\delta_i)$, $i=1,\dots,n$, in which $T_i=\min\{T_{1i},\max\{T_{2i},T_{3i}\}\}$ for the SPS and $T_i=\max\{T_{1i},\min\{T_{2i},T_{3i}\}\}$ for the PSS. Besides, $\delta_i=j$ if $T_i=T_{ji}$, for $j=1,2,3$ and $i=1,\dots,n$. Equivalently, for each $t$, the random variables

$$nF^*_{jn}(t)=\sum_{i=1}^n I(T_i\le t,\,\delta_i=j),\quad\text{for } j=1,2,3,$$

are observed, in which $I(A)$ is the indicator function of the set $A$.

The function $F^*_{jn}$ is the empirical sub-distribution function of the $j$th component. If $F_n$ is the empirical distribution function corresponding to the observations $T_1,\dots,T_n$, then for each $t$,

$$nF_n(t)=nF^*_{1n}(t)+nF^*_{2n}(t)+nF^*_{3n}(t).$$

For each $t$, let $k_j(t)$ be the realization of $nF^*_{jn}(t)$, in which

$$k_j(t)=\sum_{i=1}^n I(t_i\le t,\,\delta_i=j),\quad\text{for } j=1,2,3.$$

In this context, for each $t$, the likelihood function corresponds to that of a multinomial model with cell probabilities $F^*_j(t)$, for $j=1,2,3$, and $1-F(t)$, that is,

$$L = P\big(nF^*_{1n}=k_1(t),\,nF^*_{2n}=k_2(t),\,nF^*_{3n}=k_3(t)\big) \propto\left[F^*_1(t)\right]^{k_1(t)}\left[F^*_2(t)\right]^{k_2(t)}\left[F^*_3(t)\right]^{k_3(t)}\left[1-F(t)\right]^{n-\sum_{j=1}^3 k_j(t)}. \tag{2.15}$$

The prior distribution for $(F^*_1,F^*_2,F^*_3)$ is constructed from the characterization of the multivariate Dirichlet process defined in
Salinas-Torres et al. (1997), which admits the following simplified version.

###### Definition 5

Let $\mathcal{X}$ be a sample space, $\alpha_1,\dots,\alpha_k$ be finite positive measures defined over $\mathcal{X}$, and $(W_1,\dots,W_k)$ be a random vector having a Dirichlet distribution with parameters $(\alpha_1(\mathcal{X}),\dots,\alpha_k(\mathcal{X}))$. Consider $k$ Dirichlet processes $P_1,\dots,P_k$, with $P_j\sim\mathcal{D}(\alpha_j)$, $j=1,\dots,k$. All these processes and $(W_1,\dots,W_k)$ are mutually s-independent random quantities. Define $P=(W_1P_1,\dots,W_kP_k)$. Then $P$ is a Dirichlet multivariate process with parameter measures $\alpha_1,\dots,\alpha_k$.

In the context of the SPS and PSS, consider $\mathcal{X}=(0,\infty)$ and the measures $\alpha_1$, $\alpha_2$, and $\alpha_3$. Then the vector of component sub-distribution functions is $F^*=(F^*_1,F^*_2,F^*_3)$ and the prior distribution is given by

$$F^*(t)\sim\mathcal{D}\left(\alpha_1(0,t],\,\alpha_2(0,t],\,\alpha_3(0,t];\,\sum_{j=1}^3\alpha_j(t,\infty)\right). \tag{2.16}$$

Combining the prior distribution (2.16) and the likelihood function (2.15), the posterior distribution of $(F^*_1(t),F^*_2(t),F^*_3(t))$ is, for each $t$,

$$\big(F^*_1(t),F^*_2(t),F^*_3(t)\big)\,\big|\,\text{Data}\sim\mathcal{D}\Bigg(\alpha_1(0,t]+nF^*_{1n}(t),\;\alpha_2(0,t]+nF^*_{2n}(t),\;\alpha_3(0,t]+nF^*_{3n}(t);\;\sum_{j=1}^3\alpha_j(t,\infty)+n-\sum_{j=1}^3 nF^*_{jn}(t)\Bigg).$$

Thus, the posterior means of $F^*_j(t)$ and $F(t)$ are given by

$$\hat F^*_j(t)=p_\alpha\frac{\alpha_j(0,t]}{\sum_{\ell=1}^3\alpha_\ell(0,\infty)}+(1-p_\alpha)F^*_{jn}(t), \tag{2.17}$$

where $p_\alpha=\sum_{\ell=1}^3\alpha_\ell(0,\infty)\big/\left(n+\sum_{\ell=1}^3\alpha_\ell(0,\infty)\right)$, and

$$\hat F(t)=\sum_{j=1}^3\hat F^*_j(t). \tag{2.18}$$
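The posterior mean (2.17) is a weighted average of the prior guess and the empirical SDF. The sketch below illustrates it on simulated SPS data; the prior measures used ($\alpha_j(0,t]=c_j(1-e^{-t})$ with masses $c_j$) are hypothetical choices, not prescribed by the text:

```python
import numpy as np

# Sketch of the posterior-mean estimator (2.17)-(2.18) on simulated SPS data.
# The prior measures alpha_j below are hypothetical choices with total mass 1.
rng = np.random.default_rng(5)
n = 500
lam = np.array([1.0, 2.0, 0.5])
X = rng.exponential(1.0 / lam, size=(n, 3))
par23 = np.maximum(X[:, 1], X[:, 2])
T = np.minimum(X[:, 0], par23)                     # SPS lifetimes
delta = np.where(X[:, 0] <= par23, 1, np.where(X[:, 1] > X[:, 2], 2, 3))

c = np.array([0.4, 0.3, 0.3])                      # hypothetical masses alpha_j(0, inf)
alpha = lambda t, j: c[j] * (1.0 - np.exp(-t))     # hypothetical alpha_j(0, t]
total_mass = float(c.sum())                        # sum_l alpha_l(0, inf) = 1

p_alpha = total_mass / (n + total_mass)            # prior weight in (2.17)

def F_star_hat(t, j):
    """Posterior mean of F*_j(t), eq. (2.17): prior guess mixed with empirical SDF."""
    emp = float(np.mean((T <= t) & (delta == j + 1)))
    return p_alpha * alpha(t, j) / total_mass + (1 - p_alpha) * emp

F_hat = sum(F_star_hat(0.9, j) for j in range(3))  # eq. (2.18)
```

Because $p_\alpha\approx 1/(n+1)$ here, the estimator stays close to the empirical DF, as the consistency result below leads one to expect.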

These Bayesian estimators are strongly s-consistent. For instance, using the Glivenko-Cantelli theorem (Billingsley, 1985), it can be shown that $F^*_{jn}$ converges to $F^*_j$ uniformly with probability 1.

If $\rho_j=P(\delta=j)$, the Bayesian estimator of $\rho_j$ is given by

$$\hat\rho_j=\lim_{t\uparrow\infty}\hat F^*_j(t)=\frac{\alpha_j(0,\infty)}{n+\sum_{\ell=1}^3\alpha_\ell(0,\infty)}+\frac{\sum_{i=1}^n I(\delta_i=j)}{n+\sum_{\ell=1}^3\alpha_\ell(0,\infty)}. \tag{2.19}$$
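Estimator (2.19) only needs the cause counts and the prior masses. A small sketch, again with simulated SPS data and hypothetical prior masses $a_j=\alpha_j(0,\infty)$:

```python
import numpy as np

# Sketch of the estimator (2.19) of rho_j = P(delta = j) on simulated SPS data;
# the prior masses a_j = alpha_j(0, inf) are hypothetical.
rng = np.random.default_rng(9)
n = 1000
lam = np.array([1.0, 2.0, 0.5])
X = rng.exponential(1.0 / lam, size=(n, 3))
par23 = np.maximum(X[:, 1], X[:, 2])
delta = np.where(X[:, 0] <= par23, 1, np.where(X[:, 1] > X[:, 2], 2, 3))

a = np.array([0.4, 0.3, 0.3])                     # hypothetical alpha_j(0, inf)
denom = n + a.sum()
rho_hat = np.array([(a[j] + np.sum(delta == j + 1)) / denom for j in range(3)])
```

Since the cause counts sum to $n$ and the prior masses sum to $\sum_\ell a_\ell$, the estimates $\hat\rho_j$ always form a proper probability vector.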

Let the $r$ distinct order statistics of $T_1,\dots,T_n$ be $T_{\bullet(1)}<\dots<T_{\bullet(r)}$. Set $N_i=\#\{\ell:T_\ell<T_{\bullet(i)}\}$ and $d_{ji}=\#\{\ell:T_\ell=T_{\bullet(i)},\,\delta_\ell=j\}$, for $j=1,2,3$. Define

$$I_s(t)=\exp\left\{\frac{1}{\sum_{j=1}^3\alpha_j(0,\infty)+n}\int_0^t\frac{-\,d\alpha_1(0,s]}{1-\hat F(s)}\right\}, \tag{2.20}$$

$$\Pi_s(t)=\prod_{i:\,T_{\bullet(i)}\le t}\frac{\sum_{j=1}^3\alpha_j(T_{\bullet(i)},\infty)+n-N_i-d_{1i}}{\sum_{j=1}^3\alpha_j(T_{\bullet(i)},\infty)+n-N_i}, \tag{2.21}$$

$$I_{sp}(t)=\exp\left\{\frac{1}{\sum_{j=1}^3\alpha_j(0,\infty)+n}\int_t^\infty\frac{-\,d\alpha_2(0,s]}{\hat F(s)-\hat F_1(s)}\right\}, \tag{2.22}$$

$$\Pi_{sp}(t)=\prod_{i:\,T_{\bullet(i)}>t}\frac{\dfrac{\sum_{j=1}^3\alpha_j(0,T_{\bullet(i)}]+N_i}{n+\sum_{j=1}^3\alpha_j(0,\infty)}-\hat F_1(T_{\bullet(i)})}{\dfrac{\sum_{j=1}^3\alpha_j(0,T_{\bullet(i)}]+N_i+d_{2i}}{n+\sum_{j=1}^3\alpha_j(0,\infty)}-\hat F_1(T_{\bullet(i)})}, \tag{2.23}$$

$$I_p(t)=\exp\left\{\frac{1}{\sum_{j=1}^3\alpha_j(0,\infty)+n}\int_t^\infty\frac{-\,d\alpha_1(0,s]}{\hat F(s)}\right\}, \tag{2.24}$$

$$\Pi_p(t)=\prod_{i:\,T_{\bullet(i)}>t}\frac{\sum_{j=1}^3\alpha_j(0,T_{\bullet(i)}]+N_i}{\sum_{j=1}^3\alpha_j(0,T_{\bullet(i)}]+N_i+d_{1i}}, \tag{2.25}$$

$$I_{ps}(t)=\exp\left\{\frac{1}{\sum_{j=1}^3\alpha_j(0,\infty)+n}\int_0^t\frac{-\,d\alpha_2(0,s]}{\hat F_1(s)-\hat F(s)}\right\}, \tag{2.26}$$

and

$$\Pi_{ps}(t)=\prod_{i:\,T_{\bullet(i)}\le t}\frac{\hat F_1(T_{\bullet(i)})-\dfrac{\sum_{j=1}^3\alpha_j(0,T_{\bullet(i)}]+N_i+d_{2i}}{n+\sum_{j=1}^3\alpha_j(0,\infty)}}{\hat F_1(T_{\bullet(i)})-\dfrac{\sum_{j=1}^3\alpha_j(0,T_{\bullet(i)}]+N_i}{n+\sum_{j=1}^3\alpha_j(0,\infty)}}. \tag{2.27}$$

The main result of this study is given in the following theorem.

###### Theorem 6

Suppose that $\alpha_1$, $\alpha_2$, and $\alpha_3$ are continuous on $(0,\infty)$ and that $F^*_1$, $F^*_2$, and $F^*_3$ have no common discontinuities. Then, for $t\ge 0$ and an SPS, we have that

$$\hat F_1(t) = E\left[F_1(t)\,|\,\text{data}\right] = \Phi_s(\hat F^*_1,\hat F^*_2,\hat F^*_3,t)=1-I_s(t)\Pi_s(t),$$
$$\hat F_2(t) = E\left[F_2(t)\,|\,\text{data}\right] = \Phi_{sp}(\hat F^*_1,\hat F^*_2,\hat F^*_3,t)=I_{sp}(t)\Pi_{sp}(t);$$

and, for a PSS,

$$\hat F_1(t) = E\left[F_1(t)\,|\,\text{data}\right] = \Phi_p(\hat F^*_1,\hat F^*_2,\hat F^*_3,t)=I_p(t)\Pi_p(t),$$
$$\hat F_2(t) = E\left[F_2(t)\,|\,\text{data}\right] = \Phi_{ps}(\hat F^*_1,\hat F^*_2,\hat F^*_3,t)=1-I_{ps}(t)\Pi_{ps}(t).$$

Here, $\hat F_1$ and $\hat F_2$ are the nonparametric estimators of $F_1$ and $F_2$, respectively, based on posterior means.

As in Theorem 5, it is straightforward to express the nonparametric estimator of $F_3$. In the next section, we extend the estimators to the general case of $m\ge4$.

### 2.3 Bayesian estimator for m≥4

The extension of the nonparametric Bayesian estimator for the SPS and PSS given in Section 2.2 is based on rewriting the system representation, reducing the general case ($m\ge4$) to the one with $m=3$, whose solution is given in Theorem 6. Considering the SPS and the PSS presented in Fig. 2.1, we specify in the following how to rewrite the system representation and estimate the component reliabilities.

We show how to estimate the reliability of one chosen component (for the SPS) and of one chosen component (for the PSS), because the reliability estimation of the other components is straightforward once these two are given. The idea of the extension is to represent the systems in a simple version with three components (Figures 1.3 and 1.4). In this case, to estimate the reliability of the first chosen component, we use the SPS solution, grouping the remaining components accordingly (Figure 2.2); and for the second, we use the PSS solution with an analogous grouping (Figure 2.2). It must be noted that other, more complex systems can also be considered; the task is only to simplify the representation of the system to either the PSS or the SPS given in Figures 1.3 and 1.4.

Furthermore, both classes (SPS and PSS) are important in order to have a more general solution, because of the restriction that two different components cannot have the same failure time; different representations thus give more options for the reliability estimation problem. Considering the PSS given in Figure 2.1, we can write its SPS representation as presented in Figure 2.3. The component reliabilities of the original PSS (Figure 2.1) can be estimated using the PSS result of Theorem 6, which has a simple solution. However, as the SPS representation (Figure 2.3) has some repeated components, the SPS result of Theorem 6 is not applicable. Thus, the solutions for both the SPS and PSS are important and can be used in different situations.

### 2.4 Simulated datasets

This section presents two examples to demonstrate the estimation steps and to show the quality of the Bayesian nonparametric estimator. The estimation steps for the PSS are very similar to those for the SPS, and for the sake of brevity, we have omitted them. The estimation steps for SPS are as follows.

1. Defining priors: The prior measures ($\alpha_1$, $\alpha_2$, $\alpha_3$) are prior guesses of the SDF ($F^*_1$, $F^*_2$, $F^*_3$), but it is not simple to elicit these measures directly. It is easier to elicit priors for the DF ($F_1$, $F_2$, $F_3$) and use (2.13) for the SPS to evaluate the prior measures (for the PSS we can use (2.14)). We chose the exponential distribution with mean 1 as the prior guess for each of the three component DFs. By evaluating the prior measures using (2.13), we obtain $\alpha_1$, $\alpha_2$, and $\alpha_3$. Note that this prior is not very informative because the measure of the whole parameter space is only one ($\sum_{j=1}^3\alpha_j(0,\infty)=1$). Also, we have that $\alpha_1(0,\infty)=2/3$ and $\alpha_2(0,\infty)=\alpha_3(0,\infty)=1/6$.

2. Obtaining posteriors: The posterior processes for the SDF are obtained as in Section 2.2; from (2.17), we have

$$\hat F^*_1(t) = \frac{nF^*_{1n}(t)+\alpha_1(0,t]}{n+1},$$
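The prior elicitation in step 1 can be sketched numerically. The code below (a Python/NumPy sketch, not part of the original text) takes the Exp(1) prior guesses for the three component DFs and evaluates the prior measures $\alpha_j(0,t]$ for the SPS via (2.13):

```python
import numpy as np

# Sketch of step 1: evaluating the prior measures alpha_j(0, t] from (2.13)
# with the Exp(1) (mean-1) prior guesses for the component DFs used in the text.
F = lambda y: 1.0 - np.exp(-y)     # prior guess, the same for all three components
f = lambda y: np.exp(-y)

y = np.linspace(0.0, 60.0, 600_001)
w = np.diff(y)

def cum_trap(v):
    """Cumulative trapezoid integral of v along the grid y."""
    return np.concatenate([[0.0], np.cumsum((v[1:] + v[:-1]) * w / 2.0)])

alpha1 = cum_trap((1 - F(y) ** 2) * f(y))      # alpha_1(0, t], t on the grid
alpha2 = cum_trap((1 - F(y)) * F(y) * f(y))    # alpha_2(0, t]
alpha3 = alpha2.copy()                         # equals alpha_2 by symmetry of 2 and 3

total = alpha1[-1] + alpha2[-1] + alpha3[-1]   # approx. sum_j alpha_j(0, inf)
```

The total prior mass comes out as $2/3+1/6+1/6=1$, confirming that the measure of the whole parameter space is one, so the prior is weakly informative.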