2.1 Probability relations
In this section, important results and properties of the PSS and SPS are presented. We first present the corresponding results for series and parallel systems, which are simpler and will facilitate the understanding of the results for the PSS and SPS.
2.1.1 Series and parallel systems
First consider a parallel system with $m$ components. Let $X_j$ be the failure time of the $j$th component, with marginal distribution function (DF) $F_j$, and let $T = \max_{1 \le j \le m} X_j$ be the system failure time. The indicator of the component whose failure caused the system to fail is $\delta = j$ when $T = X_j$, $j = 1, \dots, m$. The $j$th subdistribution function evaluated at a time $t$ is the probability that the system survives at most to time $t$ and the last component to fail is the $j$th one, that is, $F_j^*(t) = P(T \le t, \delta = j)$.
Let
$F(t_1, \dots, t_m) = P(X_1 \le t_1, \dots, X_m \le t_m)$
be the joint distribution function, for which continuous partial derivatives are assumed in all arguments. The following theorem establishes the relation between the joint distribution function $F$ and the $j$th subdistribution $F_j^*$.
Theorem 1
The derivative of $F_j^*$, $j = 1, \dots, m$, is equal to the partial derivative of $F$ with respect to the $j$th argument, evaluated at the diagonal:
$\frac{d}{dt} F_j^*(t) = \left.\frac{\partial}{\partial t_j} F(t_1, \dots, t_m)\right|_{t_1 = \cdots = t_m = t}$.
Because the lifetimes of the components are assumed to be mutually independent,
$F(t_1, \dots, t_m) = \prod_{j=1}^m F_j(t_j)$.  (2.1)
Using (2.1) and Theorem 1,
$\frac{d}{dt} F_j^*(t) = f_j(t) \prod_{i \ne j} F_i(t) = r_j(t) \prod_{i=1}^m F_i(t)$,  (2.2)
where $r_j$ is the reversed hazard rate (RHR) of the $j$th component:
$r_j(t) = \frac{f_j(t)}{F_j(t)} = \frac{d}{dt} \ln F_j(t)$,  (2.3)
with $f_j$ the density of $F_j$.
From (2.3) one can write
$F_j(t) = \exp\left\{-\int_t^\infty r_j(s)\,ds\right\}$.  (2.4)
Letting $F_T(t) = P(T \le t) = \prod_{i=1}^m F_i(t)$, (2.2) becomes
$dF_j^*(t) = r_j(t)\, F_T(t)\, dt$.  (2.5)
Taking now the sum over $j = 1, \dots, m$ on both sides of (2.5), we obtain
$\sum_{j=1}^m dF_j^*(t) = F_T(t) \sum_{j=1}^m r_j(t)\, dt = dF_T(t)$.  (2.6)
Consequently,
$F_T(t) = \sum_{j=1}^m F_j^*(t)$,
which combined with (2.5) leads to
$r_j(t)\,dt = \frac{dF_j^*(t)}{\sum_{i=1}^m F_i^*(t)}$.  (2.7)
Finally, (2.4) implies
$F_j(t) = \exp\left\{-\int_t^\infty \frac{dF_j^*(s)}{\sum_{i=1}^m F_i^*(s)}\right\}$,  (2.8)
that is, the relationship of interest between the marginal distribution functions and the subdistribution functions.
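As a concrete check of the relation in (2.8), the sketch below simulates a two-component parallel system and recovers one marginal DF by plugging the empirical subdistributions into the exponential formula. This is purely illustrative and not part of the original development: the Exp(1) components, the sample size, and all function names are assumptions of this example.

```python
import bisect
import math
import random

def simulate_parallel(n, rng):
    """Simulate a two-component parallel system with Exp(1) components.
    Returns the system failure times T and the failure causes delta."""
    T, delta = [], []
    for _ in range(n):
        x1, x2 = rng.expovariate(1.0), rng.expovariate(1.0)
        if x1 >= x2:
            T.append(x1)
            delta.append(1)
        else:
            T.append(x2)
            delta.append(2)
    return T, delta

def marginal_df_from_subdistributions(T, delta, j, t):
    """Plug-in version of F_j(t) = exp{ -int_t^inf dF_j*(s) / sum_i F_i*(s) }:
    each observed cause-j failure at time s > t contributes mass 1/n to dF_j*,
    divided by the empirical system DF #{T_k <= s}/n, i.e. 1/#{T_k <= s}."""
    Ts = sorted(T)
    acc = 0.0
    for s, d in zip(T, delta):
        if d == j and s > t:
            acc += 1.0 / bisect.bisect_right(Ts, s)  # #{T_k <= s}
    return math.exp(-acc)

rng = random.Random(42)
T, delta = simulate_parallel(20000, rng)
estimate = marginal_df_from_subdistributions(T, delta, 1, 1.0)
truth = 1.0 - math.exp(-1.0)  # Exp(1) marginal DF at t = 1
```

With 20,000 simulated systems the plug-in estimate is close to the true marginal DF, which illustrates that the subdistributions indeed carry enough information to recover the marginals.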
Unfortunately, expression (2.8) does not hold in the presence of jump points. To obtain a version of (2.8) that accommodates jumps, we introduce the following definition and theorem.
Definition 1
For simplicity, consider the case of $m = 2$. The function $H_j$, $j = 1, 2$, based on the subdistributions $F_1^*$ and $F_2^*$, is
$H_j(t) = \exp\left\{-\int_{C_t} \frac{dF_j^*(s)}{F_1^*(s) + F_2^*(s)}\right\} \prod_{a \in J,\, a > t} \left(1 - \frac{\Delta F_j^*(a)}{F_1^*(a) + F_2^*(a)}\right)$,
where $\int_{C_t}$ is integration over the disjoint open intervals of $(t, \infty)$ that do not include the jump points of $F_j^*$, $\prod_{a \in J,\, a > t}$ is the product over the jump points $a > t$ of $F_j^*$, and $\Delta F_j^*(a)$ is the jump of $F_j^*$ at $a$.
The next result, although restricted to $m = 2$, extends expression (2.8) in the sense that it allows for (disjoint) jump points.
Theorem 2
The subdistribution functions $F_1^*$ and $F_2^*$ determine (uniquely) the distribution function $F_j$, for $j = 1, 2$, by $F_j(t) = H_j(t)$.
An analogous development can be performed for a series system with $m$ components, in which $T = \min_{1 \le j \le m} X_j$ and $\delta = j$ if $T = X_j$. The version of (2.8) for a series system is given by (Salinas-Torres et al., 2002):
$S_j(t) = \exp\left\{\int_0^t \frac{dS_j^*(s)}{\sum_{i=1}^m S_i^*(s)}\right\}$,  (2.9)
in which $S_j^*(t) = P(T > t, \delta = j)$ is the subreliability function of the $j$th component and $S_j = 1 - F_j$ is its reliability function.
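The series-system relation (2.9) can be checked in the same spirit: plugging empirical subreliabilities into the exponential formula gives the familiar exponential-of-Nelson–Aalen estimator of each marginal reliability. The sketch below is an illustration under assumed Exp(1) and Exp(2) components; all names are choices of this example.

```python
import bisect
import math
import random

def simulate_series(n, rng):
    """Simulate a two-component series system: X1 ~ Exp(1), X2 ~ Exp(2)."""
    T, delta = [], []
    for _ in range(n):
        x1, x2 = rng.expovariate(1.0), rng.expovariate(2.0)
        if x1 <= x2:
            T.append(x1)
            delta.append(1)
        else:
            T.append(x2)
            delta.append(2)
    return T, delta

def marginal_reliability_from_subreliabilities(T, delta, j, t):
    """Plug-in version of S_j(t) = exp{ int_0^t dS_j*(s) / sum_i S_i*(s) }:
    each cause-j failure at time s <= t contributes -1/n to dS_j*, divided by
    the empirical at-risk proportion #{T_k >= s}/n, i.e. -1/#{T_k >= s}."""
    Ts = sorted(T)
    n = len(T)
    acc = 0.0
    for s, d in zip(T, delta):
        if d == j and s <= t:
            acc -= 1.0 / (n - bisect.bisect_left(Ts, s))  # #{T_k >= s}
    return math.exp(acc)

rng = random.Random(7)
T, delta = simulate_series(20000, rng)
estimate = marginal_reliability_from_subreliabilities(T, delta, 1, 0.5)
truth = math.exp(-0.5)  # Exp(1) reliability at t = 0.5
```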
Unfortunately, expression (2.9) does not hold in the presence of jump points. To obtain a version of (2.9) that accommodates jumps, we introduce the following definition and theorem.
Definition 2
For simplicity, consider the case of $m = 2$. The function $G_j$, $j = 1, 2$, based on the subreliability functions $S_1^*$ and $S_2^*$, is
$G_j(t) = \exp\left\{\int_{C_t} \frac{dS_j^*(s)}{S_1^*(s) + S_2^*(s)}\right\} \prod_{a \in J,\, a \le t} \left(1 + \frac{\Delta S_j^*(a)}{S_1^*(a^-) + S_2^*(a^-)}\right)$,
where $\int_{C_t}$ is integration over the disjoint open intervals of $(0, t]$ that do not include the jump points of $S_j^*$, and $\prod_{a \in J,\, a \le t}$ is the product over the jump points $a \le t$ of $S_j^*$.
The next result, although restricted to $m = 2$, extends expression (2.9) in the sense that it allows for (disjoint) jump points.
Theorem 3
The subreliability functions $S_1^*$ and $S_2^*$ determine (uniquely) the distribution function $F_j$, for $j = 1, 2$, by $F_j(t) = 1 - G_j(t)$.
More details about the relations among the distribution functions and the subdistribution (subreliability) functions can be found in Salinas-Torres et al. (2002) and Polpo and Sinha (2011) for series systems, and in Polpo and Pereira (2009) for parallel systems.
In the next subsection, the relations among the distribution and subdistribution functions are presented for a more general class of systems: the SPS and the PSS.
2.1.2 PSS and SPS
Let $X_1$, $X_2$, and $X_3$ be the lifetimes of the three components of a PSS or an SPS, with marginal distribution functions (DF) $F_1$, $F_2$, and $F_3$, respectively. The restriction here is that the three sets of jump points of $F_1$, $F_2$, and $F_3$ must be disjoint. The indicator of the component whose failure caused the system to fail is $\delta = 1$ when $T = X_1$, $\delta = 2$ when $T = X_2$, and $\delta = 3$ when $T = X_3$. Let $F_j^*(t) = P(T \le t, \delta = j)$ be the subdistribution function of the $j$th component and $F_T$ the distribution function of the system. The following properties can be proved.
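The pair $(T, \delta)$ is computed directly from the component lifetimes. The sketch below assumes the layouts of Figures 1.3 and 1.4 (not reproduced here): component 1 in series with the parallel pair (2, 3) for the SPS, and in parallel with the series pair (2, 3) for the PSS. The function names are illustrative only.

```python
def sps_lifetime(x1, x2, x3):
    """Failure time and cause of a three-component SPS, assuming the layout
    in which component 1 is in series with the parallel pair (2, 3).
    Ties have probability zero for continuous lifetimes."""
    t = min(x1, max(x2, x3))
    return t, [x1, x2, x3].index(t) + 1  # delta = index of the failed component

def pss_lifetime(x1, x2, x3):
    """Failure time and cause of a three-component PSS, assuming the layout
    in which component 1 is in parallel with the series pair (2, 3)."""
    t = max(x1, min(x2, x3))
    return t, [x1, x2, x3].index(t) + 1
```

For example, with lifetimes $(5, 1, 2)$ the SPS fails at time 2 with $\delta = 3$ (the parallel pair dies when component 3 fails, before component 1), while the PSS fails at time 5 with $\delta = 1$.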
Property 1
The subdistribution functions (SDF) $F_1^*$, $F_2^*$, and $F_3^*$ determine the DF of the system:
$F_T(t) = F_1^*(t) + F_2^*(t) + F_3^*(t)$.  (2.10)
Property 2
For each $j = 1, 2, 3$:

$F_j^*(0) = 0$, and $F_j^*$ is nondecreasing and right-continuous;

$\lim_{t \to \infty} F_j^*(t) = P(\delta = j) \le 1$, so $F_j^*$ is not, in general, a distribution function;

$F_j^*(t) \le \min\{F_j(t), F_T(t)\}$ for all $t \ge 0$;

the jump points of $F_j^*$ are jump points of $F_j$.
Property 3
The sets of jump points of $F_T$ and $F^*$ are the same, where $F^* = F_1^* + F_2^* + F_3^*$. Because $F_1$, $F_2$, and $F_3$ have disjoint sets of jump points, so do $F_1^*$, $F_2^*$, and $F_3^*$.
Property 4
If $F_j(t) < 1$ for $t < t_j^*$ and $F_j(t) = 1$ for $t \ge t_j^*$, $j = 1, 2, 3$, then $t^* = \min\{t_1^*, \max\{t_2^*, t_3^*\}\}$ for the SPS ($t^* = \max\{t_1^*, \min\{t_2^*, t_3^*\}\}$ for the PSS) is the largest support point of the system.
The lifetime of the SPS is $T = \min\{X_1, \max\{X_2, X_3\}\}$, and the system reliability for s-independent components is
$R_T(t) = [1 - F_1(t)][1 - F_2(t)F_3(t)]$.  (2.11)
The lifetime of the PSS is $T = \max\{X_1, \min\{X_2, X_3\}\}$, and the system reliability for s-independent components is
$R_T(t) = 1 - F_1(t)\{1 - [1 - F_2(t)][1 - F_3(t)]\}$.  (2.12)
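A quick Monte Carlo check of the system reliabilities labelled (2.11) and (2.12) is sketched below, assuming $R_T(t) = R_1(t)[1 - F_2(t)F_3(t)]$ for the SPS and $R_T(t) = 1 - F_1(t)\{1 - R_2(t)R_3(t)\}$ for the PSS, with the same assumed layouts (component 1 as the lone component). The Exp(1) marginals are an arbitrary choice for illustration.

```python
import math
import random

def mc_system_reliability(system, t, n, rng):
    """Monte Carlo estimate of R_T(t) = P(T > t) for three s-independent
    Exp(1) components, under the assumed SPS/PSS layouts."""
    alive = 0
    for _ in range(n):
        x1, x2, x3 = (rng.expovariate(1.0) for _ in range(3))
        if system == "SPS":
            T = min(x1, max(x2, x3))
        else:  # PSS
            T = max(x1, min(x2, x3))
        alive += T > t
    return alive / n

def closed_form_reliability(system, t):
    """Assumed closed forms: R_1 (1 - F_2 F_3) for the SPS and
    1 - F_1 {1 - R_2 R_3} for the PSS, with common Exp(1) marginals."""
    F = 1.0 - math.exp(-t)
    if system == "SPS":
        return (1.0 - F) * (1.0 - F * F)
    return 1.0 - F * (1.0 - (1.0 - F) ** 2)

rng = random.Random(3)
sps_mc = mc_system_reliability("SPS", 1.0, 20000, rng)
pss_mc = mc_system_reliability("PSS", 1.0, 20000, rng)
```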
Property 5
The SDF of the SPS can be expressed using the marginal DFs of the components by
$F_1^*(t) = \int_0^t [1 - F_2(s)F_3(s)]\,dF_1(s)$, $F_2^*(t) = \int_0^t [1 - F_1(s)]F_3(s)\,dF_2(s)$, $F_3^*(t) = \int_0^t [1 - F_1(s)]F_2(s)\,dF_3(s)$,  (2.13)
and the SDF of the PSS can be expressed using the marginal DFs of the components by
$F_1^*(t) = \int_0^t \{1 - [1 - F_2(s)][1 - F_3(s)]\}\,dF_1(s)$, $F_2^*(t) = \int_0^t F_1(s)[1 - F_3(s)]\,dF_2(s)$, $F_3^*(t) = \int_0^t F_1(s)[1 - F_2(s)]\,dF_3(s)$.  (2.14)
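One of these integral expressions can be validated numerically. The sketch below assumes the SPS form $F_2^*(t) = \int_0^t [1 - F_1(s)]\,F_3(s)\,dF_2(s)$ (with component 1 in the series position) and compares a midpoint-rule evaluation against a Monte Carlo estimate of $P(T \le t, \delta = 2)$; the Exp(1) marginals and all names are assumptions of this example.

```python
import math
import random

def f2_star_sps_numeric(t, steps=4000):
    """Midpoint-rule evaluation of the assumed SPS expression
    F_2*(t) = int_0^t [1 - F_1(s)] F_3(s) dF_2(s) with Exp(1) components,
    i.e. integrand e^{-s} (1 - e^{-s}) e^{-s}."""
    h = t / steps
    acc = 0.0
    for k in range(steps):
        s = (k + 0.5) * h
        acc += math.exp(-s) * (1.0 - math.exp(-s)) * math.exp(-s) * h
    return acc

def f2_star_sps_mc(t, n, rng):
    """Monte Carlo estimate of P(T <= t, delta = 2) for the same SPS."""
    hits = 0
    for _ in range(n):
        x1, x2, x3 = (rng.expovariate(1.0) for _ in range(3))
        T = min(x1, max(x2, x3))
        hits += (T <= t) and (T == x2)
    return hits / n

rng = random.Random(11)
numeric = f2_star_sps_numeric(1.0)
monte_carlo = f2_star_sps_mc(1.0, 20000, rng)
```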
Our interest is in obtaining the inverses of (2.13) and (2.14); that is, in expressing the DFs $F_1$, $F_2$, and $F_3$ as functions of the SDFs ($F_1^*$, $F_2^*$, $F_3^*$). These inverses are presented with the following definitions and theorems.
Definition 3
The functions $H_1^S$ and $H_1^P$, based on the subdistributions $F_1^*$, $F_2^*$, and $F_3^*$, are
$H_1^S(t) = 1 - \exp\left\{\int_{C_t} \frac{dS_1^*(s)}{S^*(s)}\right\} \prod_{a \in J,\, a \le t} \left(1 + \frac{\Delta S_1^*(a)}{S^*(a^-)}\right)$ and $H_1^P(t) = \exp\left\{-\int_{D_t} \frac{dF_1^*(s)}{F^*(s)}\right\} \prod_{a \in J,\, a > t} \left(1 - \frac{\Delta F_1^*(a)}{F^*(a)}\right)$,
where $S_j^*(t) = P(T > t, \delta = j)$, $S^* = S_1^* + S_2^* + S_3^*$, $F^* = F_1^* + F_2^* + F_3^*$, $\int_{C_t}$ ($\int_{D_t}$) is integration over the disjoint open intervals of $(0, t]$ (of $(t, \infty)$) that do not include the jump points, and the products are over the jump points of $F_1^*$.
The functions $H_1^S$ (for a series system) and $H_1^P$ (for a parallel system) are the three-component versions of those presented in Polpo and Sinha (2011) and Polpo and Pereira (2009), respectively. First, Theorem 4 states the relation between $F_1$ and $F_1^*$, $F_2^*$, and $F_3^*$.
Theorem 4
The SDF $F_1^*$, $F_2^*$, and $F_3^*$ determine (uniquely) the DF $F_1$ of an SPS by $F_1(t) = H_1^S(t)$, and the DF $F_1$ of a PSS by $F_1(t) = H_1^P(t)$.
The next definition gives the functions $H_2^S$ (for the SPS) and $H_2^P$ (for the PSS).
Definition 4
The functions $H_2^S$ and $H_2^P$, based on the subdistributions $F_1^*$, $F_2^*$, and $F_3^*$, are
Theorem 5
The SDF $F_1^*$, $F_2^*$, and $F_3^*$ determine (uniquely) the DF $F_2$ of an SPS by $F_2(t) = H_2^S(t)$, and the DF $F_2$ of a PSS by $F_2(t) = H_2^P(t)$.
Note that Theorem 5 can easily be rewritten to obtain the analogous relation between the DF $F_3$ and the SDF.
Theorem 5 provides an important relation between the SDF and the DF for both the SPS and the PSS. Using this result, in the next section we develop the nonparametric Bayesian estimator for the DFs of the system's components.
2.2 Bayesian Analysis
This section describes a Bayesian reliability approach to the SPS and PSS. We derive a nonparametric Bayesian estimator of the distribution functions using the multivariate Dirichlet process (Salinas-Torres et al., 2002). From Property 1, the subdistribution functions are related to the system distribution function by a sum. Considering that $F_1^*(t) + F_2^*(t) + F_3^*(t) + R_T(t) = 1$, we have the restriction that these four quantities sum to one, so that the set of possible values of $(F_1^*(t), F_2^*(t), F_3^*(t), R_T(t))$ is the four-dimensional simplex, or the three-dimensional simplex in the nonsingular form. In this case, for a fixed $t$, a natural prior choice is the Dirichlet distribution, and letting $t$ vary over $[0, \infty)$ yields the multivariate Dirichlet process. In this section we obtain a nonparametric estimator for the distribution functions of the components of an SPS or a PSS; using the Dirichlet process, we have a complete distribution for the set $(F_1^*, F_2^*, F_3^*, R_T)$. In this case, our parameters are the functions that we want to estimate, giving us a nonparametric framework.
Consider a sample of size $n$ with observed data $(T_i, \delta_i)$, $i = 1, \dots, n$, in which $T_i = \min\{X_{1i}, \max\{X_{2i}, X_{3i}\}\}$ for the SPS and $T_i = \max\{X_{1i}, \min\{X_{2i}, X_{3i}\}\}$ for the PSS. Besides, $\delta_i = j$ if $T_i = X_{ji}$, for $j = 1, 2, 3$ and $i = 1, \dots, n$. Equivalently, for each $t \ge 0$, the random variables
$N_j(t) = \sum_{i=1}^n \mathbb{1}_{\{T_i \le t,\, \delta_i = j\}}, \quad j = 1, 2, 3,$
are observed, in which $\mathbb{1}_A$ is the indicator function of the set $A$.
The function $\widehat{F}_j^*(t) = N_j(t)/n$ is the empirical subdistribution function of the $j$th component. If $\widehat{F}_T$ is the empirical distribution function corresponding to the observations $T_1, \dots, T_n$, then, for each $t \ge 0$,
$\widehat{F}_T(t) = \widehat{F}_1^*(t) + \widehat{F}_2^*(t) + \widehat{F}_3^*(t)$.
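The empirical subdistribution functions, and the identity that they sum to the empirical DF of the system lifetimes, take only a few lines to compute; the sketch below is illustrative and its names are not from the original text.

```python
def empirical_subdistributions(T, delta, t, m=3):
    """Empirical SDFs: F_j*-hat(t) = #{i : T_i <= t, delta_i = j} / n."""
    n = len(T)
    return [sum(1 for Ti, di in zip(T, delta) if Ti <= t and di == j) / n
            for j in range(1, m + 1)]

def empirical_df(T, t):
    """Empirical DF of the observed system failure times."""
    return sum(1 for Ti in T if Ti <= t) / len(T)

# Tiny worked example: four observed systems, failure causes 1, 2, 1, 3.
T = [1.0, 2.0, 3.0, 4.0]
delta = [1, 2, 1, 3]
sdfs = empirical_subdistributions(T, delta, 2.5)
```

At $t = 2.5$ only the first two systems have failed (causes 1 and 2), so the empirical SDFs are $(0.25, 0.25, 0)$ and they sum to the empirical DF $2/4 = 0.5$.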
For each $t \ge 0$, let $(n_1, n_2, n_3, n_4)$ be the realization of $(N_1(t), N_2(t), N_3(t), N_4(t))$, in which
$N_4(t) = n - N_1(t) - N_2(t) - N_3(t)$.
In this context, for each $t$, the likelihood function corresponds to the likelihood of a multinomial model with cell probabilities $p_j = F_j^*(t)$, for $j = 1, 2, 3$, and $p_4 = R_T(t)$, that is,
$L(p_1, p_2, p_3, p_4) \propto p_1^{n_1}\, p_2^{n_2}\, p_3^{n_3}\, p_4^{n_4}$.  (2.15)
The prior distribution for $(F_1^*, F_2^*, F_3^*, R_T)$ is constructed from the characterization of the multivariate Dirichlet process defined in Salinas-Torres et al. (1997), which may be given in the following simplified version.
Definition 5
Let $\mathcal{X}$ be a sample space, $\alpha_1, \dots, \alpha_k$ be finite positive measures defined over $\mathcal{X}$, and $(Z_1, \dots, Z_k)$ be a random vector having a Dirichlet distribution with parameters $(\alpha_1(\mathcal{X}), \dots, \alpha_k(\mathcal{X}))$. Consider $k$ Dirichlet processes $P_j \sim \mathcal{D}(\alpha_j)$, with $j = 1, \dots, k$. All these processes and $(Z_1, \dots, Z_k)$ are mutually independent random quantities. Define $Q_j = Z_j P_j$. Then $(Q_1, \dots, Q_k)$ is a multivariate Dirichlet process with parameter measures $\alpha_1, \dots, \alpha_k$.
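Definition 5 suggests a direct sampling scheme: draw the total-mass weights from a Dirichlet distribution, draw each Dirichlet process independently, and rescale. The sketch below uses a truncated stick-breaking (Sethuraman-type) representation of each Dirichlet process; the truncation level, base measures, and function names are choices of this illustration, not part of the original construction.

```python
import random

def dirichlet(params, rng):
    """Draw (Z_1, ..., Z_k) ~ Dirichlet(params) via normalized gammas."""
    g = [rng.gammavariate(a, 1.0) for a in params]
    s = sum(g)
    return [x / s for x in g]

def stick_breaking_dp(total_mass, base_sampler, rng, k=300):
    """Truncated stick-breaking draw from a Dirichlet process with
    concentration `total_mass` and base measure given by `base_sampler`."""
    weights, atoms, remaining = [], [], 1.0
    for _ in range(k):
        v = rng.betavariate(1.0, total_mass)
        weights.append(remaining * v)
        atoms.append(base_sampler(rng))
        remaining *= 1.0 - v
    return weights, atoms

def multivariate_dp(total_masses, base_samplers, rng):
    """Q_j = Z_j P_j, with (Z_1,...,Z_k) ~ Dirichlet(alpha_1(X),...,alpha_k(X))
    and P_1,...,P_k independent Dirichlet processes, as in Definition 5."""
    Z = dirichlet(total_masses, rng)
    draws = []
    for z, a, sampler in zip(Z, total_masses, base_samplers):
        w, atoms = stick_breaking_dp(a, sampler, rng)
        draws.append(([z * wi for wi in w], atoms))
    return draws

rng = random.Random(1)
samplers = [lambda r: r.expovariate(1.0)] * 4  # illustrative base measures
draws = multivariate_dp([0.5, 1.0, 1.0, 1.5], samplers, rng)
total_mass = sum(sum(w) for w, _ in draws)  # should be ~1 after rescaling
```

The key feature reproduced here is that the $k$ rescaled random measures $Q_j$ have total masses $Z_j$ that sum to one, exactly the simplex restriction needed for $(F_1^*, F_2^*, F_3^*, R_T)$.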
In the context of the SPS and PSS, consider $k = 4$, $\mathcal{X} = \mathbb{R}^+$, and finite positive measures $\alpha_j$, for $j = 1, \dots, 4$. Then, the vector of the components' subdistribution functions and the system reliability is $(F_1^*, F_2^*, F_3^*, R_T)$, and the prior distribution is given by
$(F_1^*, F_2^*, F_3^*, R_T) \sim \mathcal{D}(\alpha_1, \alpha_2, \alpha_3, \alpha_4)$.  (2.16)
Combining the prior distribution in (2.16) and the likelihood function in (2.15), the posterior distribution of $(F_1^*(t), F_2^*(t), F_3^*(t), R_T(t))$ is, for each $t \ge 0$, a Dirichlet distribution with parameters $(\alpha_1((0, t]) + n_1,\ \alpha_2((0, t]) + n_2,\ \alpha_3((0, t]) + n_3,\ \alpha_4((t, \infty)) + n_4)$.
Thus, the posterior means of $F_j^*(t)$ and $R_T(t)$ are given by
$\widetilde{F}_j^*(t) = \frac{\alpha_j((0, t]) + N_j(t)}{\alpha(\mathbb{R}^+) + n}, \quad j = 1, 2, 3$,  (2.17)
where $\alpha = \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4$, and
$\widetilde{R}_T(t) = \frac{\alpha_4((t, \infty)) + N_4(t)}{\alpha(\mathbb{R}^+) + n}$.  (2.18)
These Bayesian estimators are strongly s-consistent. For instance, using the Glivenko–Cantelli theorem (Billingsley, 1985), it can be shown that $\widetilde{F}_j^*$ converges to $F_j^*$ uniformly with probability 1.
If $\alpha(\mathbb{R}^+) \to 0$, the Bayesian estimator of $F_j^*$ reduces to the empirical subdistribution function,
$\widetilde{F}_j^*(t) = N_j(t)/n$.  (2.19)
Let the distinct order statistics of $T_1, \dots, T_n$ be $t_{(1)} < t_{(2)} < \cdots < t_{(k)}$. Set $t_{(0)} = 0$ and $t_{(k+1)} = \infty$. Define
(2.20) 
(2.21) 
(2.22) 
(2.23) 
(2.24) 
(2.25) 
(2.26) 
and
(2.27) 
The main result of this study is given in the following theorem.
Theorem 6
Suppose that $\alpha_j$ is continuous on $\mathbb{R}^+$ for each $j = 1, 2, 3$, and that $F_1$, $F_2$, and $F_3$ have no common discontinuities. Then, for $t \ge 0$ and the SPS, we have that
and, for the PSS,
$\widetilde{F}_1$, $\widetilde{F}_2$, and $\widetilde{F}_3$ are the nonparametric estimators of $F_1$, $F_2$, and $F_3$, respectively, based on posterior means.
As in Theorem 5, it is straightforward to express the nonparametric estimator of $F_3$. In the next section, we extend the estimators to a more general class of systems.
2.3 Bayesian estimator for the general case
The extension of the nonparametric Bayesian estimator for the SPS and PSS given in Section 2.2 is based on rewriting the system representation, in a properly simplified version, from the general case to the three-component one, whose solution is given in Theorem 6. Considering the SPS and the PSS presented in Fig. 2.1, we specify in the following how to rewrite the system representation and estimate their components' reliabilities.
We show how to estimate the reliability of one component of the SPS and one of the PSS, because the reliability estimation of the other components is straightforward once these two are given. The idea of the extension is to represent the systems in a simplified version with three components (Figures 1.3 and 1.4). In this case, to estimate the reliability of the component of interest, we use the SPS solution, grouping the remaining components into two blocks (Figure 2.2); and, for the PSS, we use the PSS solution with an analogous grouping (Figure 2.2). It must be noted that other, more complex systems can also be considered; the task is only to simplify the representation of the system to one of either the PSS or SPS given in Figures 1.3 and 1.4.
Furthermore, both classes (SPS and PSS) are important in order to have a more general solution: because of the restriction that two different components cannot have the same failure time, different representations give more options for the reliability estimation problem. Considering the PSS given in Figure 2.1, we can write its SPS representation as presented in Figure 2.3. The components' reliabilities of the original PSS (Figure 2.1) can be estimated using the PSS result of Theorem 6, which has a simple solution. However, as the SPS representation (Figure 2.3) has some repeated components, the SPS result of Theorem 6 is not applicable. Thus, the solutions for both the SPS and the PSS are important and can be used in different situations.
2.4 Simulated datasets
This section presents two examples to demonstrate the estimation steps and to show the quality of the Bayesian nonparametric estimator. The estimation steps for the PSS are very similar to those for the SPS and, for the sake of brevity, are omitted. The estimation steps for the SPS are as follows.

Defining priors: The prior measures ($\alpha_1, \alpha_2, \alpha_3$) are prior guesses of the SDFs ($F_1^*, F_2^*, F_3^*$), but it is not simple to elicit these measures directly. It is easier to elicit priors for the DFs ($F_1, F_2, F_3$) and use (2.13) for the SPS to evaluate the prior measures (for the PSS we can use (2.14)). We chose the exponential distribution with mean 1 as the prior guess for each of the three components' DFs. By evaluating the prior measures using (2.13), we have $\alpha_1((0, t]) = 1 - e^{-2t} - \frac{1}{3}(1 - e^{-3t})$ and $\alpha_2((0, t]) = \alpha_3((0, t]) = \frac{1}{2}(1 - e^{-2t}) - \frac{1}{3}(1 - e^{-3t})$. Note that this prior is not very informative because the measure of the whole parameter space is only one ($\alpha_1(\mathbb{R}^+) + \alpha_2(\mathbb{R}^+) + \alpha_3(\mathbb{R}^+) = 1$). Also, we have that $\alpha_1(\mathbb{R}^+) = 2/3$ and $\alpha_2(\mathbb{R}^+) = \alpha_3(\mathbb{R}^+) = 1/6$.
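The prior-mass computation in the step above can be checked numerically. The sketch below assumes Exp(1) prior guesses and the SPS subdistribution forms $d\alpha_1 = (1 - F_2 F_3)\,dF_1$, $d\alpha_2 = (1 - F_1)F_3\,dF_2$, $d\alpha_3 = (1 - F_1)F_2\,dF_3$ (component 1 in the series position); under these assumptions the total masses are $2/3$, $1/6$, and $1/6$, summing to one.

```python
import math

def sps_prior_masses(upper=30.0, steps=100000):
    """Total prior masses alpha_j(R+) when the prior guess for each DF is
    Exp(1), evaluated by the midpoint rule over (0, upper]. Components 2
    and 3 are exchangeable here, so their masses coincide."""
    h = upper / steps
    a1 = a2 = 0.0
    for k in range(steps):
        s = (k + 0.5) * h
        F = 1.0 - math.exp(-s)  # Exp(1) DF
        f = math.exp(-s)        # Exp(1) density
        a1 += (1.0 - F * F) * f * h      # (1 - F_2 F_3) dF_1
        a2 += (1.0 - F) * F * f * h      # (1 - F_1) F_3 dF_2
    return a1, a2, a2

a1, a2, a3 = sps_prior_masses()
```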
Obtaining posteriors: The posterior processes for the SDFs are again multivariate Dirichlet processes; and from (2.17), we have