Sequential Selection for Accelerated Life Testing via Approximate Bayesian Inference

Ye Chen, et al. · 01/16/2020

Accelerated life testing (ALT) is typically used to assess the reliability of a material's lifetime under desired stress levels. Recent advances in material engineering have made a variety of material alternatives readily available. To identify the most reliable material setting with an efficient experimental design, a sequential test planning strategy is preferred. To guarantee a tractable statistical mechanism for information collection and updating, we develop explicit model parameter update formulas via approximate Bayesian inference. Theoretical results show that our explicit update formulas give consistent parameter estimates. A simulation study and a case study show that the proposed sequential selection approach can significantly improve the probability of identifying the material alternative with the best reliability performance over other design approaches.


1 Introduction

1.1 Motivation

Product reliability refers to a product's ability to perform its intended function under specified operating conditions. However, it might take months or years to observe a product failure under the desired operating conditions. Accelerated life testing (ALT) is used to collect reliability information in a timely manner under accelerated operating conditions in a lab environment. The collected reliability information can then be used to predict the lifetime under normal operating conditions in the field environment. Typically, ALT tests a limited number of experimental units due to the availability of experimental resources. The classical problem of experimental design for ALT is to allocate the stress levels representing accelerated operating conditions to each test unit.

Recent advances in material engineering have made a variety of material settings readily available for lab testing. Among those different settings, the proportions of different elements in the material and the mechanical procedures used to process it can greatly influence reliability performance. Thus, the selection of the material setting is often critical to product reliability. In this paper, a new task for ALT is to select the material setting with the best reliability performance. To fulfill this aim, the problem of experimental design for ALT is to determine the stress levels, as well as the material setting, of each test unit. As demonstrated in Lee et al. (2018), sequential design is often preferable to one-shot designs (i.e., allocating design points for all test units at the beginning stage of the experiments) in terms of improving the efficiency of test planning. The reasons are as follows. First, testing labs are typically equipped with only a limited number of testing machines (e.g., one or two). Therefore, it is physically impossible to conduct all experiments simultaneously. Second, an efficient one-shot design relies on prior estimates of the model parameters, and an accurate prior of the model parameters is often difficult to obtain before conducting the ALT. In particular, this paper focuses on selecting the optimal material setting, and the advantage of sequential test planning is to improve the efficiency of optimal decision-making.

To the best of our knowledge, there is no existing work discussing sequential design for optimal material selection under the framework of ALT. We propose a sequential selection approach to allocate experimental design settings to the test units. In each step of this sequential procedure, the experimental design setting for the new test unit is selected to maximize the expected gain in optimizing the reliability performance under a Bayesian log-normal model. For the computational convenience of sequential selection, we develop explicit model parameter update formulas via approximate Bayesian inference. Theoretical results show that our explicit update formulas give consistent parameter estimates. In the next subsection, we point out the connections of our work to the existing literature.

1.2 Related Literature

Our paper is closely related to the literature on experimental design for ALTs, as well as the literature on sequential experimental design and learning in simulation optimization. We review state-of-the-art approaches and recent advances from both communities and point out their connections to our paper.

The typical problem in designs for ALT is to allocate accelerated stress levels to experimental test units. The ASTM standard (Standard 2010) suggests balanced and equally spaced designs for ALT. Given a lower bound and an upper bound of a stress factor, equally spaced design points are chosen, and each design point is applied to an equal number of experimental units. This standard design is developed to reduce the variance of parameter estimates or predictions. To achieve optimal efficiency in parameter estimation or prediction, optimum test planning strategies have been developed under different model settings; see, for example, Meeker and Hahn (1977), Meeker and Escobar (2014), Pan and Yang (2014), and King et al. (2016). Those optimal design approaches work well if the substituted parameter guesses in the model are accurate. This requirement is often impractical at the early stage of experimentation. Recently, Lee et al. (2018) developed a sequential Bayesian design approach for ALT to mitigate this drawback and improve the efficiency of test planning. However, as noted earlier, most existing experimental design approaches for ALT are developed to assess the reliability performance of a given product or material. In this paper, we focus on selecting the optimal material setting with the best reliability performance. To the best of our knowledge, the experimental design issue for this particular problem has not been discussed in the literature.

Selecting the optimal design among different alternatives is well known as the ranking and selection (R&S) problem in the simulation community, which dates back to Bechhofer (1954). In such problems, the experiment is usually subject to a fixed budget (for example, time or materials), and the decision-maker wants to maximize the chance of correctly identifying the optimal design. See Hong and Nelson (2009) and Chau et al. (2014) for more detailed descriptions. For the R&S problem, we say a "correct selection" occurs if the selected alternative is truly the best design after the simulation budget is exhausted. The optimal budget allocation with respect to maximizing the probability of correct selection is studied rigorously in Glynn and Juneja (2004). However, this optimal budget allocation requires certain knowledge of the designs and thus cannot be applied directly in practice; for more details, see the discussion in Chen and Ryzhov (2019b). Therefore, modern researchers prefer to allocate the budget in a sequential manner, which is more practical and computationally tractable. In such sequential allocation algorithms, the decision-maker first spends part of the budget, observes the results, and then determines how to allocate the remaining budget accordingly. Many sequential allocation algorithms have been proposed, including expected improvement (or EI; see Jones et al. 1998), optimal computing budget allocation (or OCBA; see Chen et al. 2000), the indifference-zone method (Kim and Nelson 2001), and top-two methods (Russo 2017). The EI-type methods also include Chick et al. (2010), Powell and Ryzhov (2012), Qin et al. (2017), and L. Salemi et al. (2019). Other approaches include the brute-force reverse-engineering method (Peng and Fu 2017). Although various sequential allocation algorithms have been proposed, no previous work applies them to material selection in ALT, where we usually encounter censored observations from the experiments, as discussed later in Section 2. To overcome the inconvenience brought by this incomplete information, our work builds an approximate Bayesian model to learn the reliability performance of the materials, which allows us to apply sequential allocation algorithms more efficiently in ALT.

1.3 Overview

The rest of the article is organized as follows. Section 2 provides a detailed description of our problem. Section 3 investigates the approximate Bayesian inference approach for the log-normal model and its corresponding theoretical properties. Section 4 develops the design criterion for sequential selection. Section 5 compares the proposed approach with other test planning approaches using numerical examples. Section 6 concludes the paper with a discussion and future directions.

2 Problem Description

ALT mostly considers different levels of the stress factors in testing and validating the reliability performance of a given product or material, which is often characterized by a lifetime model. In our problem, both the stress factors and the material features of the product are included in the test planning stage. The stress factors are denoted by a $p$-dimensional vector $\mathbf{s}$, whereas the material features are denoted by a $q$-dimensional vector $\mathbf{w}$. The stress factors are usually numerical variables specifying the accelerated stress levels, such as temperature and humidity. The entries of the material feature vector $\mathbf{w}$ can be continuous variables indicating key metrics of material characteristics, or categorical variables referring to different material types. For example, the material features may include the composition percentages of different elements in an alloy, as well as different types of metallurgical procedures (e.g., annealing, tempering, electroplating, etc.) used to process materials.

We assume that the mean reliability performance can be expressed by a function $\mu(\mathbf{s}, \mathbf{w}; \boldsymbol{\beta})$ of the stress factors and material features with an unknown parameter vector $\boldsymbol{\beta}$. A higher value of $\mu(\mathbf{s}, \mathbf{w}; \boldsymbol{\beta})$ indicates that the corresponding material setting $\mathbf{w}$ leads to a longer material lifetime on average under the stress level combination $\mathbf{s}$. Therefore, the goal of our problem is to find the material alternative that gives the best mean reliability performance under the target stress levels $\mathbf{s}_0$:

$$\mathbf{w}^* = \arg\max_{\mathbf{w} \in \mathcal{W}} \mu(\mathbf{s}_0, \mathbf{w}; \boldsymbol{\beta}), \qquad (1)$$

where $\mathcal{W}$ is the set of candidate material settings in our experiments.

Since the testing process (e.g., the material wear process in Section 5.2) can be extremely complex, it is almost impossible to develop an accurate mathematical model for the mean material lifetime under multiple stress factors and material features. To address this problem, a log-normal model is often used as a surrogate for the material lifetime (Meeker and Escobar 2014):

$$\log(T) = \mathbf{x}(\mathbf{s}, \mathbf{w})^\top \boldsymbol{\beta} + \epsilon, \qquad (2)$$

where $T$ is a random variable representing the lifetime of a test unit with experimental setting $(\mathbf{s}, \mathbf{w})$, $\epsilon$ is the error term following a normal distribution with mean zero and variance $\sigma^2$, and $\mathbf{x}(\mathbf{s}, \mathbf{w})$ collects the intercept, the stress factors $\mathbf{s}$, the material features $\mathbf{w}$, and the interactions between material features and stress factors. In particular,

$$\mathbf{x}(\mathbf{s}, \mathbf{w}) = \left(1, \mathbf{s}^\top, \mathbf{w}^\top, (\mathbf{w} \otimes \mathbf{s})^\top\right)^\top, \qquad (3)$$

where $\mathbf{w} \otimes \mathbf{s}$ denotes the Kronecker product of $\mathbf{w}$ and $\mathbf{s}$, which is a $pq$-dimensional vector representing the interactions between material features and stress factors. To simplify the notation, we reduce $\mathbf{x}(\mathbf{s}, \mathbf{w})$ to $\mathbf{x}$ when there is no confusion. The linear coefficient $\boldsymbol{\beta}$ is a $d = (1 + p + q + pq)$-dimensional vector. After collecting lifetimes $t_i$ from test units $i = 1, \ldots, n$, the model parameters can be estimated via the maximum likelihood method.
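To make the construction in (3) concrete, the following Python sketch builds the regressor for a given stress level and material setting; the function name and the example dimensions are illustrative, not part of the original study.

```python
import numpy as np

def design_vector(s, w):
    """Build x(s, w) = (1, s, w, w kron s) as in (3).

    s : stress-factor levels, length p
    w : material-feature values, length q
    Returns a vector of length 1 + p + q + p*q.
    """
    s = np.asarray(s, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.concatenate(([1.0], s, w, np.kron(w, s)))

# Example: three stress factors and one dummy-coded material feature.
x = design_vector(s=[0.5, 0.8, 1.0], w=[1.0])
print(x.shape)  # (8,) = 1 + 3 + 1 + 3
```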

In reliability studies, the lifetimes $t_i$ are often given as censored observations. Even under accelerated stress levels, the lifetime of a test unit can be as long as weeks or months. Thus, in the experimental stage, the tests are terminated after a given observation time $c$, even if a failure has not been observed. In addition to $t_i$, the failure of the $i$-th test is recorded by a binary variable $\delta_i$. If $\delta_i = 1$, a failure is observed and $t_i$ is the lifetime of the $i$-th test unit. If $\delta_i = 0$, we only know that the lifetime is greater than $c$. Under the assumption of the log-normal model in (2), the likelihood function of $\boldsymbol{\beta}$ and $\sigma$ is

$$L(\boldsymbol{\beta}, \sigma) = \prod_{i=1}^{n} \left[\frac{1}{\sigma t_i}\,\phi\!\left(\frac{\log(t_i) - \mathbf{x}_i^\top \boldsymbol{\beta}}{\sigma}\right)\right]^{\delta_i} \left[1 - \Phi\!\left(\frac{\log(c) - \mathbf{x}_i^\top \boldsymbol{\beta}}{\sigma}\right)\right]^{1-\delta_i}, \qquad (4)$$

where $\phi(\cdot)$ and $\Phi(\cdot)$ are the probability density function and the cumulative distribution function of the standard normal random variable, respectively.
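As an illustration of how the likelihood in (4) can be used, the sketch below fits the censored log-normal model by numerical maximum likelihood; the simulated data, parameter values, and optimizer choice are assumptions made only for this example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_likelihood(params, X, t, delta, c):
    """Negative log-likelihood of the censored log-normal model in (4).

    X      : (n, d) matrix of regressors x_i
    t      : observed lifetimes (t_i = c for censored units)
    delta  : 1 if a failure was observed, 0 if censored at time c
    c      : common observation (censoring) time
    params : (beta_1, ..., beta_d, log_sigma)
    """
    beta, sigma = params[:-1], np.exp(params[-1])
    mu = X @ beta
    z_obs = (np.log(t) - mu) / sigma
    z_cen = (np.log(c) - mu) / sigma
    # density contribution of observed failures, survival contribution of censored units
    ll = delta * (norm.logpdf(z_obs) - np.log(sigma) - np.log(t)) \
         + (1 - delta) * norm.logsf(z_cen)
    return -np.sum(ll)

# Example fit on simulated data (all values below are assumed for illustration).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0.5, 1.0, size=(50, 2))])
beta_true, sigma_true, c = np.array([1.0, -0.5, -0.3]), 0.2, 2.0
t = np.minimum(np.exp(X @ beta_true + sigma_true * rng.standard_normal(50)), c)
delta = (t < c).astype(float)
fit = minimize(neg_log_likelihood, x0=np.zeros(4), args=(X, t, delta, c))
print(fit.x[:-1], np.exp(fit.x[-1]))
```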

Under this linear model setting, it is critically important to develop an efficient experimental design approach to solve the optimization problem in (1). Since our goal is to find the optimal material setting more efficiently, we develop a sequential optimal learning framework for ALT. Without loss of generality, we assume that the test lab is equipped with only one set of test machines. Thus, in each step of the sequential procedure, we select one design point and allocate it to one test unit. The collected reliability information is used to update our belief regarding the mean lifetime, and our belief regarding the mean reliability performance of the different material settings is used to determine the design for the next test unit. There are two main tasks under this development: 1) how to update the beliefs regarding the mean reliability performance of different material settings under the linear model setting with censored observations; and 2) how to develop an experimental design criterion to select new design points at each step. In this paper, we first develop the updating formulas for our belief about the mean lifetime in Section 3, and then develop a policy to allocate experimental settings based on the updated belief in Section 4.

3 Approximate Bayesian Inference for Log-normal Model with Incomplete Observations

In this section, we develop Bayesian update formulas for the log-normal model in (2). Under the linear model setting in (2), we assume that the prior of the linear coefficients $\boldsymbol{\beta}$ is a multivariate normal distribution with mean $\boldsymbol{\theta}_0$ and variance matrix $\boldsymbol{\Sigma}_0$. If the lifetime is not censored, the conjugacy property of the multivariate normal distribution also leads to a multivariate normal posterior distribution of $\boldsymbol{\beta}$. For $n \geq 0$, we denote by $\boldsymbol{\theta}_n$ and $\boldsymbol{\Sigma}_n$ the mean vector and variance matrix of the posterior distribution of $\boldsymbol{\beta}$ after including observations from the first $n$ test units. It is straightforward to derive that

$$\boldsymbol{\theta}_{n+1} = \boldsymbol{\theta}_n + \frac{y_{n+1} - \mathbf{x}_{n+1}^\top \boldsymbol{\theta}_n}{\sigma^2 + \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}}\, \boldsymbol{\Sigma}_n \mathbf{x}_{n+1} \qquad (5)$$

and

$$\boldsymbol{\Sigma}_{n+1} = \boldsymbol{\Sigma}_n - \frac{\boldsymbol{\Sigma}_n \mathbf{x}_{n+1} \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n}{\sigma^2 + \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}}, \qquad (6)$$

where $\mathbf{x}_{n+1}$ is the design point of the $(n+1)$-st test unit, $y_{n+1} = \log(t_{n+1})$ is the logarithm of the lifetime observation, and $\sigma^2$ is the variance of the error term in (2). In our development, we assume that $\sigma^2$ is known for notational convenience.
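A minimal sketch of the rank-one recursion described by (5) and (6) is given below, assuming a known error variance; the function and variable names are illustrative.

```python
import numpy as np

def update_noncensored(theta, Sigma, x, y, sigma2):
    """Conjugate update of the posterior mean and covariance of beta
    after observing a non-censored log-lifetime y at design vector x,
    following the rank-one form of (5)-(6)."""
    Sx = Sigma @ x
    v = sigma2 + x @ Sx                  # predictive variance of y
    theta_new = theta + (y - x @ theta) / v * Sx
    Sigma_new = Sigma - np.outer(Sx, Sx) / v
    return theta_new, Sigma_new

# Example with a 3-dimensional coefficient vector.
theta, Sigma = np.zeros(3), np.eye(3)
theta, Sigma = update_noncensored(theta, Sigma, np.array([1.0, 0.5, 0.8]),
                                  y=0.3, sigma2=0.04)
```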

Notice that the conjugacy property gives closed-form parameter update formulas, which further enable convenient development of sequential experimental policies; see, for example, Frazier et al. (2008) and Frazier et al. (2009). However, the conjugacy property does not hold if we have censored responses. An alternative method for constructing closed-form parameter update formulas in this situation is moment-matching based approximate Bayesian inference. This method has been used to develop Bayesian ranking and selection approaches under a multivariate normal setting in Zhang and Song (2017), and its statistical consistency has recently been investigated by Chen and Ryzhov (2019a). For our problem, the idea of approximate Bayesian inference is to approximate the posterior distribution of $\boldsymbol{\beta}$ by a multivariate normal distribution with mean $\boldsymbol{\theta}_{n+1}$ and variance $\boldsymbol{\Sigma}_{n+1}$, which are the first and second moments of the posterior distribution of $\boldsymbol{\beta}$ given that the observation is censored, i.e., $T_{n+1} > c$. The approximate Bayesian update formulas are given in Proposition 1.

Proposition 1.

Assume that, at the $(n+1)$-st step, we observe the design point $\mathbf{x}_{n+1}$ and a censored response $\delta_{n+1} = 0$. Under the log-normal model and the multivariate normal prior $N(\boldsymbol{\theta}_n, \boldsymbol{\Sigma}_n)$, approximate Bayesian inference gives the closed-form update formulas:

(7)

and

(8)

where

(9)

and $\boldsymbol{\theta}_{n+1}$ and $\boldsymbol{\Sigma}_{n+1}$ are the first and second moments of the posterior distribution of $\boldsymbol{\beta}$ given that $T_{n+1} > c$.

If a material failure is observed, (6) indicates that the variance reduction is $\boldsymbol{\Sigma}_n \mathbf{x}_{n+1} \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n / (\sigma^2 + \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n \mathbf{x}_{n+1})$. If the response is censored, the amount of variance reduction is further reduced, as shown in (8). However, in the sequential update, the effect of this additional term on the variance reduction is usually negligible, because the variance $\boldsymbol{\Sigma}_n$ is small when $n$ is large enough. Our numerical results often show that the variance update formulas in (6) and (8) lead to approximately equal variances. Therefore, in terms of the variance update, we adopt (6) for both complete and censored responses. As a result, the update formulation at step $n+1$ can be summarized by

(10)

with the corresponding quantity given in (9).
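The following sketch shows one plausible implementation of the moment-matching update for a censored unit, under the convention adopted above: the covariance is updated with the same formula (6) used for complete observations, while the censored log-lifetime is replaced by its conditional expectation under the posterior predictive distribution. The exact expressions in (7)-(9) may differ in detail.

```python
import numpy as np
from scipy.stats import norm

def update_censored(theta, Sigma, x, log_c, sigma2):
    """Approximate update when the unit survives past the observation time c.

    The censored log-lifetime is replaced by its conditional expectation
    E[log T | log T > log c] under the posterior predictive distribution
    (moment matching on the mean), and the covariance uses the same rank-one
    formula (6) as for complete observations, as adopted after (10).
    """
    Sx = Sigma @ x
    v = sigma2 + x @ Sx                      # predictive variance of log(T)
    m = x @ theta                            # predictive mean of log(T)
    alpha = (log_c - m) / np.sqrt(v)
    lam = norm.pdf(alpha) / norm.sf(alpha)   # inverse Mills ratio
    y_tilde = m + np.sqrt(v) * lam           # conditional expectation of log(T)
    theta_new = theta + (y_tilde - m) / v * Sx
    Sigma_new = Sigma - np.outer(Sx, Sx) / v
    return theta_new, Sigma_new
```

In a sequential run, the non-censored update would be applied when a failure is observed and this function otherwise.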

We now discuss the consistency property of the proposed approximate Bayesian inference under incomplete observations. In the following, we demonstrate the convergence of the sequence $\{\boldsymbol{\theta}_n\}$ based on the framework established in Chen and Ryzhov (2019a). We make the following assumptions:

Assumption 1.

The design vectors $\mathbf{x}_n$ are drawn i.i.d. from a common distribution satisfying $E\!\left[\mathbf{x}_n \mathbf{x}_n^\top\right] = \mathbf{M}$, where $\mathbf{M}$ is a positive definite symmetric matrix.

Assumption 2.

The sequence satisfies almost surely.

Theorem 1.

Suppose Assumptions 1-2 hold and the sequence $\{\mathbf{x}_n\}$ is bounded, and suppose that $\boldsymbol{\theta}_n$ and $\boldsymbol{\Sigma}_n$ are updated using (20)-(21), respectively. Then $\boldsymbol{\theta}_n \rightarrow \boldsymbol{\beta}$ almost surely.

The proof of this theorem is deferred to the Appendix. The theorem indicates that although we approximate the posterior distribution by a multivariate normal under censored observations, the approximation is asymptotically accurate, since the updated parameter sequence converges to the true model parameters.

4 Sequential Selection for Reliability Improvement

This section discusses how to select design points in a sequential manner. As mentioned earlier, we investigate a fully sequential procedure and assume that only one experimental unit is allocated in each step. Recall that our goal is to determine the material feature combination that has the best reliability performance under the target stress factor levels $\mathbf{s}_0$. At the $n$-th step of the sequential procedure, the optimal material setting based on the collected information can be expressed by

$$\hat{\mathbf{w}}_n = \arg\max_{\mathbf{w} \in \mathcal{W}} E_n\!\left[\mu(\mathbf{s}_0, \mathbf{w}; \boldsymbol{\beta})\right], \qquad (11)$$

where $E_n$ represents that the expectation is taken with respect to the prior distribution of $\boldsymbol{\beta}$ at the $n$-th step. Under the log-normal model setting in (2), the objective in (11) simplifies to $\mathbf{x}(\mathbf{s}_0, \mathbf{w})^\top \boldsymbol{\theta}_n$, with $\mathbf{x}(\mathbf{s}_0, \mathbf{w})$ given in (3). To meet the requirement of our goal in (1), the new design point in each step should be determined to maximize the improvement of the target optimization problem. The improvement of the objective in (1) from adding a new design point at the $(n+1)$-st step can be quantified by

$$\max_{\mathbf{w} \in \mathcal{W}} \mathbf{x}(\mathbf{s}_0, \mathbf{w})^\top \boldsymbol{\theta}_{n+1} - \max_{\mathbf{w} \in \mathcal{W}} \mathbf{x}(\mathbf{s}_0, \mathbf{w})^\top \boldsymbol{\theta}_n. \qquad (12)$$

Since $\boldsymbol{\theta}_{n+1}$ is a random vector that depends on the selected design point, the $(n+1)$-st design point should be chosen to maximize the expectation of this improvement given that $\mathbf{x}(\mathbf{s}, \mathbf{w})$ is the design point at the $(n+1)$-st step. Therefore, the acquisition function to select the new design point can be expressed by

$$\mathrm{EI}_n(\mathbf{s}, \mathbf{w}) = E_n\!\left[\max_{\mathbf{w}' \in \mathcal{W}} \mathbf{x}(\mathbf{s}_0, \mathbf{w}')^\top \boldsymbol{\theta}_{n+1} - \max_{\mathbf{w}' \in \mathcal{W}} \mathbf{x}(\mathbf{s}_0, \mathbf{w}')^\top \boldsymbol{\theta}_n \,\Big|\, \mathbf{x}_{n+1} = \mathbf{x}(\mathbf{s}, \mathbf{w})\right], \qquad (13)$$

where the expectation is taken with respect to the posterior predictive distribution of $y_{n+1}$ given that $\mathbf{s}$ and $\mathbf{w}$ form the $(n+1)$-st design point. This EI-type acquisition function is typically used to select design points for optimization problems in a sequential manner; see Powell and Ryzhov (2012) for examples of EI-type acquisition functions under different developments.

For our problem, (13) can be further simplified. Since $\boldsymbol{\theta}_{n+1}$ with a non-censored response is given by (5), we have that

$$\boldsymbol{\theta}_{n+1} = \boldsymbol{\theta}_n + \frac{y_{n+1} - \mathbf{x}_{n+1}^\top \boldsymbol{\theta}_n}{\sigma^2 + \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}}\, \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}.$$

Under the log-normal model and the prior distribution of $\boldsymbol{\beta}$, it is straightforward to derive that the posterior predictive distribution of $y_{n+1}$ is a normal distribution with mean $\mathbf{x}_{n+1}^\top \boldsymbol{\theta}_n$ and variance $\sigma^2 + \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}$. Therefore, we can express

$$\boldsymbol{\theta}_{n+1} = \boldsymbol{\theta}_n + \frac{\boldsymbol{\Sigma}_n \mathbf{x}_{n+1}}{\sqrt{\sigma^2 + \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}}}\, Z, \qquad (14)$$

where $Z$ is a standard normal random variable.

We denote $\tilde{\boldsymbol{\sigma}}_n(\mathbf{x}_{n+1}) = \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}\big/\sqrt{\sigma^2 + \mathbf{x}_{n+1}^\top \boldsymbol{\Sigma}_n \mathbf{x}_{n+1}}$, so that $\boldsymbol{\theta}_{n+1} = \boldsymbol{\theta}_n + \tilde{\boldsymbol{\sigma}}_n(\mathbf{x}_{n+1})\, Z$ by (14). Accordingly,

$$a_n(\mathbf{w}) = \mathbf{x}(\mathbf{s}_0, \mathbf{w})^\top \boldsymbol{\theta}_n \qquad (15)$$

and

$$b_n(\mathbf{w}; \mathbf{x}_{n+1}) = \mathbf{x}(\mathbf{s}_0, \mathbf{w})^\top \tilde{\boldsymbol{\sigma}}_n(\mathbf{x}_{n+1}). \qquad (16)$$

Plugging (15) and (16) into (13), we obtain that

$$\mathrm{EI}_n(\mathbf{s}, \mathbf{w}) = E\!\left[\max_{\mathbf{w}' \in \mathcal{W}} \left\{a_n(\mathbf{w}') + b_n(\mathbf{w}'; \mathbf{x}(\mathbf{s}, \mathbf{w}))\, Z\right\} - \max_{\mathbf{w}' \in \mathcal{W}} a_n(\mathbf{w}')\right], \qquad (17)$$

where the expectation is taken with respect to the standard normal random variable $Z$. The new design point is selected to maximize this acquisition function.

For our problem, the number of candidate material settings in $\mathcal{W}$ is often finite, say, $M$. In this situation, $\mathrm{EI}_n(\mathbf{s}, \mathbf{w})$ has a closed-form expression according to Frazier et al. (2009). Let $a_k = a_n(\mathbf{w}_k)$ and $b_k = b_n(\mathbf{w}_k; \mathbf{x}(\mathbf{s}, \mathbf{w}))$ for $k = 1, \ldots, M$. For notational convenience, we assume that $b_1 \leq b_2 \leq \cdots \leq b_M$. Following Frazier et al. (2009), we have that

$$\mathrm{EI}_n(\mathbf{s}, \mathbf{w}) = \sum_{k=1}^{M-1} (b_{k+1} - b_k)\, f\!\left(-\left|c_k\right|\right), \qquad (18)$$

where $f(z) = z\,\Phi(z) + \phi(z)$ and the $c_k$ are the breakpoints at which the maximizer of $a_k + b_k z$ over $k$ changes (after dominated alternatives are removed). To maximize $\mathrm{EI}_n(\mathbf{s}, \mathbf{w})$, we can compute its gradient with regard to $\mathbf{s}$ according to Zhang and Hwang (2019), and use gradient-based optimization approaches to find the maximum of $\mathrm{EI}_n(\mathbf{s}, \mathbf{w})$ for each given $\mathbf{w}$.
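For finite candidate sets, the closed form in (18) can be evaluated with the standard breakpoint algorithm of Frazier et al. (2009), as sketched below; the interface (plain arrays of means and induced standard deviations) is an assumption for illustration, and a small Monte Carlo check is included.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(a, b):
    """E[max_k (a_k + b_k Z)] - max_k a_k for Z ~ N(0,1), computed with the
    breakpoint algorithm of Frazier et al. (2009)."""
    order = np.argsort(np.asarray(b, float), kind="stable")
    a, b = np.asarray(a, float)[order], np.asarray(b, float)[order]
    # among alternatives with equal slope b, keep only the largest intercept a
    keep = [0]
    for k in range(1, len(a)):
        if b[k] == b[keep[-1]]:
            if a[k] > a[keep[-1]]:
                keep[-1] = k
        else:
            keep.append(k)
    a, b = a[keep], b[keep]
    # build the upper envelope of the lines a_k + b_k z (dominated lines dropped)
    A, B, C = [a[0]], [b[0]], [-np.inf]
    for k in range(1, len(a)):
        while True:
            c = (A[-1] - a[k]) / (b[k] - B[-1])  # intersection with current top line
            if c > C[-1]:
                A.append(a[k]); B.append(b[k]); C.append(c)
                break
            A.pop(); B.pop(); C.pop()
    B, C = np.array(B), np.array(C + [np.inf])
    f = lambda z: z * norm.cdf(z) + norm.pdf(z)
    return float(np.sum((B[1:] - B[:-1]) * f(-np.abs(C[1:-1]))))

# Quick Monte Carlo sanity check of the closed form.
a, b = [1.0, 1.1, 0.9], [0.3, 0.1, 0.2]
z = np.random.default_rng(0).standard_normal(200_000)
mc = (np.asarray(a) + z[:, None] * np.asarray(b)).max(axis=1).mean() - max(a)
print(expected_improvement(a, b), mc)  # the two values should nearly agree
```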

Notice that the EI-type sequential design criterion in (13) may not lead to a closed-form expression such as (18) if the posterior of the coefficients is not a multivariate normal distribution at each step. The proposed approximate Bayesian update in Section 3 guarantees that the multivariate normal posterior distribution holds. Besides enabling convenient and efficient model updates, the proposed Bayesian approximation thus also plays an important role in simplifying the computation of the sequential design selection.

5 Numerical Study

This section provides synthetic examples and a case study on accelerated wear testing to compare the numerical performance of different model updates and experimental design approaches. In terms of model updates, we compare the proposed approximate Bayesian update formulas in (10) with the exact update, i.e., refitting the log-normal model using all the data points, which does not possess tractable parameter updating formulas. These two alternative approaches are denoted by "approx" and "exact", respectively. We also consider the following experimental design approaches:

  • (Design:) Full factorial designs; see, for example, Wu and Hamada (2011).

  • (SeqD:) Sequential Bayesian D-optimal Design in Lee et al. (2018).

  • (SeqEI:) The EI-based sequential design procedure described in Section 4.

We consider all possible combinations of the two model update approaches and the three experimental design approaches. The six alternatives involved in our numerical comparison are denoted by “Design approx”, “Design exact”, “SeqD approx”, “SeqD exact”, “SeqEI approx”, and “SeqEI exact”, respectively.

Notice that the EI-type sequential design criterion in (13) may not lead to a closed-form expression such as (18) if the posterior of the coefficients is not a multivariate normal distribution. For "SeqEI approx", our model (the posterior distribution of $\boldsymbol{\beta}$) is completely represented by a multivariate normal distribution, based on the proposed approximate Bayesian update in Section 3. Thus, the proposed Bayesian approximation also plays an important role in simplifying the computation of the sequential design selection. However, under the exact model update, the implementation of this EI-type sequential design criterion is impractical, since it may require MCMC to approximate the value of (13) for each candidate and at each step. In our implementation of "SeqEI exact", we process the model update and the experimental design selection along two separate tracks: the design criterion in (18) is computed under the proposed approximate model update (the same as in "SeqEI approx"), whereas the collected data points are used to refit the exact model and determine the optimal material setting according to (11) at each step. In this way, we can evaluate the effects of the model update and the sequential design separately.

The full factorial designs are one-shot designs, which are not originally developed for sequential experimentation. To compare the full factorial design in a sequential manner, we make it adaptable to a sequential procedure. First, we generate a full factorial design with respect to the number of levels of the material feature factors and the stress factors. Since the total number of steps is usually greater than the run size of this full factorial design, we replicate the runs in the full factorial design one by one until the total run size equals the number of sequential steps (i.e., the runs in the original full factorial design may not have exactly equal numbers of replications). Finally, we randomize the order of the resulting runs and let them enter the sequential procedure one by one.
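A sketch of one plausible reading of this construction is given below: the factorial runs are cycled until the required number of steps is reached and the resulting sequence is then shuffled. The helper name and the factor levels in the example are not from the paper.

```python
import itertools
import numpy as np

def sequential_full_factorial(factor_levels, n_steps, seed=0):
    """Replicate a full factorial design run by run until n_steps runs are
    obtained (replication counts may differ by one), then randomize the order."""
    runs = list(itertools.product(*factor_levels))
    reps = -(-n_steps // len(runs))                 # ceiling division
    sequence = (runs * reps)[:n_steps]
    rng = np.random.default_rng(seed)
    return [sequence[i] for i in rng.permutation(n_steps)]

# Example: one 2-level material factor and three 2-level stress factors, 20 steps.
plan = sequential_full_factorial([[0, 1], [0.5, 1.0], [0.5, 1.0], [0.5, 1.0]],
                                 n_steps=20)
```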

The goal of our problem is to choose the material setting with the best reliability performance. In practice, we often consider a finite number of material settings. Thus, we consider discrete levels of the material factors, and use the probability of correct selection at the target stress level to evaluate the different approaches. According to (1) and (11), the probability of correct selection can be expressed by $P(\hat{\mathbf{w}}_n = \mathbf{w}^*)$, where the probability is taken with respect to $\hat{\mathbf{w}}_n$, which is a random variable due to the randomness of the collected responses. In our numerical study, the probability of correct selection is estimated empirically by

$$\widehat{\mathrm{PCS}}_n = \frac{1}{R} \sum_{r=1}^{R} \mathbb{1}\{\hat{\mathbf{w}}_n^{(r)} = \mathbf{w}^*\}, \qquad (19)$$

where $R$ is the total number of replications, $\mathbb{1}\{\cdot\}$ is an indicator function, and $\hat{\mathbf{w}}_n^{(r)}$ is the selected optimal material setting at the $n$-th step of the $r$-th replication. In the synthetic examples and the case study, we use (19) to compute the estimated probability of correct selection. In all of our numerical examples, we set the observation time $c$ in (4) to be a constant.
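The estimator in (19) amounts to averaging indicator variables over replications; a small sketch with hypothetical selection records is given below.

```python
import numpy as np

def estimated_pcs(selections, w_best):
    """Empirical probability of correct selection as in (19).

    selections : (R, N) array; selections[r, n] is the material setting chosen
                 at step n of replication r
    w_best     : index of the truly best material setting
    Returns the estimated probability of correct selection at each step.
    """
    return np.mean(np.asarray(selections) == w_best, axis=0)

# Example with hypothetical selection records from 100 replications of 30 steps.
rng = np.random.default_rng(1)
fake_selections = rng.integers(0, 2, size=(100, 30))
print(estimated_pcs(fake_selections, w_best=0)[:5])
```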

5.1 Synthetic Examples

In this study, we directly generate data from the log-normal model in (2). The stress factor contains three dimensions. For each dimension, the design points of the accelerated lab experiments take values from a fixed set of accelerated levels, whereas the targeted environmental condition is specified to be 0.1. For the material factors, we generate one factor with $M$ levels. The first level of this material factor is specified to be optimal, with the best reliability performance on average. We generate four random variables from a uniform distribution to be the linear coefficients corresponding to the intercept and each of the three stress factors; the resulting four-dimensional vector of linear coefficients is denoted by $\boldsymbol{\beta}_1$. The linear coefficients of each remaining material level are generated by adding a perturbation to $\boldsymbol{\beta}_1$, where each component of the perturbation is a uniform random variable from $-1/30$ to 0. This setting guarantees that the first level of the material factor has the best reliability performance on average, and that the average lifetime decreases as the stress factor levels increase. A total of 100 replications is used to estimate the probability of correct selection as in (19). For each replication, we generate 20 data points for each material setting to obtain the prior distributions for the linear coefficients.
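The coefficient-generation scheme described above can be sketched as follows; the range of the base uniform distribution is not fully specified in the text, so `base_range` is a placeholder assumption, and the perturbation is applied directly to the coefficients of the first level.

```python
import numpy as np

def generate_coefficients(n_levels, rng, base_range=(-1/30, 0.0)):
    """Generate linear coefficients for the synthetic study.

    The intercept and the three stress coefficients of the first (best) material
    level are drawn from a uniform distribution; its range is not stated in the
    text, so base_range is a placeholder assumption (kept negative so that the
    mean lifetime decreases in the stress levels). Each remaining level adds an
    independent Uniform(-1/30, 0) perturbation to every component of the first
    level, so level 1 retains the best mean reliability performance.
    """
    beta_best = rng.uniform(*base_range, size=4)
    betas = [beta_best if k == 0 else beta_best + rng.uniform(-1/30, 0.0, size=4)
             for k in range(n_levels)]
    return np.vstack(betas)

rng = np.random.default_rng(2020)
print(generate_coefficients(n_levels=6, rng=rng))
```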

Figure 1: The estimated probability of correct selection for different settings with two material settings.
Figure 2: The estimated probability of correct selection for different settings with six material settings.

In Figure 1, we consider a case with only two material settings, i.e., $M = 2$. We generate the responses under different signal-to-noise ratios. The signal level (i.e., the value of the coefficients) is fixed as described earlier. The value of the standard deviation $\sigma$ in (2) is set to 0.2 or 0.1, and the resulting value of "Signal/Std" is 0.15 as in the top panel of Figure 1 or 0.3 as in the bottom panel of Figure 1. The value of the constant observation time $c$ in (4) is set to 1 or 1.2 to generate different levels of response censoring. As shown in Figure 1, the censoring rate is around 15% if $c = 1.2$ (left panel), whereas the censoring rate is around 30% if $c = 1$ (right panel). Under a similar setting, we show the results of a scenario with six material settings (i.e., $M = 6$) in Figure 2.

The results in Figures 1-2 show that the "SeqEI"-based approaches give the highest probability of correct selection. Since the design criterion of "SeqEI" is developed to improve the optimization problem in (1), it outperforms "Design" and "SeqD", both of which aim at reducing the variances of the model coefficients. We also see that the "approx" approach does not perform well if the censoring rate is high (say, around 30%). This demonstrates that the efficiency of the proposed approximate model updating approach can deteriorate if there is a significantly large portion of censored observations. Overall, "SeqEI exact" gives the best performance, and the performance of "SeqEI approx" is competitive with the best when the censoring rate is low. For challenging scenarios (e.g., "Signal/Std = 0.15" or the larger number of material settings), the "SeqEI"-based approaches demonstrate obvious advantages compared to the other design approaches.

5.2 A Case Study on Accelerated Wear Tests

We consider a material wear test of copper alloys as an example to demonstrate the performance of the proposed sequential selection method. Because of their high strength and exceptional bearing properties, copper alloys are widely considered in various safety- and mission-critical industries, e.g., aircraft bearings and bushings in the aerospace industry, and drilling and mining equipment in the mining industry. This case study considers the reliability performance of Cu-Ni-Sn alloys in accelerated wear tests. The study investigates two types of material specimens, namely as-received Cu-Ni-Sn and annealed Cu-Ni-Sn specimens. Due to the annealing process, the microstructures as well as the physical/chemical properties of the annealed Cu-Ni-Sn specimens are altered compared to the as-received ones. Thus, their reliability performances may differ accordingly. The experimenter is interested in finding the material with the better reliability performance. Wear tests were carried out using a Koehler K93500 pin-on-disc tester under various environmental conditions of "Load", "Temperature", and "Humidity" (Singh et al. 2007). For each testing unit of the Cu alloy specimens, in-situ monitoring outputs of wear performance (e.g., wear depth) are measured over time by a linear variable displacement transducer. A material failure is recorded if the material weight loss is above a given threshold value. The historical data contain the information of the wear processes of 18 experimental units.

The experimental observations of all 18 experimental units are provided for our study. Unfortunately, follow-up experiments are not available to validate the proposed sequential design approach. Therefore, to implement the sequential selection procedure, we develop a pseudo simulator to model the historical data. This pseudo simulator is built on a Gaussian process model. Under this pseudo simulator, the log response is not a linear function of the material factor and the stress factors, so we are able to investigate the robustness of the proposed approach under this nonlinear setting. The goal of this case study is to choose the material option that maximizes the reliability performance. According to the evidence shown by the data and domain knowledge, we identify that the as-received Cu-Ni-Sn alloy is more reliable than the annealed Cu-Ni-Sn alloy. With this information, we are able to estimate the probability of correct selection as in (19). In this study, we set the observation times to 200, 300, and 500 to generate different censoring rates of the responses.
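The historical wear-test data are not reproduced here, so the sketch below only mirrors the overall construction described above: fit a Gaussian process to (synthetic stand-in) historical responses and then draw possibly censored lifetimes from its predictive distribution. The input coding, kernel, and numbers are all assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for the 18 historical units: the four inputs encode the
# material type and the stress factors (load, temperature, humidity), and the
# response is a log-lifetime. None of these numbers come from the case study.
rng = np.random.default_rng(7)
X_hist = rng.uniform(0.0, 1.0, size=(18, 4))
y_hist = 5.0 - X_hist @ np.array([0.5, 1.0, 0.8, 0.6]) + 0.1 * rng.standard_normal(18)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X_hist, y_hist)

def pseudo_simulator(x_new, observation_time, rng=rng):
    """Draw a (possibly censored) lifetime for a new experimental setting."""
    mean, std = gp.predict(np.atleast_2d(x_new), return_std=True)
    lifetime = float(np.exp(rng.normal(mean[0], std[0])))
    return min(lifetime, observation_time), lifetime < observation_time

t_obs, failed = pseudo_simulator([1.0, 0.6, 0.7, 0.5], observation_time=300)
```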

The results of the different approaches are shown in Figure 3. The censoring rates corresponding to observation times 200, 300, and 500 are 45.1%, 34.7%, and 29.7%, respectively. Similar to the results from Section 5.1, "SeqEI exact" gives the best performance in general, and the performance of "SeqEI approx" is competitive with the best when the censoring rate is low.

Figure 3: The estimated probability of correct selection for the case study.

6 Conclusion

This paper proposed a sequential test planning approach to determine the most reliable material setting in accelerated lab experiments. To guarantee a tractable statistical mechanism for information collection and updating, we develop explicit model parameter update formulas via approximate Bayesian inference. We demonstrate the advantages of our proposal through theoretical results and numerical studies. We now remark on directions for future research. First, we assume in this paper that the observation time for each experimental unit is given. It would be more practical and efficient to determine the observation time for each test unit based on the existing experimental results. The decision of allocating observation time to each test unit can be even more critical when there is a deadline to complete all experiments. Second, this paper considers a single operating condition. In other studies, the target levels of the stress factors might differ across practical situations, each of which might favor a different material setting. It would be interesting to extend our work to this personalized optimization scheme and develop a sequential selection procedure to choose the optimal material setting for each individual environmental situation.

Appendix A Proof of Proposition 1

First of all, according to the assumption of the log-normal model, we have that

Then the log-lifetime, given that it exceeds $\log(c)$, follows a truncated normal distribution (see, for example, Johnson et al. (1970)), and its mean and variance are given by

and

According to (5) and (6), we have that

Therefore, the posterior mean and variance of $\boldsymbol{\beta}$ given the censored observation can be derived by

and

Appendix B Proof of Theorem 1

From the law of large numbers, Assumptions 1-2 imply that $\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i^\top \rightarrow \mathbf{M}$ almost surely. Furthermore, by Lemma EC.2 in Chen and Ryzhov (2019a), we have the following result on its convergence rate.

Lemma 1.

Suppose Assumptions 1-2 hold. Then, with probability 1,

This lemma will be used in the proof of Theorem 1.

In the remainder of this proof, we assume that a suitable set of measure zero has been discarded, so that we do not have to repeat the qualification "almost surely". Notice that, according to the Woodbury matrix identity (Woodbury 1950), the updating formulas in (10) can be expressed by

(20)
(21)

where the corresponding quantity is expressed in (9). The development of the proof is based on the expressions above.
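The precise statement of (20)-(21) is not reproduced above; the short check below only illustrates the Woodbury (Sherman-Morrison) identity behind rewriting the rank-one covariance update (6) in information form.

```python
import numpy as np

# Numerical check: the rank-one covariance update (6) is equivalent to adding
# x x' / sigma^2 to the precision matrix, by the Woodbury identity.
rng = np.random.default_rng(3)
d, sigma2 = 5, 0.04
A = rng.standard_normal((d, d))
Sigma = A @ A.T + np.eye(d)          # a valid covariance matrix
x = rng.standard_normal(d)

Sigma_new = Sigma - np.outer(Sigma @ x, Sigma @ x) / (sigma2 + x @ Sigma @ x)
precision_new = np.linalg.inv(Sigma) + np.outer(x, x) / sigma2
print(np.allclose(np.linalg.inv(Sigma_new), precision_new))   # True
```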

Without loss of generality, let . Denote

Then, (20) is equivalent to

Taking the -norm, we have

(22)

From (21), we have

(23)

Define the Borel sigma-algebra

Since is normally distributed and , by (23) and Assumptions 1-2, there must exist a positive constant such that for all ,

(24)

Similarly, by the triangle inequality, there must also exist a constant such that

(25)

By the Cauchy-Schwarz inequality, from (24) and (25), we have

(26)

We can also find that