Effective Model Calibration via Sensible Variable Identification and Adjustment, with Application to Composite Fuselage Simulation

12/29/2019 · by Yan Wang, et al. · Georgia Institute of Technology, Boeing, Virginia Polytechnic Institute and State University, Beijing University of Technology, Texas A&M University

Estimation of the model parameters of computer simulators, also known as calibration, is an important topic in many engineering applications. In this paper, we consider the calibration of computer model parameters with the help of engineering design knowledge. We introduce the concept of sensible (calibration) variables. Sensible variables are model parameters which are sensitive in the engineering model and whose optimal values differ from the engineering design values. We propose an effective calibration method to identify and adjust the sensible variables with limited physical experimental data. The methodology is applied to a composite fuselage simulation problem.


1 Introduction

Composite parts have been widely used in various industrial applications, including the aerospace, automotive, and energy industries, due to their superior properties: high strength-to-weight ratio, high stiffness-to-weight ratio, potentially long service life, and low life-cycle cost (Mallick, 2007). Dimensional variation modeling of composite parts lays a foundation for quality control and process improvement, and numerous studies have been conducted in this area. For literature reviews related to composite part assembly and variation modeling, see Shi (2006), Zhang and Shi (2016a, b), and Yue et al. (2018). Most of these studies on variation modeling and analysis of composite parts are based on finite element analysis (FEA), a numerical simulation method widely used in structural analysis, fluid dynamics, heat transfer, and electromagnetic potential analysis. Therefore, obtaining an accurate finite element model is a fundamental step for subsequent variation analysis and quality control of composite fuselage assembly.

Finite element modeling of a composite fuselage is a challenging task due to the compliant nature and anisotropic characteristics of composite structures (Jones, 1998; Barbero, 2013). Many model parameters, such as material property parameters, multiple layer thickness, and constraints on composite parts, have an important impact on the accuracy of the computer model. Although the values of these parameters are available according to engineering design knowledge, their actual values in the real fabricated fuselage are unknown because of inevitable variabilities in the manufacturing system, such as part fabrication errors, fixture errors, and positioning errors.

The present research is motivated by a composite fuselage dimensional simulation. Our goal is to find the optimal values of the model parameters, under which the finite element outputs match the structural load experimental observations of the composite fuselage. In the area of computer experiments, identification of unknown model parameters based on computer simulation outputs and physical experimental observations is referred to as calibration of computer models. There has been a vast amount of literature discussing calibration methodologies and their applications since the pioneering work of Kennedy and O'Hagan (2001). To name a few, see Kennedy and O'Hagan (2001), Higdon et al. (2004), Bayarri et al. (2007), Joseph and Yan (2015), Tuo and Wu (2015), and Gramacy et al. (2015). Calibration of computer models has been applied successfully in many areas, such as hydrocarbon reservoir research (Craig et al., 2001), the electrical activity of myocytes in cell biology (Plumlee et al., 2015), and the diffusion of radionuclides released during nuclear bomb tests (Pratola and Higdon, 2016).

In a typical computer experiment problem, the corresponding physical experiment is expensive to run, which is why the help of a computer simulator is needed. Thus, the physical experimental sample size is normally rather limited. On the other hand, many computer simulators have a number of calibration parameters. In many problems, simultaneously estimating all calibration parameters from the data can be intractable due to the curse of dimensionality. In the composite fuselage simulation problem discussed in Section 5, there are quite a few calibration parameters, such as material property parameters, multiple layer thickness, support parameters, fabrication angle, and temperature, but only eight physical experimental observations are available because of time and cost constraints.

To break the curse of dimensionality, we resort to engineering knowledge to obtain a sparse model for the calibration problem. In most engineering calibration problems, engineers have some expert knowledge of the model parameters from the design process. Based on this knowledge, engineers can give engineering values for the calibration parameters. Because the engineering knowledge is likely to be reliable, most engineering values are close to their corresponding optimal values, and only a subset of the model parameters needs to be adjusted. We call these parameters the sensible calibration variables, or sensible variables for short. Adjusting the sensible variables can significantly improve the accuracy of the simulation model. Sensible variables differ from the sensitive variables which have been widely considered in engineering problems; the related area is known as sensitivity analysis (Shi, 2006).

In this paper, we propose an effective method to identify and adjust the sensible variables. A loss function of the projected kernel calibration method introduced by Tuo (2019) is used to measure the model fitness, and an $\ell_1$-type penalty (Tibshirani, 1996) is used to encourage a minimum adjustment of the calibration parameters. We employ an approximation technique under which the objective function becomes convex and thus can be solved efficiently. The proposed method proceeds by identifying and adjusting the sensible variables, which is particularly useful when the physical experimental observations are limited. The performance of the proposed method is shown in numerical studies. We apply the proposed method to the calibration of the finite element model in the composite fuselage simulation problem.

The remainder of this paper is organized as follows. In Section 2, we review some existing calibration methods related to the current work. In Section 3, we propose an effective model calibration method via sensible variable identification and adjustment; we introduce the concept of sensible variables, discuss their properties, and illustrate how the proposed method identifies and adjusts the sensible variables. In Section 4, we examine the performance of the proposed method in a numerical study. In Section 5, we study the finite element model calibration problem of the composite fuselage using the proposed method. Concluding remarks and further discussion are given in Section 6. The data from the composite fuselage simulation problem and the R code are provided in the supplementary material.

2 Review on Calibration of Computer Models

Calibration of computer models has attracted considerable attention in the literature. In a calibration problem, the input of the computer model consists of two types of variables: the control variables $x$ and the calibration parameters $\theta$. The control variables are variables that can be controlled in the corresponding physical experiments. The calibration parameters are variables that are involved in the computer models but whose values cannot be controlled or measured in the physical experiments. Normally, calibration parameters represent certain inherent attributes of the physical system. We refer to Kennedy and O'Hagan (2001) for more discussion of the calibration parameters. Denote the set of design points for the physical experiments by $\{x_1, \ldots, x_n\} \subset \Omega$, where $\Omega$ is the region of the control variables, and the corresponding responses by $y_1, \ldots, y_n$. A commonly used model for the physical experimental observations is

$y_i = \zeta(x_i) + e_i, \quad i = 1, \ldots, n,$ (2.1)

where $\zeta(\cdot)$ is called the true process, which is an unknown function, and the $e_i$'s are the observation errors following $N(0, \sigma^2)$ with unknown $\sigma^2$. Kennedy and O'Hagan (2001) claim that the computer outputs cannot perfectly fit the physical experimental observations because the computer outputs are biased: the computer models are usually built under assumptions and simplifications which are not exactly correct in reality. Taking this model bias into account, one could use the following model (Kennedy and O'Hagan, 2001; Higdon et al., 2004)

$\zeta(x) = y^s(x, \theta^*) + \delta(x),$ (2.2)

where $\theta^*$ denotes the combination of the optimal calibration parameters, $y^s(\cdot, \cdot)$ denotes the computer model, and $\delta(\cdot)$ is the discrepancy function. The goal of calibration is to estimate $\theta^*$, so that the computer outputs are close to the physical experimental observations. However, equation (2.2) is not enough to fully determine $\theta^*$, because the function $\delta$ is also unknown. This problem is known as the identifiability issue of Kennedy and O'Hagan's method.

To resolve this identifiability problem, Tuo and Wu (2015) define $\theta^*$ as

$\theta^* = \underset{\theta \in \Theta}{\operatorname{argmin}} \; \| \zeta(\cdot) - y^s(\cdot, \theta) \|_{L_2(\Omega)},$ (2.3)

i.e., $\theta^*$ minimizes the $L_2$ distance between the true process and the computer outputs. Plumlee (2016) observes that (2.3) is generally equivalent to the orthogonality constraints

$\int_\Omega \frac{\partial y^s}{\partial \theta_j}(x, \theta^*) \left\{ \zeta(x) - y^s(x, \theta^*) \right\} dx = 0$ (2.4)

for $j = 1, \ldots, q$, where $q$ is the dimension of $\theta$. Based on (2.4), Plumlee (2016) proposes a Bayesian calibration method which rectifies the identifiability problem of Kennedy and O'Hagan's model.

Inspired by the work of Plumlee (2016), Tuo (2019) suggests a frequentist method to estimate the calibration parameter. First, we choose a positive definite kernel function $\Phi(\cdot, \cdot)$. A common choice is the Gaussian correlation family with

$\Phi(x_1, x_2) = \exp\{ -\phi \| x_1 - x_2 \|^2 \}$ (2.5)

for some $\phi > 0$. The projected kernel $\Phi_\theta$ of $\Phi$ according to the constraints (2.4) is defined as

$\Phi_\theta(x_1, x_2) = \Phi(x_1, x_2) - g_\theta^T(x_1) A_\theta^{-1} g_\theta(x_2),$ (2.6)

with

$g_\theta(x) = \int_\Omega \frac{\partial y^s}{\partial \theta}(t, \theta) \, \Phi(t, x) \, dt, \qquad A_\theta = \int_\Omega \int_\Omega \frac{\partial y^s}{\partial \theta}(t, \theta) \left( \frac{\partial y^s}{\partial \theta}(s, \theta) \right)^T \Phi(t, s) \, dt \, ds.$ (2.7)

Tuo (2019) proposes the following projected kernel calibration estimator

$\hat{\theta}^{PK} = \underset{\theta \in \Theta}{\operatorname{argmin}} \; (Y - Y^s(\theta))^T \left( \Phi_\theta(X) + n \lambda I_n \right)^{-1} (Y - Y^s(\theta)),$ (2.8)

where $Y = (y_1, \ldots, y_n)^T$, $Y^s(\theta) = (y^s(x_1, \theta), \ldots, y^s(x_n, \theta))^T$, $\Phi_\theta(X)$ is the $n \times n$ matrix with entries $\Phi_\theta(x_i, x_j)$, $I_n$ denotes the identity matrix, and $\lambda$ is a tuning parameter which can be chosen using the Bayesian method suggested by Chang and Joseph (2014). The scale parameter $\phi$ can be estimated by the maximum likelihood (ML) method, i.e., in (2.8) we optimize the objective function with respect to both $\theta$ and $\phi$. Tuo (2019) proves the consistency and efficiency of the projected kernel estimator.
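To make the estimator concrete, the following is a minimal numerical sketch of (2.6)-(2.8) as reconstructed above, on a one-dimensional toy problem. The simulator ys, its gradient dys, and all constants are illustrative stand-ins, not the fuselage model; the integrals in (2.7) are approximated by Monte Carlo as the paper later suggests for (3.11).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = rng.uniform(0.0, 1.0, 2000)            # fixed Monte Carlo nodes on [0, 1]

def kernel(x1, x2, phi=10.0):
    """Gaussian kernel (2.5): Phi(x1, x2) = exp(-phi * (x1 - x2)^2)."""
    return np.exp(-phi * np.subtract.outer(x1, x2) ** 2)

def ys(x, theta):
    """Toy computer model, linear in theta (stand-in for the real simulator)."""
    return np.sin(2 * np.pi * x) + theta[0] * x + theta[1]

def dys(x, theta):
    """Gradient of ys with respect to theta, shape (q, len(x))."""
    return np.vstack([x, np.ones_like(x)])

def proj_kernel_matrix(X, theta, phi=10.0):
    """Monte Carlo approximation of the projected kernel (2.6)-(2.7) at the data sites."""
    G = dys(T, theta)                      # (q, n_mc)
    g = G @ kernel(T, X, phi) / len(T)     # g_theta(x_i), shape (q, n)
    A = G @ kernel(T, T, phi) @ G.T / len(T) ** 2
    return kernel(X, X, phi) - g.T @ np.linalg.solve(A, g)

def pk_objective(theta, X, Y, lam=1e-3, phi=10.0):
    """Objective of the projected kernel calibration estimator (2.8)."""
    r = Y - ys(X, theta)
    M = proj_kernel_matrix(X, theta, phi) + len(X) * lam * np.eye(len(X))
    return r @ np.linalg.solve(M, r)

X = rng.uniform(0.0, 1.0, 8)               # eight "physical" design points
Y = np.sin(2 * np.pi * X) + 0.3 * X + 0.1 + rng.normal(0.0, 0.01, 8)
theta_pk = minimize(pk_objective, np.zeros(2), args=(X, Y), method="Powell").x

A derivative-free solver is used here because the objective depends on theta through the projected kernel matrix as well as the residual.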

Because of the high cost of physical experimental runs, only a small number of physical experimental observations can be obtained. Thus, only a few model parameters can be adjusted using a data-driven method. However, in many engineering problems, the number of model parameters can be so large that not all of them can be adjusted effectively. In the composite fuselage FEA simulation problem in Section 5, the computer model has one control variable (actuator force) and five calibration parameters: surface body thickness, support parameter, material thickness ratio, fabrics orientation, and temperature. However, only eight physical experiments can be conducted by adjusting the actuator forces to collect physical experimental observations. Similar challenges of parameter estimation occur in many composite part finite element model calibration problems in the aerospace, automotive, energy, and other industries. Clearly, existing calibration methods will suffer from the curse of dimensionality.

To break the curse of dimensionality, we will introduce the concept of sensible variables in Section 3. An effective calibration method will be proposed to solve the calibration problems with limited physical experimental observations.

3 Methodology

In this section, we introduce a novel methodology which tackles the curse of dimensionality in calibration problems with the help of engineering design information. A variable selection and estimation procedure is performed by identifying a subset of the calibration parameters which need to be adjusted. Such calibration parameters are called sensible variables. The most common definitions of "sensible" are practical, reasonable, logical, and rational. For example, sensible heat is literally the heat that can be felt: the energy transferred from one system to another that changes the temperature; it is used in contrast to latent heat, the heat that changes the phase. In the proposed method, by identifying and adjusting these sensible parameters, the performance of the computer model is improved; that is, the discrepancy between the computer model and the physical experimental observations is reduced.

3.1 Sensible Variable

In a typical engineering problem, initial guesses of the model parameters are usually available. These values are generally obtained from the engineering design information or domain expert knowledge. We call them the engineering design values. Ideally, the physical properties of a product should be consistent with its engineering design values. This consistency, however, may be violated in practice due to certain inevitable variabilities in the manufacturing system, such as fabrication errors and fixture errors. In the composite fuselage simulation, although an ideal engineering design value of the surface body thickness is given, the actual thickness of a specific batch of fabricated fuselage is still unknown because of fabrication errors. Similarly, from the engineering design, the fabrics orientation should be 45 degrees, but fabrication uncertainty may also exist.

Recall that calibration of model parameters may suffer from the curse of dimensionality when the input dimension of the calibration parameters is relatively high. Fortunately, our engineering knowledge suggests that most of the calibration parameters can reasonably be set to their engineering design values, because the quality of the product is generally well controlled. Thus, only a small number of the model parameters need to be adjusted. We call these variables the sensible calibration variables, or sensible variables for short. The remaining calibration parameters, which do not need to be adjusted, are called insensible variables.

It is worth noting that sensible variables differ from sensitive variables. The latter have been widely studied in engineering modeling; they are model variables that have a significant influence on the simulation outputs. Sensible variables must be sensitive: if a model parameter is insensitive, the right-hand side of (2.3) is largely unaffected by this parameter, and the optimization with respect to this parameter in (2.3) does not make sense. On the other hand, sensitive variables are not necessarily sensible. When the engineering design value of a model parameter is close to its optimal value, we do not need to adjust this parameter, even if it is sensitive.

Denote the engineering design values as $\theta^0 = (\theta_1^0, \ldots, \theta_q^0)^T$. We summarize the preceding discussion on the relationship between sensibility and sensitivity in Table 1:

Variable Type | Description | Needs Adjustment?
Insensitive | $\theta_j$ has little influence on the simulation outputs | No
Sensitive but insensible | $\theta_j$ is sensitive and $\theta_j^0$ is close to $\theta_j^*$ | No
Sensible | $\theta_j$ is sensitive and $\theta_j^0$ differs from $\theta_j^*$ | Yes
Table 1: Sensibility versus sensitivity.

In the next two subsections, we will propose a statistical method to identify and adjust the sensible variables.

3.2 Surrogate Modeling of Computer Models

Before introducing the proposed calibration methodology, we consider surrogate modeling of the computer outputs with many calibration parameters. In the fuselage simulation, each run of the FEA code is time consuming, so it is unrealistic to run the code as many times as we want. In order to make statistical inference about the model parameters, we need to reconstruct the computer output response surface based on computer code runs over a set of designed input settings. Specifically, we choose a set of design points $D = \{(x_1^s, \theta_1^s), \ldots, (x_m^s, \theta_m^s)\}$ and run the computer code at each point in this set. Space-filling designs are commonly used for this purpose in the computer experiments literature. We refer to Santner et al. (2013) for a review of computer experiment design methods. Based on the computer outputs over $D$, we build a surrogate model $\hat{y}^s$, which serves as an approximation to $y^s$.
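As a small illustration of the design step, the sketch below generates a space-filling design with a Latin hypercube sampler. The dimension counts, run size, and bounds are illustrative placeholders (the calibration ranges are borrowed from Table 4 of the case study); the paper itself does not prescribe this particular sampler.

import numpy as np
from scipy.stats import qmc

d, q, m = 1, 5, 80                          # control dim, calibration dim, runs
sampler = qmc.LatinHypercube(d=d + q, seed=0)
unit = sampler.random(n=m)                  # m points in the unit hypercube
lower = [0.0] + [0.29, 2.0, 19.5, 40.0, 68.0]    # illustrative lower bounds
upper = [650.0] + [0.30, 4.0, 22.5, 50.0, 71.0]  # illustrative upper bounds
D = qmc.scale(unit, lower, upper)           # design points (x_i^s, theta_i^s)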

Recall that $x \in \mathbb{R}^d$ and $\theta \in \mathbb{R}^q$. Assume that $y^s(x, \theta)$ is nearly linear in $\theta$. We apply a Taylor expansion to $y^s$ at $\theta^0$ and obtain

$y^s(x, \theta) = y^s(x, \theta^0) + \left( \frac{\partial y^s}{\partial \theta}(x, \theta^0) \right)^T (\theta - \theta^0) + o(\| \theta - \theta^0 \|).$ (3.1)

We assume that the residual term is negligible. Then we can consider a surrogate model of the form

$\hat{y}^s(x, \theta) = \hat{a}(x) + \hat{b}^T(x) (\theta - \theta^0),$ (3.2)

where $\hat{a}(x)$ and $\hat{b}(x)$ can be regarded as estimates of $y^s(x, \theta^0)$ and $\frac{\partial y^s}{\partial \theta}(x, \theta^0)$, respectively. It is worth noting that $\hat{a}$ and $\hat{b}$ are independent of the parameter $\theta$. This implies that the complex computer model has been approximated by a model of linear form.

Another advantage of the surrogate model (3.2) is that its partial derivative with respect to $\theta$, namely $\hat{b}(x)$, is independent of $\theta$. As a result, the matrix defined in (2.6) is also independent of $\theta$, and thus (2.8) becomes a convex optimization problem. In Section 3.3, we will use (3.2) to formulate the penalized calibration as a convex optimization problem which can be solved efficiently.

Now we consider how to construct the surrogate model (3.2) given the data from the computer experiment. A simple method is to impose parametric models on $\hat{a}$ and $\hat{b}$. For instance, we may suppose that $\hat{a}$ and $\hat{b}$ are linear in $x$, and then the parameters can be estimated using the least squares method. We will adopt this parametric method in the case study discussed in Section 5.
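A minimal sketch of this parametric construction, assuming a scalar control variable and taking both $\hat{a}(x)$ and the components of $\hat{b}(x)$ linear in $x$; the function name and array shapes are illustrative.

import numpy as np

# Xs (m,), Ts (m, q), Ys (m,): computer design inputs and outputs;
# theta0: vector of engineering design values.
def fit_linear_surrogate(Xs, Ts, Ys, theta0):
    V = Ts - theta0                         # centered calibration inputs
    # regressors: [1, x] for a(x) = a0 + a1*x, then (theta - theta0) and
    # x*(theta - theta0) for b(x) = b0 + b1*x, componentwise
    F = np.column_stack([np.ones_like(Xs), Xs, V, Xs[:, None] * V])
    coef, *_ = np.linalg.lstsq(F, Ys, rcond=None)
    q = V.shape[1]
    a0, a1, b0, b1 = coef[0], coef[1], coef[2:2 + q], coef[2 + q:]
    return (lambda x: a0 + a1 * x), (lambda x: b0 + b1 * x)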

A more flexible method is to use Gaussian process modeling to construct $\hat{a}$ and $\hat{b}$. Write $b = (b_1, \ldots, b_q)^T$. Then we consider the following Gaussian process surrogate model

$y^s(x_i^s, \theta_i^s) = a(x_i^s) + b^T(x_i^s)(\theta_i^s - \theta^0) + e_i^s,$ (3.3)

where $(x_i^s, \theta_i^s) \in D$; $a, b_1, \ldots, b_q$ are modeled as realizations of independent Gaussian processes; and the $e_i^s$'s are mutually independent nugget variables following the normal distribution $N(0, \sigma_s^2)$ with unknown $\sigma_s^2$. The simplest Gaussian process models for $a$ and $b$ are the simple kriging models, in which we assume $a, b_1, \ldots, b_q$ have zero mean and covariance functions $\sigma_a^2 \Phi_a, \sigma_{b_1}^2 \Phi_{b_1}, \ldots, \sigma_{b_q}^2 \Phi_{b_q}$, respectively, with unknown variances $\sigma_a^2, \sigma_{b_1}^2, \ldots, \sigma_{b_q}^2$ and known correlation functions $\Phi_a, \Phi_{b_1}, \ldots, \Phi_{b_q}$. One may introduce more degrees of freedom to these simple kriging models so that more complex functions can be fit. Such extensions are fairly straightforward and have been widely discussed in the literature; see Santner et al. (2013) and Banerjee et al. (2004) for examples. The nugget term is necessary because the partial linear Gaussian process model is only an approximation to the true function, and thus the fitted surrogate may not be able to interpolate $y^s$ at all design points.

The functions $\hat{a}$ and $\hat{b}$ can be estimated using the standard conditional inference technique for Gaussian process surrogate models. Conditional on the observed data and the model parameters, the expectations of $a(x)$ and $b_k(x)$ are

$\hat{a}(x) = \sigma_a^2 \, r_a^T(x) \, \Sigma^{-1} Y^s$ (3.4)

and

$\hat{b}_k(x) = \sigma_{b_k}^2 \, r_{b_k}^T(x) \, \Sigma^{-1} Y^s, \quad k = 1, \ldots, q,$ (3.5)

respectively, where $Y^s = (y^s(x_1^s, \theta_1^s), \ldots, y^s(x_m^s, \theta_m^s))^T$, $r_a(x) = (\Phi_a(x, x_1^s), \ldots, \Phi_a(x, x_m^s))^T$, $r_{b_k}(x) = (\Phi_{b_k}(x, x_1^s)(\theta_{1k}^s - \theta_k^0), \ldots, \Phi_{b_k}(x, x_m^s)(\theta_{mk}^s - \theta_k^0))^T$, $\Sigma = \sigma_a^2 \Phi_a(X^s, X^s) + \sum_{k=1}^q \sigma_{b_k}^2 V_k \Phi_{b_k}(X^s, X^s) V_k + \sigma_s^2 I_m$ with $V_k = \operatorname{diag}(\theta_{1k}^s - \theta_k^0, \ldots, \theta_{mk}^s - \theta_k^0)$, and $I_m$ denotes the identity matrix. We can choose $\hat{a}$ and $\hat{b}$ in (3.2) as (3.4) and (3.5), respectively. The model parameters can be estimated using the maximum likelihood method or Bayesian methods. For parameter estimation in Gaussian process models, we refer to Santner et al. (2013) and Banerjee et al. (2004). A related Gaussian process surrogate model is considered by Ba and Joseph (2012).

3.3 Penalized Orthogonal Calibration

Recall that the calibration parameter is $\theta = (\theta_1, \ldots, \theta_q)^T$. As discussed in Section 3.1, the key to breaking the curse of dimensionality is to adjust only the sensible variables, which form an unknown subset of $\{\theta_1, \ldots, \theta_q\}$. As before, we denote the engineering design parameter values by the vector $\theta^0$, and the optimal calibration parameter $\theta^*$ is given by (2.3). The goal of this paper is to propose a novel calibration method to identify and adjust the sensible variables. First we consider the estimator defined by the optimization problem

$\hat{\theta} = \underset{\theta \in \Theta}{\operatorname{argmin}} \; (Y - Y^s(\theta))^T \left( \Phi_\theta(X) + n \lambda I_n \right)^{-1} (Y - Y^s(\theta)) + \lambda_n \sum_{j=1}^q w_j |\theta_j - \theta_j^0|,$ (3.6)

where $Y$, $Y^s(\theta)$, $\Phi_\theta(X)$, and $\lambda$ are the same as in (2.8); $w = (w_1, \ldots, w_q)^T$ is a known weights vector; $\lambda_n$ is a tuning parameter; and $\| \theta - \theta^0 \|_{1,w} := \sum_{j=1}^q w_j |\theta_j - \theta_j^0|$.

The objective function of (3.6) consists of two components. The first part is

$L(\theta) = (Y - Y^s(\theta))^T \left( \Phi_\theta(X) + n \lambda I_n \right)^{-1} (Y - Y^s(\theta)),$ (3.7)

which is the objective function of the projected kernel calibration method (2.8). We call this term the model loss, because it captures the discrepancy between the computer outputs and the physical experimental observations. The second term is an adaptive-lasso-type penalty. It is a weighted multiple of the $\ell_1$ distance between the current calibration parameters and the engineering design values, denoted by

$\| \theta - \theta^0 \|_{1,w} = \sum_{j=1}^q w_j |\theta_j - \theta_j^0|.$ (3.8)

In statistical analysis, $\ell_1$-type penalties are widely used because they encourage sparse solutions to break the curse of dimensionality (Bühlmann and van de Geer, 2011; Tibshirani, 1996). Similar to Zou (2006), we suppose the weight $w_j = 1 / |\tilde{\theta}_j - \theta_j^0|^\gamma$ if $\tilde{\theta}_j \neq \theta_j^0$, and $w_j = \infty$ if $\tilde{\theta}_j = \theta_j^0$, where $\tilde{\theta}$ is an initial consistent estimator and $\gamma > 0$.

In the current context, a sparse $\hat{\theta} - \theta^0$ means that $\hat{\theta}_j = \theta_j^0$ for some or most $j$'s. The degree of sparsity is determined by the tuning parameter $\lambda_n$. If $\lambda_n = 0$, the proposed method reduces to the projected kernel calibration, which gives a non-sparse solution. When $\lambda_n$ goes to infinity, the proposed method gives a fully sparse solution with $\hat{\theta} = \theta^0$. In practice, one should make a suitable choice of $\lambda_n$ to balance the model fitting and the model complexity. Cross-validation is widely used for identifying the tuning parameter (Friedman et al., 2001). However, adaptive-lasso-type variable selection models often include too many variables when the tuning parameter is selected by cross-validation. Wang et al. (2007a) and Wang et al. (2007b) demonstrated that the adaptive lasso can identify the true model consistently when the tuning parameter is selected by a BIC-type criterion. As a result, in order to better convey the characteristics of the proposed method, we select $\lambda_n$ using the BIC-type criterion:

$\mathrm{BIC}(\lambda_n) = \log L(\hat{\theta}_{\lambda_n}) + d_{\lambda_n} \frac{\log n}{n},$ (3.9)

where $L(\hat{\theta}_{\lambda_n})$ is the model loss; $d_{\lambda_n} = \#\{ j : \hat{\theta}_{\lambda_n, j} \neq \theta_j^0 \}$ is the number of adjusted parameters; and $\hat{\theta}_{\lambda_n}$ is the estimate for the chosen value of $\lambda_n$.
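A sketch of the tuning step, assuming the reconstructed criterion (3.9). Here fit_fn and loss_fn are user-supplied placeholders: fit_fn solves (3.12) for a given lambda_n, and loss_fn evaluates the model loss of an estimate.

import numpy as np

def select_lambda(lambdas, fit_fn, loss_fn, theta0, n, tol=1e-8):
    best, best_bic = None, np.inf
    for lam_n in lambdas:
        theta_hat = fit_fn(lam_n)                          # solve (3.12)
        d = int(np.sum(np.abs(theta_hat - theta0) > tol))  # adjusted parameters
        bic = np.log(loss_fn(theta_hat)) + d * np.log(n) / n
        if bic < best_bic:
            best, best_bic = (lam_n, theta_hat), bic
    return best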

As outlined in Section 3.2, the computer code is expensive to run and thus $y^s$ cannot be regarded as a known function of $(x, \theta)$. Therefore, it is infeasible to solve (3.6) directly. To obtain a computationally efficient estimator, we replace the computer response surface with the surrogate model introduced in (3.2). According to (2.6)-(2.7), the projected kernel in (2.6) now becomes

$\tilde{\Phi}(x_1, x_2) = \Phi(x_1, x_2) - \tilde{g}^T(x_1) \tilde{A}^{-1} \tilde{g}(x_2),$ (3.10)

with

$\tilde{g}(x) = \int_\Omega \hat{b}(t) \Phi(t, x) \, dt, \qquad \tilde{A} = \int_\Omega \int_\Omega \hat{b}(t) \hat{b}^T(s) \Phi(t, s) \, dt \, ds.$ (3.11)

Then the estimator (3.6) becomes

$\hat{\theta}^{PO} = \underset{\theta \in \Theta}{\operatorname{argmin}} \; (Y - \hat{Y}^s(\theta))^T ( \tilde{\Phi}(X) + n \lambda I_n )^{-1} (Y - \hat{Y}^s(\theta)) + \lambda_n \sum_{j=1}^q w_j |\theta_j - \theta_j^0|,$ (3.12)

where $\hat{Y}^s(\theta) = (\hat{y}^s(x_1, \theta), \ldots, \hat{y}^s(x_n, \theta))^T$ and $\tilde{\Phi}(X)$ is the $n \times n$ matrix with entries $\tilde{\Phi}(x_i, x_j)$. We call the estimator defined by (3.12) the penalized orthogonal calibration, and write it as $\hat{\theta}^{PO}$.

Because $(\tilde{\Phi}(X) + n \lambda I_n)^{-1}$ is a positive semi-definite matrix and $\hat{Y}^s(\theta)$ is linear in $\theta$, (3.12) is a convex optimization problem. We can apply existing methods such as the NEWUOA algorithm (Powell, 2006) to solve this problem efficiently. In this paper, we use the Monte Carlo method to approximate the integrals in (3.11).
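A minimal sketch of solving (3.12) as reconstructed above. Under the surrogate (3.2) the projected kernel matrix no longer depends on theta, so it is factorized once; NEWUOA itself is not shipped with scipy, so Powell's derivative-free method stands in for it. The arguments ys_hat, PhiX, w, lam_n, and lam are assumed to be supplied by earlier steps.

import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def penalized_orthogonal_fit(X, Y, ys_hat, PhiX, theta0, w, lam_n, lam):
    n = len(Y)
    chol = cho_factor(PhiX + n * lam * np.eye(n))   # factorize once

    def objective(theta):
        r = Y - ys_hat(X, theta)                    # residual under (3.2)
        model_loss = r @ cho_solve(chol, r)         # empirical model loss (3.13)
        penalty = lam_n * np.sum(w * np.abs(theta - theta0))
        return model_loss + penalty

    return minimize(objective, theta0, method="Powell").x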

To justify the proposed method, we examine whether the three types of calibration variables in Table 1 would be adjusted by the method in the desired manner. We call the first part of (3.12), given by

$\hat{L}(\theta) = (Y - \hat{Y}^s(\theta))^T ( \tilde{\Phi}(X) + n \lambda I_n )^{-1} (Y - \hat{Y}^s(\theta)),$ (3.13)

the empirical model loss. First we consider insensitive variables. Suppose $\theta_j$ is insensitive, and suppose that our surrogate model is reasonably accurate. Then by definition, $\hat{y}^s$ should be (nearly) independent of $\theta_j$. Consequently, $\hat{L}(\theta)$ is also independent of $\theta_j$. In this case, (3.12) reduces to minimizing $w_j |\theta_j - \theta_j^0|$ with respect to $\theta_j$, which gives the answer $\hat{\theta}_j = \theta_j^0$. Thus the penalized orthogonal calibration leaves $\theta_j$ unadjusted. If $\theta_j$ is sensitive but insensible, then according to the consistency of the projected kernel calibration (Tuo, 2019), the minimizer of $\hat{L}$ with respect to $\theta_j$ should be close to $\theta_j^0$. In this situation, the penalty will regularize the estimate and set $\hat{\theta}_j = \theta_j^0$. If $\theta_j$ is sensible, $\theta_j = \theta_j^0$ does not give a good model fit. Then for a suitably chosen $\lambda_n$, the partial derivative of $\hat{L}$ at $\theta_j = \theta_j^0$ has a magnitude greater than $\lambda_n w_j$, which is the maximum subgradient of the penalty with respect to $\theta_j$. As a consequence, $\theta_j^0$ cannot be a solution to (3.12), and therefore $\theta_j$ will be adjusted.
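This argument can be condensed into a one-line subgradient condition. Under the objective (3.12) as reconstructed here, $\theta_j = \theta_j^0$ can remain part of a solution only if zero lies in the subdifferential of the objective with respect to $\theta_j$, that is,

\[
  \left| \frac{\partial \hat{L}}{\partial \theta_j}(\theta^0) \right| \le \lambda_n w_j .
\]

Whenever the partial derivative of the empirical model loss at $\theta^0$ exceeds $\lambda_n w_j$ in magnitude, the inequality fails and the variable is adjusted; insensitive and insensible variables satisfy it and stay at $\theta_j^0$.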

4 Numerical Study

In this section, we conduct a simulation study to examine the performance of the proposed method. Suppose the vector of control variables is $x$ and the true process is

(4.1)

Suppose the computer model is

(4.2)

where $\theta = (\theta_1, \ldots, \theta_q)^T$ is the vector of the calibration parameters. Let $\{x_1, \ldots, x_n\}$ be the set of the design points, where the $x_i$'s are generated independently from the uniform distribution on the control variable region. The physical experimental observations are generated by

$y_i = \zeta(x_i) + e_i,$ (4.3)

where the observation errors $e_i$'s are mutually independent and follow a normal distribution with mean zero.

We suppose the engineering design values of $\theta$ are as listed in the first row of Table 2. Let $\hat{\theta}^{PO}$ be the penalized orthogonal calibration of $\theta$ given by (3.12). We use a Gaussian function (2.5) as the kernel function $\Phi$. To determine the hyper-parameter $\phi$ in (2.5), we first build a Gaussian process model to approximate the physical observations and estimate $\phi$ using the maximum likelihood method. The BIC-type criterion (3.9) is used to choose the tuning parameter $\lambda_n$ in the penalized orthogonal estimation of the sensible variables. We repeat the above simulation procedure 100 times to assess the average performance of the proposed method.
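A sketch of this hyper-parameter step: fit a Gaussian process to the physical observations by maximum likelihood and convert the fitted RBF length-scale $\ell$ to the $\phi$ of (2.5) via $\phi = 1/(2\ell^2)$, since sklearn's RBF kernel is $\exp\{-\|x_1 - x_2\|^2 / (2\ell^2)\}$. The data arrays are placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def estimate_phi(X, Y):
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
        normalize_y=True,
    )
    gp.fit(np.atleast_2d(X).T, Y)        # ML-II fit of the length-scale
    ell = gp.kernel_.k1.length_scale
    return 1.0 / (2.0 * ell**2)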

Recall that the optimal value of $\theta$, denoted as $\theta^*$, is defined as

$\theta^* = \underset{\theta \in \Theta}{\operatorname{argmin}} \; \| \zeta(\cdot) - y^s(\cdot, \theta) \|_{L_2(\Omega)}.$ (4.4)

It is worth noting that (4.4) can only define the sensitive variables, because the above objective function is flat with respect to the insensitive variables. Using the standard sensitivity analysis method (Sobol, 2001), we identify three of the calibration parameters as insensitive.

We compare the proposed method with the usual projected kernel calibration estimator, denoted by $\hat{\theta}^{PK}$, as well as the ordinary least squares estimator $\hat{\theta}^{OLS}$ (Tuo and Wu, 2015) defined as

$\hat{\theta}^{OLS} = \underset{\theta \in \Theta}{\operatorname{argmin}} \sum_{i=1}^n \left( y_i - y^s(x_i, \theta) \right)^2.$ (4.5)

We calculate the integrated error, defined as

$IE(\hat{\theta}) = \| \zeta(\cdot) - y^s(\cdot, \hat{\theta}) \|_{L_2(\Omega)}^2,$

to assess the performance of an estimator $\hat{\theta}$. Clearly, $IE(\hat{\theta})$ measures the discrepancy between the true process and the computer model. We compute the Monte Carlo average value of $IE$ for each estimator mentioned above. We also include the optimal value $\theta^*$ for comparison. The results from the engineering design $\theta^0$, the ordinary least squares estimation (4.5), the projected kernel calibration estimation (2.8), the penalized orthogonal estimation (3.12), and the optimal values of the calibration parameters are summarized in Table 2. The integrated errors and the point estimates of the sensitive parameters are presented; we skip the estimates of the insensitive parameters because their optimal values do not exist.

Estimator | IE | Point estimates
$\theta^0$ (engineering design) | 35.177 | 0, 1, -0.5, 0, 0, 7
$\hat{\theta}^{OLS}$ | 24.046 | 0.017, 0.027, -0.060, -0.707, 0.054, 7.523
$\hat{\theta}^{PK}$ | 27.712 |
$\hat{\theta}^{PO}$ | 22.628 |
$\theta^*$ (optimal) | 22.581 |
Table 2: The performance of the engineering design values, the ordinary least squares, projected kernel, and penalized orthogonal estimators, and the optimal parameter values.

From Table 2, we can see that several engineering design values are relatively far from their optimal values, while the other variables are either identical or close to their optimal values. Thus adjusting only the former variables is ideal, which is exactly what the proposed method does, according to Table 2. This result implies that the proposed method can successfully identify the sensible variables.

To assess the estimation accuracy regarding the sensible variables, we consider the relative error defined as

$RE(\hat{\theta}_j) = \frac{| \hat{\theta}_j - \theta_j^* |}{| \theta_j^* |}.$

The results are summarized in Table 3. Clearly, the relative errors of the proposed method are much lower than those of the ordinary least squares and projected kernel methods. This implies that the proposed method can adjust the sensible variables effectively.

Sensible variable | $\hat{\theta}^{OLS}$ | $\hat{\theta}^{PK}$ | $\hat{\theta}^{PO}$
(first) | 91% | 33.33% | 9.667%
(second) | 89.05% | 66.79% | 2.154%
(third) | 9.84% | 7.87% | 0.234%
Table 3: Relative errors of the estimators.

To better understand the performance of the proposed method, we present the curves of the estimators with respect to the tuning parameter $\lambda_n$. Let $d_j = | \hat{\theta}_j - \theta_j^0 |$. Figure 1 plots the average curve of each $d_j$ over the 100 simulation runs.

Figure 1: The relationship between $d_j = | \hat{\theta}_j - \theta_j^0 |$ and the tuning parameter $\lambda_n$.

Figure 1 shows that, as $\lambda_n$ increases, the curves for six of the variables decrease rapidly; their values vanish even for a small $\lambda_n$. According to our method, these six variables are suggested as insensible. This is a correct result, as we have learnt from the sensitivity analysis that three of them are insensitive, and from Table 2 that the other three are sensitive but insensible.

5 Composite Fuselage Simulation

FEA is an effective numerical technique for composite fuselage analysis. During the development of the variation model and the quality control system for the composite fuselage assembly process, an accurate finite element model is needed, which helps to increase the flexibility and efficiency of model development and control system design. ANSYS Composite PrepPost is an add-in module to the ANSYS Workbench and is integrated with standard analyses for composite parts. Using Composite PrepPost, two kinds of materials, carbon fiber and epoxy resin, are used to generate multiple fabrics with different geometric parameters. Fabrics can be stacked up in a specific sequence and orientation; the stack-ups are then used to generate sub-laminates, which are further integrated into a composite part. Stresses and deviations, in addition to a range of failure criteria, can be analyzed using the finite element model.

In the finite element model of the composite fuselage, the input variable, calibration variables, and output variables are summarized in Table 4. One input variable $x$, the actuator force, with eight levels is considered, as shown in Figure 2 (left). Dimensional deformations at five key points are selected as the computer outputs of the FEA, also shown in Figure 2 (left); the circumferential distances of these five key points in the FEA cover the range from 0 to 37.71 inches. In addition, when we calibrate the finite element model of the composite fuselage, five calibration parameters are considered: surface body thickness, support parameter, material thickness ratio (between carbon fiber and epoxy resin), fabrics orientation angle, and temperature. The engineering design values for the calibration parameters are determined based on the engineering design of the fuselage, or fixture constraints in the physical experiments. However, due to inevitable manufacturing deviations, the real fuselages cannot be exactly the same as the design values. Here, we obtain the parameter ranges from the literature (Feraboli et al., 2009) and other engineering background knowledge.

Figure 2: FEA simulation: input actuator force and dimensional outputs at five key points (left); physical experiment setup with dimensional outputs at nine raw measurement points (right).

Input: actuator force $x$
Calibration parameters:
  surface body thickness, 0.29-0.30 inch
  support parameter, 2-4 inches
  material thickness ratio, 19.5-22.5
  fabrics orientation, 40-50 degrees
  temperature, 68-71 degrees
Output: dimensional deformations at the five key points
Table 4: Variables in the FEA.

In order to obtain the physical experimental observations, a real structural load experiment was set up to measure the deformations of the composite fuselage under different actuator forces. We ensure that the key parameters (e.g., length, width, thickness, and weight) are consistent between the computer simulated part and the real composite fuselage. In the validation test, the actuator force is changed from 0 to 650 pounds, as shown in Table 4. Because of the experimental setup, it is challenging to localize the five key points of the FEA output accurately in our physical experiment. Instead, we obtain the dimensional responses at multiple raw measurement points and then calculate the corresponding observations by linear interpolation. In order to cover the range of circumferential distance 0-37.71 inches in the FEA, dimensional deformations at nine points are measured, as shown in Figure 2 (right). The circumferential distances of these nine points range from about 10 to 45 inches. We need to point out that circumferential distances from 0 to 10 inches are not measurable because of the fixture setup of the physical experiment. After obtaining the dimensional deformations at these nine raw measurement points, we compute the five physical experimental observations associated with the same circumferential distances as in the FEA by linear interpolation. Afterwards, we apply the proposed calibration method to the computer outputs and the interpolated physical experimental observations to find the optimal values of the model parameters.
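A sketch of this interpolation step: map deformations measured at the nine raw points onto the five FEA key-point locations by linear interpolation in circumferential distance. All numbers below are placeholders, not the measured data; only the 18.85-inch key point and the 10-45 inch raw range come from the text.

import numpy as np

raw_dist = np.linspace(10.0, 45.0, 9)       # nine measured locations (inches)
raw_defl = 0.1 * np.sqrt(raw_dist)          # stand-in for measured deformations
key_dist = np.array([6.0, 12.0, 18.85, 28.0, 37.71])  # hypothetical key points
key_defl = np.interp(key_dist, raw_dist, raw_defl)
# np.interp holds the boundary value for key points below 10 inches, where the
# fixture makes direct measurement impossible.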

All setting details of the finite element model and the physical structural load experiment are as follows.

  • Pre-specified design parameter values: the engineering design values $\theta^0$;

  • Physical design: the eight levels of the actuator force $x$;

  • Interpolated physical experimental observations: the deformations at the five key points under each actuator force level;

  • Computer design: a maximin Latin hypercube design (Morris and Mitchell, 1995) over the calibration parameter ranges in Table 4;

  • Computer outputs: the FEA deformations at the five key points over the computer design.

Figure 3: Relationship between the computer output at the third observation point and the actuator force $x$, with three different choices of $\theta$.

Figure 3 shows the relationship between the computer outputs at the third observation point (associated with circumferential distance 18.85 inches) and the actuator force over the computer design, with three different choices of $\theta$. It shows a linear relationship between the output and $x$ for each fixed $\theta$. After a careful examination, we confirm that such a linear relationship holds for each $\theta$. This result is consistent with our engineering knowledge that the elastic dimensional deformation should be a linear function of the actuator force. Therefore, we use a linear model to link $x$ and the outputs, whose slope is a function of $\theta$. Hence, for the input $x$ and parameter $\theta$, we use the following model to fit the $k$th item of the computer output:

$y_k^s(x, \theta) = \beta_k(\theta) \, x.$ (5.1)

By applying the linear approximation (3.2), (5.1) becomes

$\hat{y}_k^s(x, \theta) = \left( a_k + b_k^T (\theta - \theta^0) \right) x.$ (5.2)

The parameters $a_k$ and $b_k$ are estimated by using the least squares method.
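A minimal sketch of this least squares fit for one output, assuming the reconstructed form of (5.2); array shapes and the function name are illustrative.

import numpy as np

# Xs (m,): actuator forces; Ts (m, 5): calibration settings; Yk (m,): the
# k-th computer output; theta0: engineering design values.
def fit_slope_surrogate(Xs, Ts, Yk, theta0):
    V = Ts - theta0
    F = np.column_stack([Xs, Xs[:, None] * V])   # regressors x and x*(theta - theta0)
    coef, *_ = np.linalg.lstsq(F, Yk, rcond=None)
    a_k, b_k = coef[0], coef[1:]
    return a_k, b_k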

We calculate the total model loss for all outputs as a weighted sum of the $k$th empirical model losses,

$\hat{L}_{\mathrm{total}}(\theta) = \sum_{k=1}^5 v_k \hat{L}_k(\theta),$ (5.3)

where the weight $v_k$ corresponds to the relative importance of the $k$th measurement point. In the structural load experiment of the composite fuselage, different measurement points reflect different quality features, and we choose the weights according to our engineering knowledge.

Figure 4 shows the relationship between $d_j = | \hat{\theta}_j - \theta_j^0 |$ and the tuning parameter $\lambda_n$:

Figure 4: The relationship between $d_j$ and $\lambda_n$ under the engineering design values $\theta^0$.

It can be seen that three of the curves decay to zero rapidly as $\lambda_n$ increases, which implies that the corresponding three calibration parameters are insensible variables. Consequently, we do not need to adjust the engineering design values of these three parameters. We also observe that one of the remaining curves does not decay as $\lambda_n$ increases, and the other decays rather slowly. Therefore, these two parameters are sensible variables.

We are also interested in whether the insensible variables are sensitive. We change their engineering design values and repeat the analysis. The corresponding results are shown in Figure 5. It can be seen that, in this case, two of the curves still decay to zero rapidly as $\lambda_n$ increases, while one does not. This implies that the former two variables are insensitive, whereas the latter is sensitive but has an engineering design value very close to its optimal value.

Figure 5: The relationship between $d_j$ and $\lambda_n$ under the changed engineering design values.

Next, we show that, after sensible variable identification, the model loss can be dramatically reduced with a minimal amount of model adjustment. Table 5 gives the model loss $L$ defined in (3.7) and the empirical model loss $\hat{L}$ defined in (3.13) under several input settings. It is worth noting that the model loss cannot be computed given only the training data; here we conduct an extra set of validation computer runs so that $L$ can be computed numerically. In Table 5, we list 11 settings of the calibration parameters. The last column of Table 5 gives the number of adjusted calibration parameters, which can be regarded as the model complexity. The first row is the engineering design values $\theta^0$; under this setting, no calibration parameters are adjusted and thus the complexity is zero. Next, we consider the parameter estimation given by the projected kernel method (2.8) adjusting only sensible variables. Since we have already identified the two sensible variables using the proposed method, we consider three cases: (i) the estimation adjusting only the first sensible variable; (ii) the estimation adjusting only the second sensible variable; and (iii) the estimation adjusting both. The model complexities for these three cases are one, one, and two, respectively. We also list six calibration parameter values, which are the inputs in the training data with the six smallest model losses.

Values of $\theta$ | Model loss $L$ | Empirical model loss $\hat{L}$ | Complexity
$\theta^0$ | 353.15 | 391.36 | 0
adjusting the first sensible variable | 212.52 | 235.31 | 1
adjusting the second sensible variable | 81.24 | 85.41 | 1
adjusting both sensible variables | 53.29 | 63.11 | 2
training input | 52.61 | 69.38 | 4
training input | 55.12 | 73.81 | 4
training input | 57.91 | 75.57 | 5
training input | 65.59 | 82.81 | 5