Knowledge about the basic behaviour of materials exposed to fire is essential for developing successful fire safety strategies. Depending on the type and height of a building, certain fire requirements need to be fulfilled, e.g. requirements w.r.t. the load-bearing function (R) and the separating function (EI). In the case of timber buildings, the performance not only of the timber members, but also of protective materials such as gypsum plasterboards and insulation materials, is of high importance for the fire design of the building structure. These different materials are usually combined to build floor and wall elements with different layups, so-called timber frame assemblies. The separating function of timber frame assemblies is usually verified using the component additive method (CAM) (Frangi et al., 2010; Mäger et al., 2017; Just and Schmid, 2018). This method is rather flexible for calculating the separating function because it handles arbitrary layups made of various materials and thicknesses. Producers of fire protection products (e.g. gypsum plasterboard and insulation) need to determine input factors for the individual materials so that the separating function of a timber frame assembly with these materials can be verified using the CAM.
Indeed, the same protective material (e.g. a gypsum plasterboard with a given thickness) contributes differently to the fire resistance of a timber frame assembly in different setups. The CAM therefore considers (1) the material and thickness of a layer and (2) modification factors taking into account the neighbouring layers. This leads to a high number of possible combinations for timber frame assemblies, which cannot all be tested in fire resistance tests. Therefore, the factors of the CAM are usually derived based on finite element (FE) models and accompanying fire resistance tests.
Fire resistance tests using the standard EN/ISO temperature-time curve according to ISO 834-1:1999 (1999) and EN 1363-1:2012 (2012) constitute the basis for these simulations. In the fire tests, the temperature is recorded over time inside the specimen at specific distances to the fire-exposed surface. These recordings are used as a reference for FE simulations of the same setup. Heat transfer models using temperature-dependent effective thermal material properties (specific heat capacity $c$, thermal conductivity $\lambda$ and material density $\rho$) are usually employed to simulate the temperature development inside timber frame assemblies exposed to fire. These material properties are then calibrated such that the simulation output matches the recorded temperatures. Since these properties are not strictly physical quantities, they are called effective material properties. They indirectly account for fissures, cracks and moisture flow inside the specimen (Frangi et al., 2010).
The conventional process of determining these effective material properties is slow and inaccurate, as the calibration is usually done manually. All temperature measurements are averaged, thus eliminating the variability in the material behaviour and not accounting for it in the calibration. The derivation of thermal material properties is conventionally done as follows (Mäger et al., 2017):
1. Assume effective thermal material properties and simulate a specific layup with FE heat transfer models;
2. Compare the resulting temperatures with the averaged measurements;
3. Adjust the assumed properties and repeat until simulated and measured temperatures agree.
A more rigorous calibration of these effective material properties can be achieved by parameterizing the temperature-dependent thermal material properties with a set of model parameters. Through this parametrization, the problem of determining the temperature-dependent effective material properties is recast as a problem of determining the real-valued parameterizing model parameters. The calibration problem can then be posed in a probabilistic setting. This allows a proper treatment of uncertainties arising from material fluctuations, measurement errors and model insufficiencies. One general way to do this is the so-called Bayesian inversion framework (Beck and Katafygiotis, 1998; Gelman et al., 2014; Yu et al., 2019). In this framework, the model parameters are seen as random variables. Instead of trying to determine one particular value for these parameters, this probabilistic framework determines the full probability distribution of the model parameters conditioned on the observed measurements. This distribution contains much more information about the calibrated properties than the single point estimate from the conventional approach. For example, it allows computing expected values, maximum a posteriori estimates, confidence intervals on the calibrated values and the full correlation structure. Furthermore, the calibration can be verified easily by computing the posterior predictive distribution (Gelman et al., 1996).
To determine the probability distribution of the material properties, it is necessary to repeatedly evaluate the FE heat transfer model. To reduce the computational burden associated with repeated model evaluations, it has become customary to replace the computational forward model with a cheap-to-evaluate surrogate. Therefore, the Bayesian inversion framework is here combined with the polynomial chaos expansions (PCE) surrogate modelling technique (Sudret, 2007; Blatman, 2009; Guo et al., 2018).
When working with models with multiple input parameters, the question of the relative importance of individual parameters with respect to the output arises naturally. Quantifying this influence is called sensitivity analysis. One family of approaches comprises the so-called variance decomposition techniques (Saltelli et al., 2000; Arwade et al., 2010). These methods attempt to apportion the variance of the probabilistic model output to the individual input parameters. The Sobol' indices are one such variance decomposition technique (Sobol', 1993). Determining the Sobol' indices is typically computationally expensive, but it has been shown by Sudret (2006) that they can be computed easily from the coefficients of a PCE surrogate model. Their computation yields valuable insights into the heat transfer model's properties.
In this paper, the material properties of four different gypsum insulation boards (Products A-D, E1-E4) are calibrated based on fire resistance tests carried out with these materials. These experimental results are presented and discussed in detail in Section 2. The calibration is carried out with the Bayesian model calibration framework accelerated by constructing a PCE-based surrogate model as detailed in Section 3. Section 4 outlines how the employed surrogate model can be used to conduct a global sensitivity analysis of the considered computational model. Finally, in Section 5, the calibration is verified using two fire tests that use two of the calibrated materials (Product C and Product D) in different experimental setups (V1 and V2).
2 Experiments and modelling
Two fire resistance tests with horizontally oriented specimens constitute the experimental basis for the analysis in this paper (Just, 2016; Breu, 2016). The unloaded tests were conducted on the model-scale furnace of SP Wood Building Technology (today's Research Institutes of Sweden, RISE) and were exposed to the EN/ISO temperature-time curve (EN 1363-1:2012, 2012; ISO 834-1:1999, 1999). Four different gypsum plasterboard setups (E1 to E4; Just, 2016) were tested in Test 1 and two (V1 and V2; Breu, 2016) in Test 2 (Figure 1). Tables 1 and 2 show the layups of the specimens. The space between and around the specimens was at least 100 mm and was filled with Product D boards to protect the carrying layer, i.e. the last layer. The carrying layer was a 19 mm particle board in Test 1 (specimens E1 to E4) and a 15 mm Product D board in Test 2 (specimens V1 and V2). Figure 2 shows the specimens of Test 2 during fabrication.
In specimens E1 to E4 (Test 1), five wire thermocouples and one copper disc thermocouple measured the temperatures at a single interface between the layers. The fluctuations of the sensor readings can mainly be attributed to variations in the material properties. The measured temperatures are displayed in Figure 3.
In specimens V1 and V2 (Test 2), three wire thermocouples were placed between each layer. The measured temperatures at the two interfaces 1 and 2 are displayed in Figure 3.
The temperature measurements at each sensor were collected at $N$ discrete time steps $t_i$:
$$\mathcal{Y}^{(s)} = \left( y^{(s)}(t_1), \dots, y^{(s)}(t_N) \right),$$
where $i = 1, \dots, N$ and $s$ indexes the sensors. For simplicity, the superscript $(s)$ is omitted unless required to distinguish between individual measurements. Therefore, in the sequel, $\mathcal{Y}$ stands for a vector of measurements captured by a single sensor.
| Specimen | Layer 1 | Layer 2 |
|---|---|---|
| E1 | Product A, 12.5 mm | particle board, 19 mm |
| E2 | Product B, 9.5 mm | particle board, 19 mm |
| E3 | Product C, 12.5 mm | particle board, 19 mm |
| E4 | Product D, 15 mm | particle board, 19 mm |
| Specimen | Layer 1 | Layers 2+3 |
|---|---|---|
| V1 | Product C, 12.5 mm | Product D, 2 × 15 mm |
| V2 | Product D, 15 mm | Product D, 2 × 15 mm |
2.2 Forward modelling
The experiments described in the previous section were modelled using one-dimensional heat transfer FE models. The reduction to a one-dimensional setup is justified because the experimental heat flux is known to be mostly perpendicular to the exposed surface. The simulations were conducted using the general-purpose finite element software Abaqus (Abaqus FEA, 2017). The boundary conditions accounted for the energy input through convection and radiation on the heated surface and the corresponding losses on the unexposed side.
The radiation temperature was assumed equal to the gas temperature; it followed the EN/ISO temperature-time curve (EN 1363-1:2012, 2012; ISO 834-1:1999, 1999) on the exposed side and was held constant on the unexposed side (as in the experiments). The emissivity and the convection coefficient were taken according to EN 1991-1-2:2002 (2002). The element size in the FE mesh was . The temperature-dependent material properties of the particle board (Test 1) were taken from Schleifer (2009).
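To make the forward model concrete, the following is a minimal one-dimensional explicit finite-difference sketch of such a heat transfer simulation with convection and radiation boundary conditions. It is not the Abaqus model used in the paper; the emissivity, convection coefficient, mesh and material values are placeholder assumptions for illustration only.

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W/(m^2 K^4)]
EPS   = 0.8        # assumed emissivity (placeholder)
H_C   = 25.0       # assumed convection coefficient [W/(m^2 K)] (placeholder)

def iso834(t_s):
    """EN/ISO standard temperature-time curve; t in seconds, T in deg C."""
    return 20.0 + 345.0 * np.log10(8.0 * t_s / 60.0 + 1.0)

def heat_transfer_1d(lam, c, rho, thickness=0.0125, n_el=25,
                     t_end=600.0, dt=0.05, t_amb=20.0):
    """Explicit nodal energy balance; lam, c, rho are callables T -> value."""
    dx = thickness / n_el
    T = np.full(n_el + 1, t_amb)              # nodal temperatures [deg C]
    out_t, out_T = [], []
    for step in range(int(t_end / dt)):
        t = step * dt
        Tg = iso834(t)
        k = lam(T)
        q = np.zeros_like(T)
        # conduction between neighbouring nodes (interface conductivity)
        ki = 0.5 * (k[:-1] + k[1:])
        flux = ki * (T[:-1] - T[1:]) / dx     # W/m^2, positive towards back
        q[:-1] -= flux
        q[1:]  += flux
        # exposed face: convection + radiation exchange with furnace gas
        q[0] += H_C * (Tg - T[0]) + EPS * SIGMA * (
            (Tg + 273.15)**4 - (T[0] + 273.15)**4)
        # unexposed face: losses to ambient
        q[-1] += H_C * (t_amb - T[-1]) + EPS * SIGMA * (
            (t_amb + 273.15)**4 - (T[-1] + 273.15)**4)
        # nodal heat capacity (half cells at the two boundaries)
        cap = c(T) * rho(T) * dx
        cap[0] *= 0.5
        cap[-1] *= 0.5
        T = T + dt * q / cap
        out_t.append(t)
        out_T.append(T[-1])                   # unexposed-side temperature
    return np.array(out_t), np.array(out_T)
```

The small time step keeps the explicit scheme stable for the assumed gypsum-like property magnitudes; an implicit scheme (as in Abaqus) would allow much larger steps.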
The unknown parameters of interest are the temperature-dependent effective material properties of the investigated insulation materials: the thermal conductivity $\lambda$, the specific heat capacity $c$ and the material density $\rho$.
With this, the computational forward model is
$$\mathcal{M}: \boldsymbol{x}_{\mathcal{M}} \mapsto \mathcal{M}(\boldsymbol{x}_{\mathcal{M}}).$$
For every set of material properties, this model returns the temperature evolution at the measurement locations and at the time instants $t_1, \dots, t_N$ at which measurements are available (see Section 2.1).
Since each of those parameters is a function of the temperature, they cannot be directly calibrated. Instead, these functions have to be parameterized with a set of scalar parameters, as described next.
2.3 Parametrization of material properties
The parametrization of the three temperature-dependent material properties ($\lambda$, $c$, $\rho$) is a crucial step of the calibration procedure. It consists of specifying a set of parameters that define the shape of the temperature-dependent function describing each material property. The choice of these parameters is delicate, as it imposes a certain temperature-dependent behaviour on the material properties. A priori, there are no physical constraints on this thermal behaviour besides positivity, so the properties are generally only constrained to be non-negative functions of temperature: $\lambda(T) \ge 0$, $c(T) \ge 0$ and $\rho(T) \ge 0$.
One further complication lies in the fact that these properties are mere effective properties and cannot generally be measured. To find constraints on these parameters, it is thus necessary to rely on previous calibration attempts of gypsum insulation boards (Breu, 2016; Schleifer, 2009) in conjunction with measurements of certain properties, where available.
By gathering information from such previous attempts, the thermal properties are parameterized by six parameters. This parametrization is flexible enough to enable inference and follows physical and empirical reasoning, as described next.
We propose to distinguish two key processes during which the temperature-dependent material properties change significantly:
- First key process
After the free water content in the gypsum insulation boards evaporates, the chemically bound water of the gypsum, which is composed of calcium sulfate dihydrate CaSO4·2H2O, is driven off. In this process, dehydration first forms calcium sulfate hemihydrate CaSO4·0.5H2O (also called bassanite) and then anhydrite III CaSO4. The dehydration consumes heat, which is modelled as a local increase of the specific heat capacity, a reduction in the conductivity and a reduction in the material density.
- Second key process
Thermogravimetric analyses have shown a second peak in the specific heat, caused by the chemical transformation of secondary components of the gypsum insulation boards at elevated temperatures (Schleifer, 2009). This second key process is modelled as an increase in the material conductivity, a peak in the specific heat and a further reduction of the material density.
The temperatures at which these two key processes occur cannot be equally well prescribed a priori. While the temperature of the water evaporation is well known to occur at approximately with its main effect taking place at until it tails off at , the second key process cannot be characterized this precisely. It is assumed that the second key process starts at the unknown temperature and ends at . Additionally, the relative location of its main effect between and is parameterized with . These two temperatures heavily influence the evolution of the thermal properties and are thus used as temperatures of change in all effective material properties.
In the present setting of heated gypsum boards, the initial conductivity at ambient temperature is assumed to be (Breu, 2016). During the first key process, the conductivity is assumed to decrease linearly to a second value that is parameterized by . From the onset of the second key process, the conductivity increases linearly to a third value, parameterized by , which is reached at the highest simulation temperature of .
Phase changes require a significant amount of energy. To model the energy requirement associated with the evaporation of water trapped inside the insulation material, the specific heat is modelled with two piecewise linear spikes during both key processes, while being constant at (Schleifer, 2009) for the other temperatures. The specific heat at the peaks is parameterized by for the first process and for the second one.
During the first and second key processes, gaseous products are emitted (water vapour and carbon dioxide, respectively) and the density of the gypsum is thus reduced. This density reduction was studied in Schleifer (2009) and the results are applied here directly. Starting from the density measured at room temperature, the density linearly reduces during the first key process to . It then remains constant before linearly reducing further to between the start of the second key process and its main effect. The parametrization of the material properties is visualized in Figure 4.
To finalize the parametrization, reasonable ranges are defined for all parameters. These ranges correspond to bounds on the parameters that are the results of prior calibration attempts along with expert judgement. These ranges are given along with a summary of the parameters in Table 3 with plots of the resulting temperature-dependent material properties in Figure 5.
These six parameters are gathered into a vector , which fully characterizes the temperature-dependent behaviour of the gypsum insulation boards.
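As an illustration, the piecewise-linear shape of the effective properties can be encoded with `np.interp`. The breakpoint temperatures, ambient values and the exact roles assigned to the six parameters below are assumptions made for this sketch, not the paper's calibrated parametrization.

```python
import numpy as np

# Assumed breakpoints of the first key process (water evaporation) and the
# maximum simulation temperature -- placeholders for illustration.
T1_START, T1_PEAK, T1_END = 100.0, 140.0, 180.0
T_MAX = 1200.0

def material_curves(x):
    """Build lambda(T), c(T), rho(T) from the six-parameter vector
    x = (lam2, lam3, c_peak1, c_peak2, T2_start, r_peak2).
    All base values (0.25 W/mK, 950 J/kgK, 900 kg/m^3) are assumed."""
    lam2, lam3, c1, c2, T2_start, r = x
    T2_peak = T2_start + r * (T_MAX - T2_start)   # relative peak location
    lam = lambda T: np.interp(
        T, [20.0, T1_START, T1_END, T2_start, T_MAX],
           [0.25, 0.25,     lam2,   lam2,     lam3])
    c = lambda T: np.interp(
        T, [20.0, T1_START, T1_PEAK, T1_END, T2_start, T2_peak, T_MAX],
           [950., 950.,     c1,      950.,   950.,     c2,      950.])
    rho = lambda T: np.interp(
        T, [20.0, T1_START, T1_END, T2_start, T2_peak, T_MAX],
           [900., 900.,     740.,   740.,     700.,    700.])
    return lam, c, rho
```

The two spikes in `c` model the heat consumed by the key processes, while `rho` is monotonically non-increasing, mirroring the qualitative behaviour described above.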
| Parameter | Description | Unit |
|---|---|---|
| | Start of second key process | |
| | Main effect of second key process (relative location between and ) | - |
2.4 Finite element model
The FE model is considered as a verified simulator for the transient heat propagation in gypsum insulation panels under fire exposure. This means that the model is assumed to accurately solve the underlying differential equations posed by the mathematical heat transfer model. For a review of techniques for rigorous model verification see Oberkampf et al. (2004); Oberkampf and Roy (2010).
The FE model yields a discretized time-dependent temperature curve for each realization of the parameter vector that parameterizes the effective thermal properties $\lambda$, $c$ and $\rho$. The time discretization is identical to that of the available measurements, so that the model output can be compared directly with the measurement vector $\mathcal{Y}$.
3 Bayesian model calibration
3.1 The calibration problem
The process of finding model parameters so that the model evaluation using this parameter vector agrees with some observations is called calibration. A general probabilistic framework for calibration is presented next. For simplicity, it is assumed that only one measurement series is available for now. This restriction is lifted in Section 5.
Because models are inevitable simplifications of reality, all model predictions are approximations of the real physical behaviour. Additionally, all observations in the real world contain measurement errors. To explicitly account for this combined mismatch between model output and observations, one option is to model the discrepancy as an additive term between the model predictions and the observations:
$$\mathcal{Y} = \mathcal{M}(\boldsymbol{x}_{\mathcal{M}}) + \boldsymbol{\varepsilon}. \quad (3)$$
Following the definition of the additive discrepancy, the straightforward inversion of the computational model would be expressed by
$$\boldsymbol{x}_{\mathcal{M}} = \mathcal{M}^{-1}(\mathcal{Y} - \boldsymbol{\varepsilon}).$$
However, even for models with a well-known discrepancy between their output and the observations, this operation is typically not possible due to the non-invertible, black-box nature of most engineering models.
3.2 The Bayesian approach
A different way to address the calibration problem is to reformulate it in a probabilistic setting. The unknown model discrepancy $\boldsymbol{\varepsilon}$ from Eq. (3) is then seen as a random vector. Commonly, and in this work, $\boldsymbol{\varepsilon}$ is assumed to follow a zero-mean normal distribution with a covariance matrix parameterized by a set of parameters $\boldsymbol{x}_{\varepsilon}$:
$$\boldsymbol{\varepsilon} \sim \mathcal{N}\left(\boldsymbol{0}, \boldsymbol{\Sigma}(\boldsymbol{x}_{\varepsilon})\right). \quad (5)$$
Additionally, in the probabilistic setting, the combined parameter vector $\boldsymbol{x} = (\boldsymbol{x}_{\mathcal{M}}, \boldsymbol{x}_{\varepsilon})$ is assumed to be distributed according to a so-called prior distribution (to be further specified).
Then the discrepancy distribution from Eq. (5) can be used together with Eq. (3) to construct a model giving the probability of the observations given a realization of the parameter vector. Denoting by $\boldsymbol{x}$ a realization of the parameter vector, this probability as a function of the parameters is the so-called likelihood function
$$\mathcal{L}(\boldsymbol{x}; \mathcal{Y}) := \pi(\mathcal{Y} \mid \boldsymbol{x}),$$
which reads more explicitly:
$$\mathcal{L}(\boldsymbol{x}; \mathcal{Y}) = \frac{1}{\sqrt{(2\pi)^N \det \boldsymbol{\Sigma}(\boldsymbol{x}_{\varepsilon})}} \exp\left( -\frac{1}{2} \left( \mathcal{Y} - \mathcal{M}(\boldsymbol{x}_{\mathcal{M}}) \right)^{\top} \boldsymbol{\Sigma}(\boldsymbol{x}_{\varepsilon})^{-1} \left( \mathcal{Y} - \mathcal{M}(\boldsymbol{x}_{\mathcal{M}}) \right) \right). \quad (8)$$
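As a sketch, assuming an i.i.d. discrepancy with standard deviation `sigma` (one simple choice of covariance parametrization; the paper's may differ), the likelihood can be evaluated in log form:

```python
import numpy as np

def log_likelihood(y_obs, y_model, sigma):
    """Gaussian log-likelihood for an i.i.d. additive discrepancy:
    y_obs = y_model + eps, eps ~ N(0, sigma^2 I)."""
    r = y_obs - y_model                 # residual vector
    n = r.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
            - 0.5 * np.dot(r, r) / sigma**2)
```

Working in log space avoids numerical underflow of the exponential for long measurement vectors.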
With these definitions, it becomes possible to apply Bayes' theorem for conditional probabilities (Gelman et al., 2014):
$$\pi(\boldsymbol{x} \mid \mathcal{Y}) = \frac{\mathcal{L}(\boldsymbol{x}; \mathcal{Y})\, \pi(\boldsymbol{x})}{Z}, \qquad Z = \int \mathcal{L}(\boldsymbol{x}; \mathcal{Y})\, \pi(\boldsymbol{x})\, \mathrm{d}\boldsymbol{x}, \quad (9)$$
where $\pi(\boldsymbol{x})$ is the prior distribution of the input parameters, $\pi(\boldsymbol{x} \mid \mathcal{Y})$ is the posterior distribution and $Z$ is a normalizing factor called the evidence.
The probability distributions in this expression can be interpreted as degrees of belief about the parameters (Beck and Katafygiotis, 1998). Low values of the distribution at a realization indicate low confidence in this particular value, whereas high values indicate high confidence. With this interpretation of probabilities, Eq. (9) encodes the shift of belief about the parameter vector from before to after the observation of experiments . This process is called Bayesian updating, Bayesian inference or Bayesian inversion.
As mentioned above, the probability distributions in Bayes’ theorem are named according to their information content about the parameters in the setting of the updating procedure:
- Prior distribution :
this distribution captures the belief about the parameters before (prior to) observing data. In the setting of Bayesian updating for model calibration, it is chosen according to expert opinion and possible prior calibration attempts. A typical choice is to select a reasonable, although sufficiently large range (lower/upper bounds) for each parameter.
- Posterior distribution :
the posterior distribution is the conditional distribution of the parameters given the observations. It can be regarded as the state of information about the parameters after (posterior to) making observations.
Thus, the computation of the posterior distribution can be considered the solution of the calibration problem. Since it is a probability distribution rather than a single value, it encompasses all information contained in the prior distribution and the newly observed data. Conceptually, it is thus a much richer result than the single-value estimates of the conventional approach.
Another probability distribution of interest in the Bayesian inference setting is the posterior predictive distribution. It is defined as
$$\pi(\mathcal{Y}^{\star} \mid \mathcal{Y}) = \int \pi(\mathcal{Y}^{\star} \mid \boldsymbol{x})\, \pi(\boldsymbol{x} \mid \mathcal{Y})\, \mathrm{d}\boldsymbol{x}.$$
This distribution expresses beliefs about future (predictive) observations $\mathcal{Y}^{\star}$ given the already observed ones $\mathcal{Y}$. If it is possible to sample from the posterior distribution, a sample from the posterior predictive distribution is obtained by drawing
$$\boldsymbol{x} \sim \pi(\boldsymbol{x} \mid \mathcal{Y}), \qquad \mathcal{Y}^{\star} \sim \pi(\mathcal{Y}^{\star} \mid \boldsymbol{x}).$$
The posterior predictive distribution makes it possible to assess the predictive capabilities of the model following calibration. It incorporates both the uncertainty about the model parameters and about the mismatch parameters. Because this distribution is defined in the space where the data are collected, it can be used to visually check the calibration results.
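A minimal sketch of posterior predictive sampling under the additive Gaussian discrepancy assumption; `model` and the `(x_M, sigma)` sample format are hypothetical stand-ins for the calibrated forward model and posterior draws:

```python
import numpy as np

def posterior_predictive(model, post_samples, rng=None):
    """For each posterior draw (x_M, sigma), evaluate the model and add
    discrepancy noise, yielding one predictive draw per posterior sample."""
    rng = np.random.default_rng(0) if rng is None else rng
    draws = []
    for x_m, sigma in post_samples:
        y = model(x_m)
        draws.append(y + rng.normal(0.0, sigma, size=np.shape(y)))
    return np.array(draws)
```

Plotting such draws on top of the measured temperature curves is the visual check mentioned above.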
3.3 Sampling from the posterior distribution
The analytical computation of the posterior distribution is typically not possible. This is mostly due to difficulties in evaluating the normalizing constant $Z$ defined in Eq. (9). Its computation typically requires estimating an integral over the whole parameter space, which is in most cases intractable.
Markov chain Monte Carlo (MCMC) simulation, most prominently the Metropolis-Hastings (MH) algorithm, completely avoids the need to evaluate this high-dimensional integral. It is a class of stochastic simulation techniques that construct Markov chains guaranteed to produce samples distributed according to the posterior distribution. Posterior characteristics (quantities of interest, expected values, marginal distributions, etc.) can then be estimated using these samples.
Following the initial development of the MH algorithm (Metropolis et al., 1953; Hastings, 1970), later work on improving the algorithm's efficiency has focused on adaptive proposal distributions (Haario et al., 2001; Roberts and Rosenthal, 2009) and the utilization of gradient information (Rossky et al., 1978; MacKay, 2003). One common flaw of these algorithms, however, is that they must be adjusted through a set of tuning parameters. This is a particularly tedious task and a major source of error in practical applications.
The affine-invariant ensemble sampler (AIES, (Goodman and Weare, 2010)) is a fairly recent MCMC algorithm that performs particularly well in this respect. This algorithm requires only a single tuning parameter and its performance is invariant to affine transformations of the target distribution. This property makes it particularly useful for real-world applications, where strong correlations between individual parameters often hinder conventional MCMC algorithms.
The AIES algorithm relies on a set of parallel chains where proposal samples are obtained by moving in the direction of a randomly chosen conjugate sample from a different chain. The pseudo-code for the implementation used in this paper is given in Algorithm 1.
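The stretch move underlying AIES can be sketched in a few lines. This is a minimal illustrative version with the single tuning parameter `a`, not the paper's exact implementation; `log_post` is the log of the unnormalized posterior, so the evidence never needs to be evaluated.

```python
import numpy as np

def aies(log_post, x0, n_steps, a=2.0, rng=None):
    """Affine-invariant ensemble sampler (Goodman & Weare stretch move).
    x0: initial walker positions, shape (L, d). Returns chain (n_steps, L, d)."""
    rng = np.random.default_rng(0) if rng is None else rng
    walkers = np.array(x0, dtype=float)
    L, d = walkers.shape
    chain = np.empty((n_steps, L, d))
    logp = np.array([log_post(w) for w in walkers])
    for t in range(n_steps):
        for k in range(L):
            # pick a conjugate walker from the complementary ensemble
            j = rng.integers(L - 1)
            if j >= k:
                j += 1
            # stretch factor z ~ g(z) proportional to 1/sqrt(z) on [1/a, a]
            z = (1.0 + (a - 1.0) * rng.random())**2 / a
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp = log_post(prop)
            # acceptance ratio of the stretch move: z^(d-1) * pi(Y)/pi(X_k)
            if np.log(rng.random()) < (d - 1) * np.log(z) + lp - logp[k]:
                walkers[k], logp[k] = prop, lp
        chain[t] = walkers
    return chain
```

Updating the walkers one at a time, as here, preserves detailed balance; parallelized variants update the two half-ensembles alternately.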
A property that makes MCMC algorithms especially suitable for Bayesian computations is that they do not require the explicit computation of the normalizing constant $Z$ from Eq. (9): only a ratio of posterior densities, the so-called acceptance ratio, is required (Step 9 of Algorithm 1), and in this ratio $Z$ cancels out. However, the computationally expensive forward model must be evaluated each time this acceptance ratio is computed. This need for many runs of a computationally expensive model has spurred the idea of constructing a surrogate model that, once built, is used in the MCMC algorithm in lieu of the original model, reducing the overall computational burden to feasible levels.
3.4 Surrogate modelling of the temperature time series
Often, sampling based techniques (MCMC algorithms) are considered infeasible because of the high number of computationally expensive model runs required. Surrogate modelling techniques try to solve this problem by constructing a computationally cheap emulator that can be used instead of the original model.
Non-intrusive approaches to constructing surrogate models are solely based on realizations of the model parameters and the corresponding model outputs (Choi et al., 2004). The set of parameters used for constructing the surrogate is referred to as the experimental design; here this means a set of computer experiments, not to be confused with physical experiments. Following the assembly of the experimental design, the constructed surrogate model aims to approximate the original model predictions.
This section details the construction of a surrogate model combining polynomial chaos expansions (PCE) with the principal component analysis (PCA).
3.4.1 Polynomial Chaos Expansions
Polynomial chaos expansions (PCE) are a surrogate modelling technique that has been used extensively in the engineering disciplines (Xiu and Karniadakis, 2002; Soize and Ghanem, 2004; Guo et al., 2018) to construct surrogate models of scalar-valued functions of random variables. A brief introduction to the method is presented next.
Assume a random vector $\boldsymbol{X} = (X_1, \dots, X_M)$ with mutually independent components. Its joint probability density function is thus given by
$$f_{\boldsymbol{X}}(\boldsymbol{x}) = \prod_{i=1}^{M} f_{X_i}(x_i).$$
The functional inner product of two univariate polynomials $\psi_j$ and $\psi_k$ of degree $j$ and $k$, respectively, is then defined by
$$\langle \psi_j, \psi_k \rangle := \int \psi_j(x)\, \psi_k(x)\, f_{X_i}(x)\, \mathrm{d}x.$$
By choosing these polynomials to fulfil
$$\langle \psi_j, \psi_k \rangle = \delta_{jk}, \quad \text{i.e. } 1 \text{ if } j = k \text{ and } 0 \text{ otherwise}, \quad (14)$$
they form a family of univariate orthonormal polynomials. There exist well-known families of polynomial functions that fulfil the fundamental condition of Eq. (14) w.r.t. standard parametric probability distributions (Askey and Wilson, 1985).
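For instance, for a standard normal variable the probabilists' Hermite polynomials, normalized by the square root of the factorial of their degree, are orthonormal. This can be checked numerically with Gauss quadrature (a verification sketch, not part of the paper's method):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt

def inner(m, n, n_quad=30):
    """Inner product of normalized probabilists' Hermite polynomials
    He_m/sqrt(m!) and He_n/sqrt(n!) under the standard normal weight,
    computed with Gauss-Hermite_e quadrature (weight exp(-x^2/2))."""
    x, w = hermegauss(n_quad)
    w = w / np.sqrt(2.0 * np.pi)        # rescale weight to the N(0,1) pdf
    cm = np.zeros(m + 1); cm[m] = 1.0 / sqrt(factorial(m))
    cn = np.zeros(n + 1); cn[n] = 1.0 / sqrt(factorial(n))
    return np.sum(w * hermeval(x, cm) * hermeval(x, cn))
```

With 30 quadrature nodes the rule is exact for polynomial integrands up to degree 59, so the orthonormality condition is reproduced to machine precision for low degrees.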
These univariate polynomials can be used to build multivariate polynomials by tensor product. Introducing the multi-indices $\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_M) \in \mathbb{N}^M$, the latter are defined by:
$$\Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}) := \prod_{i=1}^{M} \psi_{\alpha_i}(x_i).$$
It can be shown that the univariate orthonormality property of Eq. (14) extends to the multivariate case and that the following holds:
$$\langle \Psi_{\boldsymbol{\alpha}}, \Psi_{\boldsymbol{\beta}} \rangle = \delta_{\boldsymbol{\alpha}\boldsymbol{\beta}}.$$
These polynomials form a so-called orthonormal basis of the space of square-integrable functions with respect to the probability distribution $f_{\boldsymbol{X}}$. Any such function can be represented by:
$$Y = \mathcal{M}(\boldsymbol{X}) = \sum_{\boldsymbol{\alpha} \in \mathbb{N}^M} a_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\boldsymbol{X}), \quad (17)$$
where $a_{\boldsymbol{\alpha}}$ are the coefficients of the expansion.
In practical applications it is not feasible to compute the infinite number of coefficients. Instead, a truncation scheme is typically used that reduces the number of considered polynomials to a finite set $\mathcal{A} \subset \mathbb{N}^M$. This truncated set transforms the equality of Eq. (17) into an approximation:
$$\mathcal{M}(\boldsymbol{X}) \approx \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\boldsymbol{X}).$$
In regression-based approaches, the coefficient vector $\boldsymbol{a} = \{a_{\boldsymbol{\alpha}}\}_{\boldsymbol{\alpha} \in \mathcal{A}}$ is typically estimated by least-squares analysis, as originally proposed in Berveiller et al. (2006). This corresponds to selecting a truncation set $\mathcal{A}$ (Blatman and Sudret, 2011a) and using an experimental design $\mathcal{X} = \{\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(K)}\}$ to minimize the expression
$$\hat{\boldsymbol{a}} = \arg\min_{\boldsymbol{a}} \frac{1}{K} \sum_{i=1}^{K} \left( \mathcal{M}(\boldsymbol{x}^{(i)}) - \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}} \Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}^{(i)}) \right)^2. \quad (19)$$
By storing the function evaluations at the experimental design in a vector $\boldsymbol{\mathcal{Y}} = \left( \mathcal{M}(\boldsymbol{x}^{(1)}), \dots, \mathcal{M}(\boldsymbol{x}^{(K)}) \right)^{\top}$, the solution of Eq. (19) reads:
$$\hat{\boldsymbol{a}} = (\boldsymbol{A}^{\top} \boldsymbol{A})^{-1} \boldsymbol{A}^{\top} \boldsymbol{\mathcal{Y}},$$
where $A_{ij} = \Psi_j(\boldsymbol{x}^{(i)})$ are the evaluations of the basis polynomials on the experimental design.
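The regression step can be sketched for a toy scalar model with two standard-normal inputs and a total-degree truncation (the basis, truncation rule and model below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt

def psi(n, x):
    """Orthonormal probabilists' Hermite polynomial of degree n."""
    c = np.zeros(n + 1); c[n] = 1.0 / sqrt(factorial(n))
    return hermeval(x, c)

def fit_pce(model, X, degree=3):
    """Least-squares PCE coefficients for a 2D input on design X (K, 2),
    using a total-degree truncation set {alpha : |alpha| <= degree}."""
    alphas = [(i, j) for i in range(degree + 1)
                     for j in range(degree + 1 - i)]
    A = np.column_stack([psi(i, X[:, 0]) * psi(j, X[:, 1])
                         for i, j in alphas])
    y = np.array([model(x) for x in X])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return alphas, coeffs

def eval_pce(alphas, coeffs, x):
    """Evaluate the truncated expansion at a single point x."""
    return sum(a * psi(i, x[0]) * psi(j, x[1])
               for (i, j), a in zip(alphas, coeffs))
```

Because the basis is orthonormal w.r.t. the input distribution, the coefficients also directly encode the output mean and variance, which is what makes PCE-based sensitivity analysis cheap.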
To assess the accuracy of the obtained polynomial chaos expansion, the so-called generalization error shall be evaluated. A robust error estimate can be obtained with the leave-one-out (LOO) cross-validation technique:
$$\epsilon_{\mathrm{LOO}} = \frac{1}{K} \sum_{i=1}^{K} \left( \mathcal{M}(\boldsymbol{x}^{(i)}) - \tilde{\mathcal{M}}^{\setminus i}(\boldsymbol{x}^{(i)}) \right)^2,$$
where $\tilde{\mathcal{M}}^{\setminus i}$ is constructed by leaving out the $i$-th point from the experimental design. After some algebraic manipulation, it can be shown that the LOO error can be computed by mere post-processing of the PCE as follows:
$$\epsilon_{\mathrm{LOO}} = \frac{1}{K} \sum_{i=1}^{K} \left( \frac{\mathcal{M}(\boldsymbol{x}^{(i)}) - \tilde{\mathcal{M}}(\boldsymbol{x}^{(i)})}{1 - h_i} \right)^2,$$
where $h_i$ is the $i$-th component of the vector given by:
$$\boldsymbol{h} = \mathrm{diag}\left( \boldsymbol{A} (\boldsymbol{A}^{\top} \boldsymbol{A})^{-1} \boldsymbol{A}^{\top} \right);$$
for more details refer to Blatman and Sudret (2010).
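This LOO shortcut can be implemented in a few lines; `h` is the diagonal of the hat matrix of the least-squares problem, so no refitting is needed:

```python
import numpy as np

def loo_error(A, y):
    """LOO error of a least-squares fit y ~ A @ coeffs, computed as a
    post-processing step via the hat-matrix diagonal."""
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    H = A @ np.linalg.pinv(A.T @ A) @ A.T   # hat matrix
    h = np.diag(H)
    resid = y - A @ coeffs
    return np.mean((resid / (1.0 - h))**2)
```

The identity is exact for ordinary least squares, which is easy to confirm against brute-force refitting on a small problem.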
This section outlined the approach to use PCE for approximating scalar quantities. Since the heat transfer model considered in this paper returns a vector of interface temperatures at time steps, a pure PCE approach would require the construction of independent polynomial chaos expansions. Instead, a dimensionality reduction technique on the output is applied before using the PCE technique.
3.4.2 Principal Component Analysis
Because the discretized temperature evolution is expected to be rather smooth (see Figure 11), considerable correlation between the individual time steps is expected. This correlation can be exploited to reduce the dimensionality of the output in the context of surrogate modelling.
There exist numerous so-called dimensionality reduction techniques (van der Maaten et al., 2008), one of which is principal component analysis (PCA; Jolliffe, 2002). The latter utilizes an orthogonal transformation to express the random output vector $\boldsymbol{Y}$ in a new basis of uncorrelated principal components.
In practice, PCA is carried out by computing estimators of the expectation $\bar{\boldsymbol{Y}}$ and the covariance matrix of $\boldsymbol{Y}$. The eigenvectors of this covariance matrix are denoted by $\boldsymbol{\phi}_i$ for $i = 1, \dots, N$. The associated eigenvalue corresponds to the variance of $\boldsymbol{Y}$ in the direction of the $i$-th principal component. Thereby the random vector can be expressed through its principal components $u_i$ as
$$\boldsymbol{Y} = \bar{\boldsymbol{Y}} + \sum_{i=1}^{N} u_i \boldsymbol{\phi}_i.$$
The model output can then be compressed to a lower-dimensional subspace by retaining only the $N'$ principal components with the highest variance:
$$\boldsymbol{Y} \approx \bar{\boldsymbol{Y}} + \sum_{i=1}^{N'} u_i \boldsymbol{\phi}_i. \quad (24)$$
The number of retained terms $N'$ is selected such that the retained components explain all but a small fraction of the total variance. This way, the model output can be approximated by a linear transformation of the principal component vector, thereby reducing the problem dimensionality from $N$ to $N'$.
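A compact sketch of this compression step, via eigendecomposition of the empirical covariance of the simulated temperature curves and truncation by explained variance (the threshold `eps` is an assumed value):

```python
import numpy as np

def pca_compress(Y, eps=1e-4):
    """Compress the (K, N) matrix of output curves Y: keep the N' leading
    eigenvectors explaining a fraction 1 - eps of the total variance."""
    mu = Y.mean(axis=0)
    C = np.cov(Y, rowvar=False)
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # sort descending
    ratio = np.cumsum(vals) / np.sum(vals)  # cumulative explained variance
    n_keep = int(np.searchsorted(ratio, 1.0 - eps) + 1)
    Phi = vecs[:, :n_keep]                  # (N, N') retained basis
    U = (Y - mu) @ Phi                      # principal components (K, N')
    return mu, Phi, U

def pca_reconstruct(mu, Phi, U):
    """Map principal components back to the original output space."""
    return mu + U @ Phi.T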
3.4.3 Combining PCA with PCE
The combination of PCA with PCE gives rise to an efficient surrogate modelling technique, as shown originally in Blatman and Sudret (2011b). Constructing polynomial chaos expansions of each retained principal component $u_i$, together with the PCA formulation from Eq. (24), yields a surrogate model relating the model parameters to the vector-valued time-series output of the transient heat transfer problem:
$$\tilde{\mathcal{M}}(\boldsymbol{x}) = \bar{\boldsymbol{Y}} + \sum_{i=1}^{N'} \left( \sum_{\boldsymbol{\alpha} \in \mathcal{A}_i} a_{\boldsymbol{\alpha}}^{(i)} \Psi_{\boldsymbol{\alpha}}(\boldsymbol{x}) \right) \boldsymbol{\phi}_i, \quad (25)$$
which can be rewritten by introducing the union set $\mathcal{A} = \bigcup_{i=1}^{N'} \mathcal{A}_i$ of the individual truncation sets.
For compactness, this equation can also be expressed in matrix form by letting $\boldsymbol{\Phi}$ be a matrix containing the $N'$ retained eigenvectors. For the PCE part of the equation, the vector $\boldsymbol{\Psi}(\boldsymbol{x})$ holding the individual multivariate orthonormal polynomials is introduced. Letting $\boldsymbol{A}$ be a matrix that stores the corresponding PCE coefficients, Eq. (25) can be written as
$$\tilde{\mathcal{M}}(\boldsymbol{x}) = \bar{\boldsymbol{Y}} + \boldsymbol{\Phi} \boldsymbol{A} \boldsymbol{\Psi}(\boldsymbol{x}). \quad (27)$$
For completeness, the response can also be expressed for each output component individually. For this, the row vector $\boldsymbol{\phi}^{(j)}$, taken from the $j$-th row of the matrix of eigenvectors $\boldsymbol{\Phi}$, is introduced:
$$\tilde{\mathcal{M}}_j(\boldsymbol{x}) = \bar{Y}_j + \boldsymbol{\phi}^{(j)} \boldsymbol{A} \boldsymbol{\Psi}(\boldsymbol{x}).$$
This surrogate model can then be used in lieu of the original computationally expensive forward model. The evaluation of the surrogate model is orders of magnitude faster than the original finite element model. For comparison, in our application example a single FE run takes about on a conventional computer, while in the same time evaluations of the surrogate model can be made.
This reduction in computational time is a key feature of the presented surrogate modelling technique. It does, however, come at the cost of a series of approximations introduced by the PCA truncation and the truncated polynomial chaos expansion. To ensure confidence in the produced surrogate model, a general error measure has to be devised that includes both of these approximation errors. Such an error measure was derived in Blatman and Sudret (2013); for the sake of completeness, the details are given in Appendix A.
3.5 Summary of the proposed method
In this section, a procedure to efficiently conduct Bayesian inference with expensive vector valued models was presented. It is assumed that the parametrization of the temperature-dependent effective material properties (see Section 2.3) is known. Bayesian inference then aims at determining the distribution of the parameters after observations (experimental measurements) have been made. A brief step-by-step account of this procedure is given below for reference:
1. Choose a prior distribution on the parameters and construct an experimental design using samples from this prior. Evaluate the forward model on the experimental design and store the evaluations.
2. Approximate the forward model with the surrogate model from Eq. (27). This requires the combination of the dimensionality reduction technique PCA with the PCE uncertainty propagation technique.
3. Define a likelihood function from Eq. (8) that captures the discrepancy between a model run and the observations.
4. Run the AIES defined in Algorithm 1, where the likelihood function uses the surrogate model instead of the original model, to obtain a sample from the posterior distribution.
This method only works if the support of the prior distribution contains that of the posterior distribution. In this respect, sufficiently large prior ranges should be selected based on expert judgement.
The successful calibration of the parameters through the Bayesian inference approach gives insight into the model mismatch and the correlation structure between individual parameters. The distribution of the parameters can further be used in probabilistic analyses with these models and, given new observations, can be updated to reflect beliefs incorporating the newly acquired information.
A fundamental ingredient of the presented approach is the necessity to define a parametrization of the thermal effective material properties as described in Section 2.3. To judge the quality of the parametrization, it can be helpful to assess the relative importance of a single model parameter with respect to the output. For this, it is necessary to resort to the field of sensitivity analysis.
4 Sensitivity analysis
4.1 PCE based Sobol’ indices
Global sensitivity analysis aims at finding which input parameters of a computer model (or combinations thereof) best explain the uncertainties in the model predictions. In this respect, variance decomposition techniques assign fractions of the model output variance to the individual input parameters. For simplicity, the subscript of the parameter vector is dropped in this section.
Consider a scalar-valued computational model $\mathcal{M}$, which maps a vector of input parameters $\boldsymbol{x} \in [0,1]^M$ in the unit hypercube to the real numbers. This computational model can be decomposed into a sum of terms that each depend only on a subset of the input parameters: a constant $\mathcal{M}_0$, univariate functions $\mathcal{M}_i(x_i)$, bivariate functions $\mathcal{M}_{ij}(x_i, x_j)$, etc.:
$$\mathcal{M}(\boldsymbol{x}) = \mathcal{M}_0 + \sum_{i=1}^{M} \mathcal{M}_i(x_i) + \sum_{1 \le i < j \le M} \mathcal{M}_{ij}(x_i, x_j) + \dots + \mathcal{M}_{12 \dots M}(\boldsymbol{x}) \tag{29}$$
This decomposition is called the Hoeffding-Sobol’ decomposition and is unique for any function that is square-integrable over the unit hypercube (Sobol’, 1993).
Denoting by $\boldsymbol{u} \subseteq \{1, \dots, M\}$ a subset of indices and by $\boldsymbol{x}_{\boldsymbol{u}}$ the corresponding subvector of inputs, Eq. (29) can be written in short:
$$\mathcal{M}(\boldsymbol{x}) = \mathcal{M}_0 + \sum_{\boldsymbol{u} \neq \emptyset} \mathcal{M}_{\boldsymbol{u}}(\boldsymbol{x}_{\boldsymbol{u}}) \tag{30}$$
It can be shown that the terms of this equation, called summands, are orthogonal (Sobol', 1993). The variance of each term $\mathcal{M}_{\boldsymbol{u}}$, called partial variance, is obtained by:
$$D_{\boldsymbol{u}} = \int_{[0,1]^{|\boldsymbol{u}|}} \mathcal{M}_{\boldsymbol{u}}^2(\boldsymbol{x}_{\boldsymbol{u}}) \, \mathrm{d}\boldsymbol{x}_{\boldsymbol{u}} \tag{31}$$
Due to the orthogonality of the summands, the total variance $D$ of the model output is finally obtained as the sum of the partial variances:
$$D = \operatorname{Var}\left[\mathcal{M}(\boldsymbol{X})\right] = \sum_{\boldsymbol{u} \neq \emptyset} D_{\boldsymbol{u}} \tag{32}$$
Each partial variance $D_{\boldsymbol{u}}$ describes the amount of the output variance that can be attributed to the interaction of the input variables $\{x_i,\, i \in \boldsymbol{u}\}$. In particular, $D_{\{i\}}$ describes the fraction of the variance that can be attributed to the input variable $x_i$ taken separately.
Moreover, the total contribution to the variance attributable to a single input parameter $x_i$ is captured by the sum of all partial variances that involve the $i$-th input variable. This sum, normalized by the total variance, is called the $i$-th total Sobol' index:
$$S_i^{T} = \frac{1}{D} \sum_{\boldsymbol{u} \ni i} D_{\boldsymbol{u}} \tag{33}$$
Usually, the integral in Eq. (31) is computed through Monte Carlo integration. However, if the model is expressed in an orthogonal basis (as is the case for PCE, Eq. (17)), the Sobol' indices can be computed analytically by post-processing the PCE coefficients $a_{\boldsymbol{\alpha}}$ (Sudret, 2006, 2008):
$$S_i^{T} = \frac{1}{D} \sum_{\boldsymbol{\alpha} \in \mathcal{A}_i} a_{\boldsymbol{\alpha}}^2, \qquad \mathcal{A}_i = \{\boldsymbol{\alpha} \in \mathcal{A} : \alpha_i > 0\}$$
where $\mathcal{A}_i$ is the set of multi-indices of the multivariate polynomials that are non-constant in the $i$-th input parameter $x_i$. For scalar-valued models, this yields a measure of the variance fraction that can be attributed to a certain input parameter. In the following section, this concept is extended to models with multiple outputs.
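The post-processing step described above amounts to summing squared coefficients over subsets of the multi-index set. A minimal sketch (the function name and input layout are illustrative, assuming an orthonormal basis so that squared coefficients equal partial variances):

```python
import numpy as np

def total_sobol_from_pce(alphas, coeffs):
    """Total Sobol' indices from PCE coefficients in an orthonormal basis.
    alphas: multi-indices, shape (n_terms, n_inputs); coeffs: shape (n_terms,).
    The total variance is the sum of squared coefficients of all non-constant
    terms; the i-th total index collects the terms whose multi-index is
    non-zero in position i."""
    alphas = np.asarray(alphas)
    coeffs = np.asarray(coeffs, dtype=float)
    nonconst = alphas.any(axis=1)                  # exclude the constant term
    D = np.sum(coeffs[nonconst] ** 2)              # total output variance
    S_tot = []
    for i in range(alphas.shape[1]):
        mask = alphas[:, i] > 0                    # all terms involving x_i
        S_tot.append(np.sum(coeffs[mask] ** 2) / D)
    return np.array(S_tot)
```

Note that total indices need not sum to one: interaction terms are counted once per involved variable.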
4.2 PCA-based Sobol’ indices
In the present paper, models with multiple outputs (time series of computed temperatures) are considered. By using a surrogate model that combines PCE with PCA, as discussed in Section 3.4.3, the total Sobol' indices for each output vector component (time step) can also be computed analytically (Marelli and Sudret, 2015; Nagel et al.).
For this, the partial variances of the model response components are computed using the expression from Eq. (28). The total Sobol' index for the $j$-th component of the output random vector then reads
$$S_{i,j}^{T} = \frac{1}{D_j} \sum_{\boldsymbol{\alpha} \in \mathcal{A}_i} c_{j,\boldsymbol{\alpha}}^2$$
where $c_{j,\boldsymbol{\alpha}}$ are the expansion coefficients of the $j$-th output component obtained by combining the PCE coefficients with the PCA eigenvectors, $D_j$ is the variance of the $j$-th component, and $\mathcal{A}_i$ is the subset of multi-indices $\boldsymbol{\alpha}$ for which $\alpha_i > 0$. The interested reader is referred to Appendix B for the derivations.
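Since all principal components share one orthonormal polynomial basis, the expansion coefficients of each output component are linear combinations of the PCE coefficients through the PCA eigenvectors, and the per-component total indices follow by the same coefficient bookkeeping as in the scalar case. A sketch under these assumptions (names and array layout are illustrative):

```python
import numpy as np

def componentwise_total_sobol(alphas, A, phi):
    """Total Sobol' indices per output component for a PCA+PCE surrogate.
    alphas: multi-indices, shape (n_terms, n_inputs)
    A:      PCE coefficients of the principal components, (n_terms, n_comp)
    phi:    retained PCA eigenvectors, (n_comp, n_outputs)
    The expansion of output j has coefficients C[:, j] = (A @ phi)[:, j]."""
    alphas = np.asarray(alphas)
    C = np.asarray(A) @ np.asarray(phi)            # (n_terms, n_outputs)
    nonconst = alphas.any(axis=1)
    D = np.sum(C[nonconst] ** 2, axis=0)           # variance of each output
    S = np.empty((alphas.shape[1], C.shape[1]))
    for k in range(alphas.shape[1]):
        mask = alphas[:, k] > 0                    # terms involving x_k
        S[k] = np.sum(C[mask] ** 2, axis=0) / D
    return S
```

Evaluated over all time steps, this yields the time-dependent sensitivity curves used later in the application section.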
In this section, the procedure presented in Sections 3 and 4 is applied to calibrate the temperature-dependent material properties of gypsum-based insulation boards. The experimental data stems from experiments conducted by Breu (2016) and Just (2016) that were presented in Section 2.1. As explained in Section 2.3, the material properties are parametrized with a set of parameters. In the Bayesian inference framework introduced in Section 3.2, determining the posterior distribution of these parameters constitutes the calibration of the temperature-dependent material properties.
To further investigate the effects of the introduced parametrization, the surrogate models used for calibration are reused to conduct time-dependent sensitivity analyses (Section 4). These analyses show the influence each model parameter has on the simulation output. They deliver valuable insights and can be used to further refine the model parametrization.
Finally, the calibrated time-dependent material properties are validated by simulating insulation panels in a different measurement setup and comparing these simulation results with actual measurements.
5.1 Calibration of material properties for gypsum boards
In this section the general calibration procedure from Section 3.5 is applied to the specific problem of calibrating heat transfer models describing the experiments of specimens E1-E4 (Test 1) presented in Section 2.1.
The model parameters are assumed to be a priori independent and uniformly distributed, with lower and upper bounds defined by the ranges given in Table 3. The prior distribution of the model parameters is thus given by
Since multiple measurements are available for each experiment, the formulation of the likelihood in Eq. (7) has to be slightly adapted. Under the assumption of independence between the individual measurement locations, it can be written as the product
which generalizes Eq. (7), where only a single time series of measurements was considered. Consequently, the notation of the posterior distribution obtained from Bayes' theorem should strictly reflect all measurement series, but for notational simplicity the superscript is again dropped.
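Under the independence assumption, the product of per-location Gaussian likelihoods becomes a sum of log-densities, all sharing the same time-correlated discrepancy covariance. A minimal numpy sketch (function names are illustrative; the zero-mean Gaussian discrepancy is the assumption stated above):

```python
import numpy as np

def gauss_logpdf(r, Sigma):
    """Log-density of a zero-mean multivariate Gaussian at residual r."""
    sign, logdet = np.linalg.slogdet(Sigma)
    k = r.size
    return -0.5 * (k * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(Sigma, r))

def log_likelihood(model_out, observations, Sigma):
    """Joint log-likelihood over independent measurement locations: the
    product over locations turns into a sum of Gaussian log-densities of
    the residuals (observation minus model), with shared covariance Sigma."""
    return sum(gauss_logpdf(obs - model_out, Sigma) for obs in observations)
```

In practice one would factorize Sigma once (e.g. by Cholesky decomposition) and reuse it across locations and MCMC iterations; the direct solve above keeps the sketch short.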
The covariance matrix of the discrepancy term is parametrized by
$$\Sigma_{ij} = \sigma_i\, \sigma_j\, R\!\left(|t_i - t_j|;\, \ell\right)$$
where we choose a so-called Matérn autocorrelation function $R$. In this autocorrelation function, $\ell$ is the correlation length and $\sigma_i$ is the standard deviation at the $i$-th time step. To reduce the number of calibration parameters, it is assumed that the standard deviation follows a polynomial function
$$\sigma(t) = \sum_{k} b_k\, P_k(\tau(t))$$
where $P_k$ is the $k$-th Legendre polynomial (defined over $[-1, 1]$, with $\tau$ rescaling the considered time interval accordingly) and the $b_k$ are the coefficients to be calibrated. In summary, the discrepancy term is defined by the coefficients $b_k$, which describe the non-stationary variance of the model discrepancy, and the autocorrelation length $\ell$. Using the notation from Section 3.2, these are collected in the discrepancy parameter vector. To complete the prior information, a uniform distribution with non-restrictive bounds is assumed for the discrepancy parameters. A summary of the full prior distribution is given in Table 4.
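Assembling this discrepancy covariance can be sketched as follows. This is an illustrative implementation, not the authors' code: the smoothness parameter of the Matérn family is not fixed by the text, so ν = 5/2 is assumed here, and the symbol names mirror the description above.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def discrepancy_covariance(t, b, ell):
    """Discrepancy covariance: Matern autocorrelation (nu = 5/2 assumed)
    with correlation length ell, scaled by a non-stationary standard
    deviation sigma(t) given as a Legendre series with coefficients b,
    where the time grid t is rescaled to [-1, 1]."""
    tau = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0   # rescale to [-1, 1]
    sigma = legval(tau, b)                                  # sigma at each time step
    d = np.abs(t[:, None] - t[None, :]) / ell               # scaled time lags
    r = (1.0 + np.sqrt(5) * d + 5.0 * d**2 / 3.0) * np.exp(-np.sqrt(5) * d)
    return sigma[:, None] * sigma[None, :] * r
```

The Matérn family guarantees a positive definite matrix for distinct time steps, which is required for the Gaussian likelihood to be well defined.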
Figure 6 shows the resulting sample points of the posterior distribution obtained for an exemplary calibration run for the E1 setup. This sample was produced using the previously presented AIES algorithm (Algorithm 1).
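A posterior sample of this kind can be produced with the Goodman–Weare stretch move underlying the AIES. The following is a minimal sketch, not Algorithm 1 itself: the stretch parameter a = 2 and the sequential walker update are common defaults assumed for illustration.

```python
import numpy as np

def aies(log_post, x0, n_steps, a=2.0, rng=None):
    """Affine-invariant ensemble sampler (Goodman-Weare stretch move).
    x0: initial ensemble of walkers, shape (n_walkers, dim).
    Returns the chain, shape (n_steps, n_walkers, dim)."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    n_walkers, dim = x.shape
    lp = np.array([log_post(w) for w in x])
    chain = np.empty((n_steps, n_walkers, dim))
    for s in range(n_steps):
        for j in range(n_walkers):
            k = rng.choice([i for i in range(n_walkers) if i != j])
            z = ((a - 1.0) * rng.uniform() + 1.0) ** 2 / a   # z ~ g(z) propto 1/sqrt(z)
            y = x[k] + z * (x[j] - x[k])                     # stretch move proposal
            lp_y = log_post(y)
            log_acc = (dim - 1) * np.log(z) + lp_y - lp[j]   # Goodman-Weare acceptance
            if np.log(rng.uniform()) < log_acc:
                x[j], lp[j] = y, lp_y
        chain[s] = x
    return chain
```

In the calibration, `log_post` evaluates the surrogate-based unnormalized log-posterior; the MAP estimate can then be approximated by the walker position with the highest recorded log-posterior value.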
Despite the broad information contained in the full posterior plot, one is often interested in the single set of parameters that best describes the observations. In accordance with the Bayesian interpretation of probabilities, this parameter set is located at the maximum of the posterior distribution (maximum a posteriori, MAP). It can be found by solving the optimization problem
This problem can be approximately solved by picking the parameter point from the available posterior sample that maximizes the unnormalized posterior distribution . The resulting maximum a posteriori estimator is also shown in Figure 6.
The calibrated posterior parameters for Product A (E1) are summarized in Table 5. It gives an overview of the calibrated parameters, including a set of summary statistics.
A major advantage of full samples compared to point estimators is that they allow investigating the characteristics of the posterior distribution. This provides a fuller picture of the calibrated parameter vector, by showing dependence between individual parameters, allowing the computation of confidence intervals, and revealing problems with identifiability.
Additionally, the full parameter distribution explains why calibration can be difficult with the conventional approach: when strong correlations exist between parameters, as visible in Figure 6, it is hard to move to a better guess by changing just one parameter at a time.
In conclusion, the reduction of the standard deviation in all posterior parameters in conjunction with the unimodal posterior distribution can be seen as an indicator of a successful calibration.