The synthesis of data from instrumented structures and physics-based models via Gaussian processes

by Alastair Gregory et al.

A recent development which is poised to disrupt current structural engineering practice is the use of data obtained from physical structures such as bridges, viaducts and buildings. These data can represent how the structure responds to various stimuli over time when in operation, providing engineers with a unique insight into how their designs are performing. With the advent of advanced sensing technologies and the Internet of Things, the efficient interpretation of structural health monitoring data has become a big data challenge. Many models have been proposed in the literature to represent such data, such as linear statistical models. Based upon these models, the health of the structure is reasoned about, e.g. through damage indices, changes in likelihood and statistical parameter estimates. On the other hand, physics-based models are typically used when designing structures to predict how the structure will respond to operational stimuli. What remains unclear in the literature is how to combine the observed data with information from the idealised physics-based model into a model that describes the responses of the operational structure. This paper introduces a new approach which fuses together observed data from a physical structure during operation and information from a mathematical model. The observed data are combined with data simulated from the physics-based model using a multi-output Gaussian process formulation. The novelty of this method is how the information from observed data and the physics-based model is balanced to obtain a representative model of the structure's response to stimuli. We present our method using data obtained from a fibre-optic sensor network installed on experimental railway sleepers. We discuss how this approach can be used to reason about changes in the structure's behaviour over time using simulations and experimental data.




1 Introduction

The engineering research fields of structural health monitoring (SHM) and structural identification (SI) involve the collection and analysis of data from instrumented structures [10, 9, 5, 1]. The primary objectives of SHM are to better understand the behaviour of structures (e.g. bridges, buildings, railway tracks, pipelines, tunnels etc.) over time and to identify and localise damage [6]. Novel sensing technologies such as fibre-optic and piezoelectric sensors have enabled the detailed collection of performance data from structures [37]. These data are opening up the possibility for engineers to better reason about structural condition through statistical analysis (e.g. linear models [23]). Typical examples of response data collected from sensor networks on instrumented structures are in the form of vibration/acceleration, strain and acoustic responses [45]. The data provide underlying information about how these structures respond to stimuli, but also include noise due to various factors such as temperature effects and instrumentation error. Data are rarely measured across the whole domain of the structure, either because this is infeasible or too expensive [32]. Therefore the response of the structure is only known at the sensor locations.

Most structures are designed and analysed based on physical assumptions about how they should respond under various stimuli; these assumptions can be described using idealised mathematical models. These models are typically in the form of partial/ordinary differential equations [12, 35, 46]. Partial differential equations (PDEs) often do not admit closed-form solutions, and are therefore often solved numerically, e.g. using finite element methods (FEM) [8]. Quantifying the uncertainty in the physically modelled response of structures from PDEs, where there exists uncertainty in the input parameters, is a well-studied field in applied mathematics (e.g. data assimilation, stochastic Galerkin methods, multilevel Monte Carlo [3] and Markov chain Monte Carlo [31]). (In the remainder of the paper we shall refer to the mathematical model of the physical system simply as the 'physics-based model'.) There are several discrepancies between the observed data and the idealised physics-based model. First, the data represent the response of the sensor network, not the structure: the underlying response of the structure is corrupted by the sensors. Second, the physics-based model is based on assumptions such as constant temperature and simplified structural properties. Further, physics-based models do not take into account the instrumentation error (the error induced by the sensors). In the SHM literature it is not clear how to account for these discrepancies in a well-principled fashion that leads to a representative model of the data.

Hybrid approaches are commonplace in the literature, utilising data accrued from instrumented structures to improve predictions of structural response to stimuli from these physics-based models. A commonly used example of a data/physics hybrid approach is model updating [29, 14, 33, 2, 40]. In model updating, the parameters of the physics-based model are estimated using the measured response data; predicted responses of the structure are then given by the model with the estimated parameters. The parameters are typically estimated by optimising an objective function, such as the likelihood, which quantifies the discrepancy between the model and data [33]. The predictive quality of the model updating approach relies on the physics-based model not incorporating too many simplifying assumptions and being closely representative of the actual response. In model updating, the physics-based model is treated as the true underlying data-generating process [24]. Here, however, we assert that the observed data are the closest representation of the true structural response. Therefore the data should have an important role in the overall modelling procedure and prediction of structural response beyond estimating parameters.

In this paper, we introduce a novel modelling approach for structural response that balances the information from the observed data and the physics-based model. This will be referred to as a type of 'data-centric engineering' (DCE) model, in line with the discussion in [22]. The information is obtained from the physics-based model by simulating data from it. These simulations are combined with observed data from the physical structure using a multi-output Gaussian process joint model [20, 26]. This model is constructed on the level of the geometric relationship between the physical quantities of the observed data and model simulations. Predictions of the structure's response can be obtained via this joint model. Unlike model updating, not only are the observed data utilised to directly improve the physics-based model, but the physics-based model is also used to guide the inferred posterior from the data in unmeasured regions of the structural domain. A multi-output Gaussian process model is also utilised in [47]; this differs from the work presented in this paper as it combines simulated data from two physics-based models with different fidelities (accuracies), instead of combining simulation from a physics-based model and observed data. Gaussian processes have also recently been used for other aspects of structural health monitoring in [25, 42, 38, 13].

The benefits of the presented work are twofold. First, through the proposed joint model one obtains a posteriori estimates for the response of the structure, inferred from both observed data and simulations from an analytical physics-based model. These two sources of information are balanced in order to improve the predictive performance of the posterior in regions where there is no measurement data available. Second, the modelling approach used estimates structural parameters within the physics-based model, even parameters governing components of the modelled system that cannot directly be measured by sensors. These capabilities make this work an important stride forward in the construction and evaluation of DCE models.

The response of railway sleeper beams (the components which carry the rail track and transfer the train axle forces into the supporting ballast and subgrade materials) will be used as a running example throughout this paper to demonstrate the proposed methodology. These sleepers can be modelled by the Euler-Bernoulli equation. This provides a simplified analytical physics-based model for the vertical deflection of the actual railway sleeper, although this can be generalised to richer models such as those that require the use of FEM. A core objective of SHM is to be able to reason about and detect structural change over time. The proposed multi-output Gaussian process model for the response of the railway sleeper can achieve this when used alongside change-point detection schemes. This is demonstrated in Sec. 5 using simulated and experimental data sets.

The paper proceeds as follows. Section 2 concentrates on the data from sensor networks that are used throughout this work. The specifications of the instrumented railway sleeper used in the experimental aspect of this work are also discussed here. Following this, in Sec. 3, an analytical physics-based model for the vertical deflection of the sleeper is introduced. The main methodology of this paper is outlined throughout Sec. 4; this includes the estimation of the parameters within the physics-based model and the construction of a posterior for the response of the sleeper conditioned on data obtained from the sensor networks and physics-based model. This section also discusses by how much the physics-based model should inform the joint model, and proposes a principled approach to this problem. Finally, in Sec. 5, simulated data and experimental data from an instrumented railway sleeper demonstrate the effectiveness of the proposed methodology.

2 Data from instrumented structure

This study considers the modelling of a single horizontal prestressed concrete sleeper beam supported on compacted railway ballast. Data are obtained from the sleeper by instrumenting it with a network of fibre-optic sensors (FOS) containing fibre Bragg gratings (FBG). FBG sensors interrogate light spectra and reflect a specific wavelength known as the 'Bragg wavelength'; as the fibre-optic cable containing the FBG is placed under strain, this Bragg wavelength shifts linearly. The corresponding strain shift can then be computed via a linear transformation of this relative wavelength shift. This strain is measured in units of microstrain, and measurements are recorded at discrete time indices. Each FBG measures strain to an approximate accuracy of microstrain. A single fibre-optic cable can be inscribed with multiple FBGs. For a comprehensive review of these sensors see [21], which also discusses how FBG sensors are affected by external factors such as temperature.
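As a concrete illustration of the linear wavelength-to-strain transformation described above, the following sketch converts Bragg-wavelength readings into microstrain. The gauge factor of 0.78 is a typical value for silica fibre and is an assumption here, not a constant taken from the paper; the 1550 nm base wavelength is likewise hypothetical.

```python
import numpy as np

def wavelength_shift_to_microstrain(lam, lam0, gauge_factor=0.78):
    """Convert FBG Bragg-wavelength readings to microstrain.

    The shift is assumed linear in strain: dlam/lam0 = k * eps, where
    k (the gauge factor) is a typical value for silica fibre; the exact
    constant depends on the sensor specification.
    """
    lam = np.asarray(lam, dtype=float)
    relative_shift = (lam - lam0) / lam0   # dimensionless
    strain = relative_shift / gauge_factor # strain (m/m)
    return strain * 1e6                    # microstrain

eps = wavelength_shift_to_microstrain(1550.001, 1550.0)
```

At these illustrative values, a 1 pm shift on a 1550 nm grating corresponds to a little under one microstrain.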

Figure 1: A schematic of the railway sleeper (supported on compacted ballast), instrumented with the six fibre-optic sensors with Bragg gratings (FBG) considered in this study. The sensors are located on the top and bottom of the sleeper, and the train wheels meet the sleeper at the two loading points.

Figure 2: A 7.52 second sample of data from a fibre-optic sensor with Bragg grating acquiring data at 50Hz during the passage of a train. The sensor is located along a sleeper.

The railway sleeper considered and used in the experimental results was instrumented during its fabrication with three FBG sensors attached along each of the top and bottom steel prestressing strands, which are embedded within the concrete sleeper. Coordinates on the sleeper are measured in millimetres. The sleeper is by 200 (length/depth) with the FBG sensors located at , on both the top and bottom prestressing strands embedded within the sleeper. The length of is denoted by for the benefit of notation later in the paper. In this particular application . The distance between the top and bottom rows of FBG sensors on the instrumented sleeper is . In experiments the sleeper was subjected to loading forces to simulate the axle spacing of the train wheels. Two equal forces were exerted by pistons at and , with . The force magnitude was varied over the course of the experiment (see Sec. 5.2 for details). A schematic of the sensor network instrumented on the sleeper beam is shown in Figure 1. A more detailed specification of the instrumented sleepers considered in this study is given in [44]. For illustrative purposes, Figure 2 shows a 7.52 second representative data set of strain measurements from an FBG (located along the top of an instrumented sleeper). This captures the moment when a train passes over the sleeper, resulting in the visibly large peaks of strain.

Given strain measurements at the time index and one coordinate on both the top (denoted by the superscript ) and bottom (denoted by the superscript ) of the sleeper, the curvature associated with these measurements can be computed by


Therefore the curvature here represents the gradient of strain over the depth of the sleeper. This combines strain measurements from the top and bottom of the sleeper into a single one-dimensional quantity . A one-dimensional domain simplifies the synthesis of this observed data with a one-dimensional physics-based model (presented in the next section). Curvature can be a useful tool in damage detection for beam structures [7].
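The curvature computation described above — the difference of the strain readings over the depth separating the top and bottom strands — can be sketched as follows. The sign convention (top minus bottom) and the example values are assumptions for illustration.

```python
import numpy as np

def curvature_from_strain(eps_top, eps_bot, h):
    """Curvature from paired strain readings.

    eps_top, eps_bot : strain at the top/bottom prestressing strands
                       (same units, e.g. microstrain)
    h                : vertical distance between the two strands

    Curvature is taken as the gradient of strain over the depth of the
    section; the sign convention (top minus bottom) is an assumption.
    """
    return (np.asarray(eps_top) - np.asarray(eps_bot)) / h

# hypothetical readings at two sensor coordinates, strands 200 mm apart
kappa = curvature_from_strain([120.0, 95.0], [-40.0, -30.0], h=200.0)
```

This yields one curvature value per sensor coordinate, collapsing the paired top/bottom measurements onto the one-dimensional domain.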

In practice, curvature data are only recorded at the sensor locations . Denote the curvature data at locations and time index as . We assume the curvature observations are independent realisations from and


where is the true curvature and is the covariance matrix of the Gaussian noise . We assume herein that has a diagonal form, meaning that the noisy curvature observations at the different coordinates in are independent of one another.
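The observation model above — Gaussian noise with a diagonal covariance, so readings at different coordinates are independent — can be simulated directly. The true curvature profile, sensor coordinates and noise level below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
x_d = np.array([0.3, 0.9, 1.25, 1.6, 2.2])       # sensor coordinates (assumed)
kappa_true = np.sin(2.0 * x_d)                   # stand-in "true" curvature
Sigma = np.diag(np.full(len(x_d), 0.01))         # diagonal noise covariance
y = rng.multivariate_normal(kappa_true, Sigma)   # one noisy curvature record
```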

We will obtain an alternative model for this curvature, conditioned on both the observed noisy data and information from a physics-based model, in Sec. 4. Curvature, , is geometrically related to other quantities useful to engineers, one being the vertical deflection, , of an idealised beam. These quantities are related via,


This differential relationship is used in [43] to estimate the deflection of beam structures from strain measurements. This relationship will be the basis of how curvature data from the instrumented sleeper will be synthesised with a physics-based model for vertical deflection in a joint model for the response of the sleeper under load (described in Sec. 4). A physics-based model for the vertical deflection of the instrumented sleeper considered in this paper is described in the following section.
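The differential relationship between deflection and curvature can be checked numerically with a second-difference stencil; here curvature is taken as the second derivative of deflection (the sign convention varies by author), and the sine profile is a stand-in, not the sleeper's deflection.

```python
import numpy as np

def curvature_from_deflection(w, x):
    """Second-difference estimate of curvature from vertical deflection,
    taking kappa = d^2 w / dx^2 (sign convention is an assumption).
    Returns values at the interior nodes only."""
    h = x[1] - x[0]
    return (w[:-2] - 2.0 * w[1:-1] + w[2:]) / h**2

x = np.linspace(0.0, 2.5, 501)
w = np.sin(2.0 * x)                    # stand-in deflection profile
kappa = curvature_from_deflection(w, x)
# for w = sin(2x), the exact curvature is -4 sin(2x)
```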

3 Physics-based model description

Engineers typically use physics-based models to assist in understanding how a structure behaves during excitation and at rest. These physics-based models are framed as solutions to (systems of) ordinary and partial differential equations, where often numerical discretization through finite element methods is required [8, 35]. This section describes an analytical physics-based model for the vertical deflection, , of a railway sleeper supported on compacted ballast [39]. The Euler-Bernoulli equation describes how a one-dimensional beam on the domain , with an elastic foundation, responds under a forcing . Such a response is utilised within the present work as a simplified representation of the response of the considered railway sleeper. The (static-time) Euler-Bernoulli equation is given by,


where the prime notation denotes the derivative with respect to , is the second moment of cross-sectional area and is the Young's modulus of the beam material (e.g. concrete). The product of and is known as the flexural rigidity of the beam. We make the assumption that the flexural rigidity is constant over , i.e. . See Sec. 6 for a discussion of a possible way to relax this assumption.
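Although the paper works with an analytic solution, a numerical sketch of the Euler-Bernoulli beam on an elastic foundation helps fix ideas. The sketch below discretises the energy functional, which approximately enforces free (natural) end conditions; the flexural rigidity, ballast stiffness, load magnitudes and load positions are all hypothetical values, not those of the instrumented sleeper.

```python
import numpy as np

def beam_on_elastic_foundation(x, EI, k, f):
    """Finite-difference sketch of the static Euler-Bernoulli beam on an
    elastic foundation:  EI * w'''' + k * w = f(x),
    with free (natural) boundary conditions at both ends.

    Obtained by minimising the discretised energy
    J(w) = sum EI/2 (w'')^2 + k/2 w^2 - f w,
    which yields the linear system (EI * D2^T D2 + k * I) w = f.
    """
    n = len(x)
    h = x[1] - x[0]
    # second-difference operator on interior nodes, shape (n-2, n)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = np.array([1.0, -2.0, 1.0]) / h**2
    K = EI * D2.T @ D2 + k * np.eye(n)
    return np.linalg.solve(K, f)

# two downward point loads standing in for the wheel positions
x = np.linspace(0.0, 2.5, 251)
h = x[1] - x[0]
f = np.zeros_like(x)
f[np.searchsorted(x, 0.75)] = -1.0e5 / h   # N/m, load smeared over one cell
f[np.searchsorted(x, 1.75)] = -1.0e5 / h
w = beam_on_elastic_foundation(x, EI=4.0e6, k=5.0e7, f=f)
```

A useful sanity check on the free-end discretisation is global equilibrium: because the second-difference operator annihilates constants, the total foundation reaction balances the applied load exactly.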

An analytic solution to (4) will be the physics-based model used throughout the paper. Physics-based models do not necessarily describe how the physical system operates in practice, e.g. due to structural simplification such as assuming a constant flexural rigidity. To derive the physics-based model for the deflection of a railway sleeper, we assume that during a train passing over the sleeper the forcing has non-zero value at the two points and (experimentally simulated locations of train wheel axles), and (Newtons). A schematic of the sleeper system (including the instrumented sensor network discussed in Sec. 2) is shown in Figure 1. A solution to (4) and an analytical model for the vertical deflection is [19],






Here the remaining quantities are constants determined by the parameters of the system: the flexibility of the sleeper and the ballast stiffness. Let the parameters for this formulation, namely the flexural rigidity, the ballast stiffness and the forcing (which together determine the flexibility of the sleeper), be the components of a parameter vector. Using (5), one can rewrite the relationship between curvature and vertical deflection in (3) as


with the linear differential operator . In Sec. 4, we introduce a joint probabilistic model that uses (8) to combine observed curvature data (through strain data obtained from sensors at ; see Sec. 2) and the physics-based model in (5). Figure 3 shows computed numerically (using finite differences) alongside an example of 5 simulated curvature data points at each coordinate in , where the values of and used in are given by prior beliefs in (15). The forcing used is (approximately equivalent to a typical design train axle load; see also [44]).

In our method, we do not use the full analytic solution for all ; instead we simulate from at and combine these simulations with observed data in the joint model. The only requirement is that one must prescribe a value of (estimation is used to circumvent this later in Sec. 4.1). We use simulation from , instead of the full analytic solution (or infinitely many simulation coordinates), to control how, and where in the domain (measured or unmeasured regions), the physics-based model influences the joint model. If the full solution were used, the joint model would be fully informed by the physics and not by the observed data. In the case of a highly simplified model that poorly represents the observed response of the sleeper, this is detrimental. On the other hand, in the case of a physics-based model that represents the observed response well, it would be desirable to incorporate many simulations into the joint model. These examples motivate the process of 'tuning' the choice of ; see Sec. 4.3 for a discussion of this.

The analytical physics-based model presented in (5) represents a simplified response of the railway sleeper. It is used because it is inexpensive to simulate from and incorporates parameters of interest which govern the sleeper/rail trackbed system, including the ballast stiffness. In other cases, a sophisticated FEM model for might be more appropriate; however, the number of times one can simulate from this may be limited by computational expense. The next section describes how the physical simulations discussed in this section and the strain measurement-derived curvature data from the instrumented railway sleeper can be synthesised in a joint Gaussian process model.


Figure 3: An example of 5 simulated curvature data points at each coordinate in along the sleeper. Also shown, as a dashed black line, is computed numerically from the physics-based model of the vertical deflection. The parameters used in the model are given by prior beliefs in (15). The forcing used is .

4 A data-centric engineering model based on Gaussian processes

This section describes the methodology for synthesising the physics-based model and the curvature data. The modelling of differential equations, such as (8) (revisited here), is an important field of research for physical and engineering applications. In the case considered in this paper, it allows one to model the response of a railway sleeper under forcing by fusing curvature data and physical simulation of vertical deflection on the level of the geometric relationship between them. We will now describe the components of (8). First, recall that represents the true curvature, which we do not have direct access to. Instead, we observe the noisy curvature data located at (see (2)). We will describe later in this section how the observations are used to estimate the true curvature. Second, we define for a prescribed ; hence represents the physics-based model for vertical deflection. A joint model for and using (8) is now described.

A recent landmark paper [26] proposes to learn about differential systems, such as (8), and the parameters within them by training multi-output joint Gaussian process models [20] based on observations from both and . The multiple ‘outputs’ in the model correspond to and , respectively. This joint model is described presently. Assuming a mean-zero Gaussian process prior for ,

one can also construct a Gaussian process model for via the differential operator . The infinite-dimensional stationary covariance kernel used for the prior is

where is an unknown reciprocal length-scale parameter. This is known as the squared exponential covariance function and is a commonly used covariance kernel in many statistical applications, given its attractive properties (e.g. see [47]). For example, as increases from 0 at , the covariance between the two points decreases from its maximum of 1 to approximately 0 when the two points are sufficiently 'far away' from each other. This local dependence behaviour is found in many physical systems. Given the form of the covariance kernel, let be the extended parameter set of the system. For and , we define the matrix as

The linear operator can be used to obtain the covariance kernel of the Gaussian process that models , through


Also define the cross-covariance kernels

In this paper, we concentrate on the case where from (8), and therefore the kernels , and are given by


Given the physics-based model simulation coordinates and the coordinates at which noisy strain measurement-derived curvature data is available at

, one can write the joint distribution

, where is as defined in (2), as



from [20, 26]. This model allows one to model the differential system in (8) probabilistically, estimate the parameters and construct point-wise posteriors for either or . The next two sections describe how simulations from the physics-based model for vertical deflection and noisy curvature data can be used to estimate the parameters within the physics-based model and obtain a posteriori estimates for the sleeper response. The model in (9) is a DCE model [22], combining information about two different physical quantities, one from physical simulation and the other from measurement data obtained from instrumented structures.
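A minimal sketch of the joint covariance in (9) and its log marginal likelihood follows, assuming the squared exponential kernel and taking the curvature operator to be the second derivative. The closed-form derivative kernels are hand-derived under those assumptions rather than taken from the paper, and the sign convention of the operator is itself an assumption.

```python
import numpy as np

def joint_cov(x_sim, x_obs, theta, sigma2):
    """Joint covariance of (deflection at the simulation coordinates,
    noisy curvature at the sensor coordinates), assuming a squared
    exponential prior k = exp(-theta r^2) and curvature = d^2 w / dx^2.
    The derivative kernels are hand-derived under that assumption:
      Cov(w, kappa)     = (4 th^2 r^2 - 2 th) k
      Cov(kappa, kappa) = (16 th^4 r^4 - 48 th^3 r^2 + 12 th^2) k
    """
    def r(a, b):
        return np.subtract.outer(np.asarray(a, float), np.asarray(b, float))
    def k_ww(a, b):
        return np.exp(-theta * r(a, b) ** 2)
    def k_wk(a, b):
        rr = r(a, b)
        return (4 * theta**2 * rr**2 - 2 * theta) * k_ww(a, b)
    def k_kk(a, b):
        rr = r(a, b)
        return (16 * theta**4 * rr**4 - 48 * theta**3 * rr**2
                + 12 * theta**2) * k_ww(a, b)
    top = np.hstack([k_ww(x_sim, x_sim), k_wk(x_sim, x_obs)])
    bottom = np.hstack([k_wk(x_obs, x_sim),
                        k_kk(x_obs, x_obs) + sigma2 * np.eye(len(x_obs))])
    return np.vstack([top, bottom])

def log_marginal_likelihood(z, K):
    """Mean-zero Gaussian log-density of the stacked vector z under K."""
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    return (-0.5 * z @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(z) * np.log(2.0 * np.pi))
```

Maximising this log marginal likelihood over the parameters is, in spirit, the objective used in the estimation step of Sec. 4.1.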

4.1 Parameter estimation

In this section, we describe the procedure used to estimate the parameters of the observation model in (2) and of the model in (9). In the case where the forcing term is known, the parameter vector reduces to . This is the case considered in the remainder of the paper. The estimation procedure is performed in two steps. First, the observation model parameters are estimated empirically using the observed curvature data to give and . The type of estimation used for this first step is commonplace for data involved in Gaussian process regression [27]. Second, the parameters are estimated using and and simulations from the physics-based model for . The full estimation procedure is described in Algorithm 1.

0:  Number of simulations from physics-based model; number of observed locations ; observed locations ; observed curvature data ; regularization function ; physics-based model ;
1:  Compute the estimates
2:  Define to be evenly spaced coordinates on .
3:  By defining for , set and solve
4:  return   and
Algorithm 1 Parameter estimation for the multi-output Gaussian Process model

The input is a regularization function which allows prior belief about the parameters to inform the optimization. From a Bayesian perspective, (12) corresponds to maximum a posteriori (MAP) estimation, where the objective function is the logarithm of the marginal likelihood in (9). For parameters such as the flexural rigidity of a sleeper and the ballast stiffness, there are extensive guidelines (based on past experimental studies) that specify confidence intervals for them [41], which may be used to decide the regularization. Define the following log-normal distribution function,


then we assume that the regularization function takes the form


based on order-of-magnitude estimates from [41]. This is the form of that we consider for the remainder of the paper.

In the case of the simple physics-based model for utilised in this study, the optimization problem can be solved using various iterative methods. Simulated annealing [4] is used for the numerical examples presented in the remainder of the paper. Let the estimated parameter vector for the model in (9) be , where and . Then is the concatenation of the optimized physics-based model simulation and the estimated curvature. The estimation procedure above outlines the training of the proposed multi-output Gaussian process model using the batch of noisy curvature observations . Figure 3(a) shows the same as Figure 3, but with the physics-based model for the vertical deflection computed using the estimated parameters and . They were estimated using simulations from the physics-based model in (13). These parameter estimates lead to physics-informed curvature (via the numerically computed ) that is more representative of the data than that from the prior parameter beliefs in (15).
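The two-step structure of Algorithm 1 — empirical noise estimation followed by regularized (MAP) optimization — can be sketched in a stripped-down form. Here the physics-model simulation step is omitted and only a single kernel parameter is fitted; a bounded scalar search stands in for the simulated annealing used in the paper, and all numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# A stripped-down sketch of the two-step estimation in Algorithm 1,
# using only curvature observations and an SE kernel.

rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 2.5, 6)                      # sensor coordinates
reps = rng.normal(np.sin(x_obs), 0.05, size=(50, 6))  # repeated readings

# Step 1: empirical estimates of the observation mean and noise variance
y_bar = reps.mean(axis=0)
sigma2_hat = reps.var(axis=0, ddof=1).mean()

# Step 2: MAP estimate of the kernel parameter theta, with a Gaussian
# penalty on log(theta) -- i.e. a log-normal prior centred at theta = 1 --
# standing in for the regularization function r in the paper
def neg_log_posterior(log_theta):
    theta = np.exp(log_theta)
    r = np.subtract.outer(x_obs, x_obs)
    K = np.exp(-theta * r**2) + sigma2_hat * np.eye(len(x_obs))
    sign, logdet = np.linalg.slogdet(K)
    nll = 0.5 * (y_bar @ np.linalg.solve(K, y_bar) + logdet)
    penalty = 0.5 * log_theta**2
    return nll + penalty

res = minimize_scalar(neg_log_posterior, bounds=(-5.0, 5.0), method="bounded")
theta_hat = np.exp(res.x)
```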

4.2 Generating a posterior

Once has been obtained from Algorithm 1, a point-wise posterior at for the curvature of the sleeper given can be computed. The posterior can be interpreted as the physics-based model guiding the inference from the sensor data in regions of the domain which are not instrumented with sensors. Consider the joint distribution (the dependence on , , and is dropped for notational convenience),

Then the posterior is given by [27, 28],



This posterior can be used to predict curvature (via sampling from this normal distribution), to detect system changes and to select the number of simulated data points for (see the next two sections). In a similar way, the posterior for the vertical deflection of the sleeper, , can also be obtained, due to the incorporation of the physics-based model; the observed curvature data can therefore be used to infer a physical quantity that the sensors cannot directly measure. Notice that, interestingly, the posterior variance does not depend on the simulated values from the physics-based model, but only on the simulation coordinates . As mentioned above, one could exploit this observation by selecting the positions of the simulation coordinates such that the posterior variance is minimized for future prediction; this is known as active learning [34]. Figure 3(b) shows the same as Figure 3(a), with the addition of the posterior mean and 95% confidence intervals in (16). Notice the greater variance in the posterior for larger values of ; this is a result of the physics-based model agreeing less with the observed curvature data at than at .


(a) The simulated curvature data and model-derived curvature (red) in Figure 3, in addition to computed numerically from the physics-based model for the vertical deflection in a dashed black line. The parameters used in the model and are estimated through Algorithm 1.


(b) The same as Figure 3(a) only with the multi-output Gaussian process posterior mean and 95% confidence intervals as described in (16).
Figure 4:
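The conditioning behind (16) is standard Gaussian process algebra, and a self-contained sketch also demonstrates the property noted above: the posterior variance depends only on the input coordinates, never on the observed values. The kernel, coordinates and readings below are assumptions for illustration.

```python
import numpy as np

def gp_posterior(x_star, x_train, y_train, theta, sigma2):
    """Point-wise GP posterior at x_star given noisy training values,
    under a squared exponential prior k = exp(-theta r^2).

    The posterior mean depends on y_train, while the posterior
    covariance depends only on the coordinates."""
    def k(a, b):
        r = np.subtract.outer(np.asarray(a, float), np.asarray(b, float))
        return np.exp(-theta * r**2)
    Kxx = k(x_train, x_train) + sigma2 * np.eye(len(x_train))
    Ksx = k(x_star, x_train)
    mean = Ksx @ np.linalg.solve(Kxx, np.asarray(y_train, float))
    cov = k(x_star, x_star) - Ksx @ np.linalg.solve(Kxx, Ksx.T)
    return mean, cov

mean, cov = gp_posterior([0.5], [0.0, 1.0], [1.0, -1.0],
                         theta=1.0, sigma2=0.01)
```

By symmetry of the training points about 0.5 and the antisymmetric readings, the posterior mean at 0.5 is zero, while the variance sits strictly between zero and the prior variance.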

4.3 Tuning the quantity of simulation from

This section investigates how the similarity between the observed data and the physics-based model can be assessed using the point-wise posterior for sleeper curvature. This leads to a principled method of selecting , the number of simulated data points from the analytical model for , that most improves the predictive performance of the posterior. In the case of an over-simplified physics-based model, we propose using more observed curvature data points than simulations from the crude physics-based model. We now demonstrate the effect that , the number of simulations from , has on the posterior ; this is done via simulated curvature data. Consider the simulated curvature data sampled from

We take the forcing as N. The parameters of the multi-output Gaussian process in (9) are estimated using and Algorithm 1, along with the number of simulated data points set to , and . The point-wise curvature posteriors over a range of are then given for each model by (16), and the corresponding means and 95% confidence intervals are shown in Figure 5. The blue points show the simulated curvature samples at each point in . For larger values of the posterior is more akin to the physics-based model than to the observed data. On the other hand, in the extreme case of , the proposed method becomes standard Gaussian process regression (kriging) on the observed curvature data; the posterior is based solely on the observed data. As increases, the parameter estimates from the associated models change accordingly, for example resulting in the posterior drifting away from the data when . In this case, the posterior is not representative of the observed data since it incorporates too many simulations from the physics-based model. To obtain a posterior more representative of the observed data, fewer simulations from the physics-based model are required, but enough that the posterior has less variance than in the case of . This indicates that a trade-off exists between the number of simulations from , , and the predictive performance of the posterior with respect to the observed data.


Figure 5: The posteriors of curvature corresponding to the Gaussian process models trained around (top), (middle) and (bottom) simulation coordinates from . The points show the simulated samples of curvature data at each point in , and the dashed lines show the 95% confidence interval for the posterior.

The mean squared error is now employed as a metric to evaluate the data/physics information trade-off highlighted here, and to tune the number of simulations from . The objective here is to find the value of that maximises the predictive performance of the posterior in (16), for a prescribed , with respect to a test data-point. Here we suggest to use , derived from the noisy data at , as the test data-point. Equally this could be exchanged with either or . Let using (16), where , and are evenly spaced coordinates along the domain. Here, , , and are estimated by using Algorithm 1 with the inputs and (therefore excluding the noisy curvature observations at the last sensor location). Then the mean squared error of with respect to the test data-point , , is to be minimized,


The mean squared error is the sum of the squared bias of , with respect to , and the variance of ; the hope is that the posterior corresponding to the minimum in (17) will allow the physics-based model to guide it in the region where data are missing, without being overconfident. In practice, an interval search can be implemented between two end-points and to find the minimum in (17), for suitable values of and . A summary of how this procedure interacts with the modelling framework set out in Secs. 4.1 and 4.2 is given here:

  • For , implement Algorithm 1 with and as inputs to step 1 to estimate .

  • For each compute the posterior in (16) and record the mean squared error where .

  • Choose the value of corresponding to the lowest mean squared error.

  • Implement Algorithm 1 with this value of , and by using and as inputs to step 1, to obtain the posterior in (16).
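The interval search in the steps above can be sketched as follows. This is a simplified, illustrative implementation only: it uses a single-output Gaussian process with a fixed RBF kernel in place of the multi-output model of (9) and Algorithm 1, and the helper names (`gp_posterior`, `tune_n_sims`, `physics_model`) are assumptions introduced here, not part of the paper.

```python
import numpy as np

def rbf(x1, x2, ls=0.1, var=1.0):
    """Squared-exponential kernel (hyper-parameters fixed for the sketch)."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_star, noise=1e-2):
    """Standard GP regression posterior mean and variance at x_star."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    k_s = rbf(x_train, x_star)
    mean = k_s.T @ alpha
    v = np.linalg.solve(L, k_s)
    var = rbf(x_star, x_star).diagonal() - np.sum(v ** 2, axis=0)
    return mean, var

def tune_n_sims(x_obs, y_obs, x_test, y_test, physics_model,
                m_min=2, m_max=14):
    """Interval search over the number of simulated points m, minimising
    the held-out MSE = squared bias + posterior variance, as in (17)."""
    best_m, best_mse = m_min, np.inf
    for m in range(m_min, m_max + 1):
        x_sim = np.linspace(0.0, 1.0, m)   # uniform simulation coordinates
        y_sim = physics_model(x_sim)       # simulated curvature samples
        x_all = np.concatenate([x_obs, x_sim])
        y_all = np.concatenate([y_obs, y_sim])
        mean, var = gp_posterior(x_all, y_all, np.atleast_1d(x_test))
        mse = (mean[0] - y_test) ** 2 + var[0]
        if mse < best_mse:
            best_m, best_mse = m, mse
    return best_m, best_mse
```

Under these assumptions, `tune_n_sims` returns the value between the two end-points that minimises the held-out mean squared error at the excluded sensor location.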

Figure 6 shows the posteriors for the same simulated curvature data as used in Figure 5, with , and again. The simulated curvature samples at the last coordinate in , which are excluded from the training of the models, are shown here. One can see the effect of the additional simulation coordinates, for the posteriors corresponding to and , in imparting structure from the physics-based model. Figure 7 shows the mean squared error for the posteriors corresponding to the range of values between 2 and 14 (again computed using the same simulated curvature data as above); this quantifies the predictive performance of the posteriors shown in Figure 5. By using the mean squared error to tune in this way, the value of can serve as a proxy for how well the data fit the physics-based model.

Once has been prescribed, the choice of the coordinates must be decided. For the remainder of the paper they are distributed uniformly along the entire domain . In a more advanced scheme, one could use active learning techniques [34], which prescribe simulation coordinates that minimise the variance of a posteriori estimates. In addition, knowledge of where the curvature data are available, , could inform the choice of simulation coordinates . For example, if all data were known to come from a single region of the domain, simulations should be taken at points outside this region, so as to guide the inference from the data in those areas as much as possible.
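The two placement strategies just described (uniform coverage, or concentrating simulation coordinates outside the data region) can be sketched with a small helper. The function name `simulation_coords` and the proportional-split rule are assumptions introduced for illustration; the paper itself only prescribes the uniform case.

```python
import numpy as np

def simulation_coords(m, domain=(0.0, 1.0), data_region=None):
    """Place m simulation coordinates: uniformly over the domain by
    default, or, if the observed data cover only `data_region`,
    concentrate the points in the uncovered sub-intervals to guide the
    inference there."""
    lo, hi = domain
    if data_region is None:
        return np.linspace(lo, hi, m)
    d_lo, d_hi = data_region
    # Split the m points between the two uncovered sub-intervals,
    # in proportion to their lengths.
    left_len, right_len = d_lo - lo, hi - d_hi
    total = left_len + right_len
    m_left = int(round(m * left_len / total)) if total > 0 else 0
    left = (np.linspace(lo, d_lo, m_left, endpoint=False)
            if m_left else np.array([]))
    right = np.linspace(d_hi, hi, m - m_left)
    return np.concatenate([left, right])
```

For example, with all sensors clustered in the middle of the sleeper, the helper would place every simulation coordinate outside that central region.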


Figure 6: The posteriors corresponding to the same simulated curvature data as used in Figure 5, for (top), (middle) and (bottom).


Figure 7: The mean squared error of the posteriors with respect to , for the range of values between 2 and 14.

4.4 Detection of system changes

This section discusses the use of the methodology presented in the previous sections for detecting changes in the modelled system and in the parameters of the physics-based model. In the application considered in this paper, an underlying change in the properties of the sleeper (and its foundation) could signify potential damage; the methods discussed in this section can therefore be used for structural damage detection. The aim of detecting system changes is to signal when the underlying physics (e.g. the model parameters) has changed. To achieve this we use batches (length ) of curvature data , for . Then are the sample means, in (10), of the batches. This can be implemented alongside the proposed methodology via two general approaches:

  • Change in estimated parameters: Fit separate Gaussian process models for the curvature data , for using Algorithm 1, and find estimates for denoted by . Then statistically compare with .

  • Change in log marginal likelihood: Let and find the log marginal likelihood . Statistically compare this to .
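The log-marginal-likelihood approach can be sketched as below. This is an illustrative single-output simplification: a zero-mean GP with a fixed RBF kernel stands in for the fitted multi-output model, and the helper names (`log_marginal_likelihood`, `detect_change`) and the hard-threshold rule are assumptions introduced here.

```python
import numpy as np

def log_marginal_likelihood(x, y, ls=0.1, sig=1.0, noise=1e-2):
    """GP log marginal likelihood log N(y | 0, K + noise*I) under a
    fixed RBF kernel; this is the quantity compared across batches."""
    d = x[:, None] - x[None, :]
    K = sig * np.exp(-0.5 * (d / ls) ** 2) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(y) * np.log(2 * np.pi))

def detect_change(x, batch_means, baseline_lml, threshold):
    """Flag each batch whose log marginal likelihood drops more than
    `threshold` below the baseline (first-batch) value."""
    flags = []
    for y_bar in batch_means:
        lml = log_marginal_likelihood(x, y_bar)
        flags.append(baseline_lml - lml > threshold)
    return flags
```

A batch whose sample mean departs from the behaviour learned on the first batch yields a markedly lower likelihood and is flagged; the choice of threshold is the hard-threshold issue discussed below.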

The former technique is commonly utilised in model updating, for example by using the relative error of with respect to as a metric to detect significant parameter changes over time [33]. On this note, many popular unsupervised change-point detection algorithms, such as those presented in [23, 15], can be used alongside the two approaches above to identify a sudden change in parameter estimates or marginal likelihood. There are many aspects of these methods to consider, such as the uncertainty in the hypothesis test statistics and the choice of any hard thresholds. For the results presented in the next section, we will use the metrics discussed here to assess changes in the modelled system over time.

5 Results

The following sections contain example implementations and results of the proposed methodology, for simulated data and for experimental data obtained under laboratory conditions. Both examples consist of the system change detection problems discussed in the previous section.

5.1 Simulated curvature data

This section demonstrates the effectiveness of the proposed methodology for detecting changes in the modelled system. A data set of 100 data-points is simulated: the first 50 curvature data-points are generated using the values and for the flexural rigidity and the ballast stiffness respectively, and the remaining 50 are generated using the values and . Further, we set the force value N. The simulated curvature data, , for , are generated via perturbations to a finite-difference approximation to (8); namely, for all ,


where and and . The proposed multi-output Gaussian process model is trained using the first batch (length ) of data, , in Algorithm 1. In order to quantify the uncertainty in detecting simulated parameter changes, given the uncertainty in the data-points, the log marginal likelihood and parameter estimates in the following sections will be computed over 20 independently sampled data sets.
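The data-generation and batching scheme just described can be sketched as follows. Since the finite-difference solution of (8) is not reproduced here, `curvature_model` is a placeholder closed-form curvature profile for a beam on an elastic foundation; its exact shape, the helper names, and the noise level are all assumptions for illustration, chosen only so that the profile depends on the flexural rigidity EI and ballast stiffness k.

```python
import numpy as np

rng = np.random.default_rng(0)

def curvature_model(x, EI, k, P=1.0):
    """Placeholder for the finite-difference solution of (8): a
    damped-oscillatory beam-on-elastic-foundation curvature shape with
    characteristic wavenumber lam = (k / (4*EI))**0.25."""
    lam = (k / (4.0 * EI)) ** 0.25
    return (P * lam / (2.0 * k)) * np.exp(-lam * np.abs(x)) * (
        np.cos(lam * np.abs(x)) - np.sin(lam * np.abs(x)))

def simulate_dataset(x, EI1, k1, EI2, k2, n=100, noise=1e-4):
    """First n/2 curvature records from (EI1, k1), the remaining half
    from (EI2, k2), each perturbed by i.i.d. Gaussian noise as in (18)."""
    data = np.empty((n, len(x)))
    for i in range(n):
        EI, k = (EI1, k1) if i < n // 2 else (EI2, k2)
        data[i] = curvature_model(x, EI, k) + noise * rng.standard_normal(len(x))
    return data

def batch_means(data, batch_len=10):
    """Sample means, as in (10), of consecutive batches of records."""
    n_batches = data.shape[0] // batch_len
    return data[:n_batches * batch_len].reshape(n_batches, batch_len, -1).mean(axis=1)
```

Halving the flexural rigidity midway through, as in the experiment below, visibly shifts the later batch means relative to the earlier ones.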

5.1.1 Changing the flexural rigidity

In the first case we set , and ; these values are of the same order of magnitude as the prior beliefs (see 15) for the sleeper system considered in this paper. Thus, the flexural rigidity is reduced at the midpoint of the simulation and the ballast stiffness is kept fixed. A simulated reduction, as opposed to an increase, in the flexural rigidity (EI) represents the cracking of concrete, which occurs at the later stages of the experimental test in the next section. The tuning procedure outlined in Sec. 4.3 is used to specify the value of for the training of the proposed model; this results in . Using the training data and Algorithm 1, the estimates are obtained. Due to the variance in the simulated data-points from (18), the estimates are not expected to be exactly the prescribed values, and .

Next, the log marginal likelihood is computed, recalling that from Sec. 4.4, where is the sample mean in (10) of the batch , for . Figure 8 shows these log marginal likelihoods over all batches of data (and all 20 sampled data sets). Also shown in this figure are the values of and (over all data sets) obtained by fitting separate Gaussian process models to each batch of simulated data points , for , using Algorithm 1. The results correctly detect that is reduced after the true change occurs at the 50th data point, and that stays approximately constant throughout. Note also the clear decrease in likelihood after the true change in occurs.


Figure 8: Box-plots (over 20 independently sampled data sets) of the log marginal likelihood and the estimates and for each batch () of simulated curvature data points. As the true change in occurs after the 50th data point, the likelihood decreases.

5.1.2 Changing the ballast stiffness

In the second case we set , and . Thus the ballast stiffness now changes at the midpoint of the simulation and the flexural rigidity is kept fixed. As in the previous case of varying flexural rigidity, the log marginal likelihoods, together with the estimates and , are obtained for each batch of simulated curvature data points (over 20 independently sampled data sets) , for . These quantities are shown in Figure 9. Once again, the likelihoods and the values of decrease significantly after the true change in occurs. In addition, the new estimates stay approximately constant, as expected.


Figure 9: Same as Figure 8 only for a change in for the second set of 50 simulated data points.

5.1.3 Receiver operating characteristic analysis

A receiver operating characteristic (ROC) analysis [11] will now be carried out to demonstrate the sensitivity of the log marginal likelihood shown in Figures 8 and 9 to changes in the parameters and during the second set of 50 data points. This analysis evaluates how large a change in the parameters must be in order to produce a significant shift in the log marginal likelihood. The parameters are changed from and to and respectively after the 50th simulated data point, as in Sec. 5.1, and are varied as follows: and . There are therefore different parameter combinations after the change in this analysis. We now define the signalling procedure, based on the log marginal likelihood , used to flag changes in the model parameters. Fitting a normal distribution to the log marginal likelihood from using the first batch of data points, , to train the Gaussian process through Algorithm 1, we define and to be the empirical mean and standard deviation over 1000 independent data sets in (18). The test-statistic for the signalling procedure is then based on the log marginal likelihood , for , and is given by

Here, is a Gaussian distribution function with mean and standard deviation . Using a threshold to signal a change in the log marginal likelihood, define the true positive detections as an actual change in the true parameters when one is signalled, i.e. . Similarly, define the false positive detections as no actual change in the true parameters when one is signalled, i.e. , where . For a fixed parameter combination, the detection rates and , for , are computed over 1000 independent data sets:


A ROC curve is the graph of against for varying . Figure 10 shows these curves for the parameter change combinations of and . The area under each curve (AUC), , represents the probability of detecting a random instance of a change in the parameters with respect to the test statistic used. The AUC for all parameter combinations is summarised in Figure 11. This plot shows the magnitude of parameter changes to which detection using the log marginal likelihood is sensitive; this includes the values of and prescribed in Secs. 5.1.1 and 5.1.2 respectively. The ROC analysis has allowed us to evaluate the sensitivity of the log marginal likelihood from the proposed multi-output Gaussian process model to parameter changes within the physics-based model. For example, the AUC suggests that a true decrease of 10% in both parameters, i.e. and , is 80% likely to be detected. Figure 12 shows boxplots of the simulated curvature data in (18), for each location in , corresponding to this 10% decrease in the parameters. Note that the 10% change in the true parameters is difficult to detect by visual inspection, yet the signalling procedure detects it with high probability.
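The ROC construction in (19) can be sketched as below. This is a generic illustration: `roc_curve_points` and `auc` are helper names introduced here, the test statistics are arbitrary example values rather than the paper's likelihood-based statistic, and a detection is assumed to be signalled when the statistic exceeds the threshold.

```python
import numpy as np

def roc_curve_points(stats_changed, stats_unchanged, thresholds):
    """TPR/FPR pairs over a grid of thresholds, as in (19): a change is
    signalled when the test statistic exceeds the threshold."""
    tpr = np.array([(stats_changed > t).mean() for t in thresholds])
    fpr = np.array([(stats_unchanged > t).mean() for t in thresholds])
    return tpr, fpr

def auc(tpr, fpr):
    """Area under the ROC curve via the trapezoidal rule, with points
    sorted by increasing FPR (then TPR, to handle ties)."""
    order = np.lexsort((tpr, fpr))
    x, y = fpr[order], tpr[order]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))
```

With perfectly separated statistics the AUC is 1; an uninformative statistic gives an AUC near 0.5, matching the interpretation of the AUC as a detection probability.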

Figure 10: ROC curves showing the true positive detection rate (TPR) and false positive detection rate (FPR) in (19). They are computed for the parameter change combinations