The role of surrogate models in the development of digital twins of dynamic systems

25 January 2020 ∙ Souvik Chakraborty, et al. ∙ Indian Institute of Science, The University of British Columbia, Swansea University

Digital twin technology holds significant promise, relevance and potential for widespread applicability in industrial sectors such as aerospace, infrastructure and automotive. However, the adoption of this technology has been slower than expected due to a lack of clarity about how it should be applied to specific problems. In this paper, a discrete damped dynamic system is used to explore the concept of a digital twin. As digital twins are expected to exploit data and computational methods, there is a compelling case for the use of surrogate models in this context. Motivated by this synergy, we explore the possibility of using surrogate models within the digital twin technology. In particular, the use of a Gaussian process (GP) emulator within the digital twin framework is investigated. The GP has an inherent capability to address noisy and sparse data and hence makes a compelling case for use within the digital twin framework. Cases involving stiffness variation and mass variation are considered, individually and jointly, along with different levels of noise and sparsity in the data. Our numerical simulation results clearly demonstrate that surrogate models such as GP emulators have the potential to be an effective tool for the development of digital twins. Aspects related to data quality and sampling rate are analysed. Key concepts introduced in this paper are summarised and ideas for urgent future research needs are proposed.


1 Introduction

A digital twin is a virtual model of a physical system which exists in the computer cloud. Increasingly, this physical system is called a physical twin [1]. Attempts to emulate the behaviour of physical systems are a key component of engineering and science. However, the difference between a digital twin and a conventional computer model is that the digital twin updates itself to track the physical twin through the use of sensors, data analysis, machine learning, and the internet of things. The digital twin may also direct changes to the physical twin through signals sent to actuators on the physical twin. Ideally, the physical and digital twins must achieve temporal synchronization.

The widespread connectivity of physical systems to the internet allows the tracking of millions of physical twins in real time. However, the challenge is to ensure that the digital twin mimics the physical system as closely as possible as time evolves. In other words, the digital twin must evolve with the physical twin over its life cycle. At the end of life, both the digital and physical twins are terminated. The physical twin exists in the real world, while the digital twin typically exists in the cloud [2]. Important problems in the development of digital twin technology relate to the modelling and simulation of the physical twin, the accuracy and speed of data transmission between the physical twin and the cloud where the digital twin resides, the storage and processing of the big data created by sensors throughout the life cycle, the efficient computer processing of the digital twin model in the cloud, and possibly the actuation of the physical twin through signals sent by the digital twin from the cloud to actuators on the physical twin.

Several authors have considered the concept of digital twin over the last two decades. An early paper on digital twins in the context of life prediction of aircraft structures was published by Tuegel and his co-workers [3]. Their objective was to use an ultrahigh fidelity model of an individual aircraft identified by its tail number to predict its structural life. This research was motivated by the growing power of computers and the evolution of high-performance computing. The key idea introduced in this paper was the need to track each individual aircraft by its tail number, which is a unique identifier. Thus, every physical aircraft in the fleet would have a separate digital twin, which will evolve differently compared to the other twins as each aircraft faces unique combinations of missions, loads, environment, pilot behaviour, sensor situation, etc. This concept of digital twins was later expanded to any physical system such as cars, computer servers, locomotives, turbines, machine tools, etc. The primary applications of digital twin have been in production, product design, and prognostics and health management (PHM) [4, 5, 6, 7, 8].

Digital twin technology has found many industrial applications, as summarized by Tao et al. in their recent review paper [9]. They mention that the digital twin is an enabling technology for realizing smart manufacturing and Industry 4.0. In Industry 4.0, factories are envisaged with wireless connectivity and sensors which allow the production line to function automatically. As physical systems become embellished with sensors, data transmission technology allows the collection of data throughout the life of the physical system. This data set is typically very large, and big data analytics is needed to find failure causes, streamline supply chains and enhance production efficiency. According to this review paper, prognostics and health management is the area which has seen the greatest application of digital twin technology. Li et al. [10] created a digital twin based on a dynamic Bayesian network to monitor the operational state of aircraft wings. A probabilistic model was used to replace the deterministic physical model. Digital twins have also been used in the PHM of cyber-physical systems and additive manufacturing processes. The fusion of the physical and the cyber systems remains the key challenge for the use of digital twins, and one approach to address this problem is to make the modelling and simulation tasks more efficient. Surrogate modelling could be one approach to increase the efficiency of the simulation process.

In [11], the authors proposed a physics-based digital twin model for a dynamical system. The development was based on the assumption that the properties of the physical system typically evolve much more slowly than real time. This evolution thus takes place in a slow time-scale as compared to real time. A single-degree-of-freedom system model was created to explain the concept of a digital twin and to study the effect of changes in the system mass and stiffness properties. Closed-form solutions to obtain the evolution of mass and stiffness with the slow time were proposed. If the data collected are clean, the proposed closed-form solutions yield exact estimates of the mass and stiffness evolution. However, the proposed physics-based digital twin has two major limitations. First, it only yields the mass and stiffness at the slow time-steps, i.e., the time-steps at which we have sensor measurements. In other words, the physics-based digital twin is not capable of providing the mass and stiffness at intermediate and future times. Second, the physics-based digital twin can fail if the data collected are contaminated by noise. This is a major limitation as, in real life, collected data are always corrupted by noise.

From this discussion, it is clear that there is a lack of clarity and specificity regarding digital twins. Although there exists a broad consensus about what a digital twin is, there is almost no detailed methodology on how to develop one for a given system. In this work, we introduce the concept of surrogate models within the digital twin framework. This is motivated by the fact that a physics-driven digital twin, such as the one proposed in [11, 12], can only provide estimates at certain discrete time-steps (the time-steps corresponding to sensor measurements). Moreover, such digital twins are likely to yield erroneous results when the data collected are corrupted by complex noise. By definition, a surrogate model can be considered to be a proxy for the actual high-fidelity model. Popular surrogate models available in the literature include analysis-of-variance decomposition [13], polynomial chaos expansion [14, 15], support vector machines [16, 17, 18], artificial neural networks [19, 20, 21] and the Gaussian process (GP) [22, 23, 24]. Successful use of surrogate models can be found in various domains including, but not limited to, stochastic mechanics [25, 26], reliability analysis [27, 28] and optimisation [29, 30, 31]. Here we illustrate how surrogate models can be integrated with the digital twin technology and what possible advantages can be achieved from this integration.

Although all the surrogate models discussed above can be used within the digital twin framework, we explore the use of GP. In essence, GP is a probabilistic machine learning technique that infers a distribution over functions and then utilises it to make predictions at unknown points. The advantage of GP over other surrogate models is two-fold. First, being a probabilistic surrogate model, GP is resistant to over-fitting. This is extremely important for the digital twin, as the data collected are corrupted by complex noise and any form of over-fitting will result in erroneous predictions. Second, GP is able to quantify the uncertainty due to limited and noisy data. This, in turn, can inform the decision-making process using a digital twin. In this paper, we illustrate the use of the vanilla GP as a surrogate within the digital twin model.

The paper is organized as follows. In Section 2, the equation of motion of an SDOF digital twin is introduced using multiple time-scales. The fundamentals of GP are discussed in Section 3. The development of a GP based digital twin is considered in Section 4. Three separate cases with (a) only stiffness evolution, (b) only mass evolution and (c) both mass and stiffness evolution are considered. Numerical examples are given to illustrate the proposed ideas. Some critical discussion on the proposed methodology as well as the overall development of digital twins is given in Section 5 and concluding remarks are given in Section 6.

2 The dynamic model of the digital twin

A single-degree-of-freedom dynamic system is considered to explore the concept of a digital twin. A key idea is that a digital twin starts its journey from a ‘nominal model’. The nominal model is therefore the ‘initial model’ or the ‘starting model’ of a digital twin. For engineering dynamic systems, the nominal model is often a physics-based model which has been verified, validated and calibrated. For example, this can be a finite element model of a car or an aircraft when the product leaves the manufacturing facility and is ready to go into service. Another key characteristic of a digital twin is its connectivity. The recent development of the Internet of Things (IoT) brings forward numerous new data technologies and consequently drives the development of digital twin technology. This enables connectivity between the physical SDOF system and its digital counterpart. Digital twin technology rests on this connection; without it, it cannot exist. This connectivity is created by sensors on the physical system which obtain data and integrate and communicate these data through various integration technologies.

We consider a physical system which can be well approximated by a single-degree-of-freedom spring, mass and damper system as before [11]. It is reasonable to assume that the sensors sample data intermittently, typically at discrete time points. It is assumed that changes in the mass, damping and stiffness are so slow that the dynamics of the system is effectively decoupled from these functional variations. The equation of motion can be written as

(1)   $m(t_s)\,\frac{\partial^2 u(t,t_s)}{\partial t^2} + c(t_s)\,\frac{\partial u(t,t_s)}{\partial t} + k(t_s)\,u(t,t_s) = f(t,t_s)$

Here $t$ and $t_s$ are the system time and a “slow time", respectively. The response $u(t,t_s)$ is a function of two variables and therefore the equation of motion is expressed in terms of the partial derivative with respect to the time variable $t$. The slow time, or service time, $t_s$ can be considered as a time variable which is much slower than $t$. For example, it could represent the number of cycles in an aircraft. Thus, the mass $m(t_s)$, damping $c(t_s)$, stiffness $k(t_s)$ and forcing $f(t,t_s)$ change with $t_s$, for example due to system degradation during its service life. The forcing is a function of both the time $t$ and the slow time $t_s$, as is the system response $u(t,t_s)$. Eq. 1 is considered as a digital twin of an SDOF dynamic system. When $t_s = 0$, that is at the beginning of the service life of the system, the digital twin of Eq. 1 reduces to the nominal system

(2)   $m_0\,\ddot{u}_0(t) + c_0\,\dot{u}_0(t) + k_0\,u_0(t) = f_0(t)$

where $m_0$, $c_0$, $k_0$ and $f_0(t)$ are respectively the mass, damping, stiffness and force at $t_s = 0$, and $u_0(t)$ is the corresponding response. It is assumed that sensors are deployed on the physical system and take measurements at discrete values of the slow time $t_s$. The functional form of the relationship of the mass, stiffness and forcing with $t_s$ is unknown and needs to be estimated from measured sensor data. We assume that the damping is small so that the effect of variations in $c(t_s)$ is negligible. In effect, only variations in the mass and stiffness are considered. Without any loss of generality, the following functional forms are considered

(3)   $k(t_s) = k_0\,\beta_k(t_s), \qquad m(t_s) = m_0\,\beta_m(t_s)$

In general, $\beta_k(t_s)$ is expected to be a decaying function over a long time, representing a loss in the stiffness of the system. On the other hand, $\beta_m(t_s)$ can be an increasing or a decreasing function. For example, in the context of an aircraft, it can represent the loading of cargo and passengers and also the use of fuel as the flight progresses. The following representative functions have been chosen as examples

(4)
(5)

Here SawTooth($\cdot$) represents a sawtooth wave with a prescribed period. In Fig. 1, the overall variations in the stiffness and mass properties arising from these model functions are plotted as a function of the slow time normalised to the natural time period of the nominal model.



Fig. 1: Examples of model functions representing long term variabilities in the mass and stiffness properties of a digital twin system.

Representative numerical values are chosen for the parameters appearing in these functions to produce the variations shown in Fig. 1. The choice of these functions is motivated by the fact that the stiffness degrades over time in a periodic manner, representing possible fatigue crack growth in an aircraft over repeated pressurisation. On the other hand, the mass increases and decreases about the nominal value due to re-fuelling and fuel burn over a flight period. The key consideration is that a digital twin of the dynamical system should track these types of changes by exploiting sensor data measured on the system.
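To make the setting concrete, the short Python sketch below evaluates hypothetical stand-ins for $\beta_k(t_s)$ and $\beta_m(t_s)$: a slowly decaying, periodically perturbed stiffness ratio and a sawtooth mass ratio. The functional forms and parameter values are illustrative assumptions only; they are not the exact choices behind Fig. 1.

```python
import numpy as np
from scipy.signal import sawtooth

def beta_k(ts, alpha=0.05, eps=0.1, omega=2.0 * np.pi / 5.0):
    """Hypothetical stiffness ratio k(ts)/k0: slow exponential decay with a
    small periodic perturbation (illustrative only)."""
    return np.exp(-alpha * ts) * (1.0 + eps * np.sin(omega * ts))

def beta_m(ts, eps=0.15, period=10.0):
    """Hypothetical mass ratio m(ts)/m0: sawtooth variation, e.g. refuelling
    followed by gradual fuel burn (illustrative only)."""
    return 1.0 + eps * sawtooth(2.0 * np.pi * ts / period)

ts = np.linspace(0.0, 50.0, 500)          # normalised slow time
k_ratio, m_ratio = beta_k(ts), beta_m(ts)
print(k_ratio[:3], m_ratio[:3])
```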

3 Overview of Gaussian process emulators

One of the most popular machine learning techniques is the Gaussian process (GP) [22, 23]. The Gaussian process has been successfully used in the context of structural dynamic analysis [32] and the finite element method [33]. Unlike frequentist machine learning techniques such as the analysis-of-variance decomposition [13], support vector machines [17] and neural networks [20], GP adopts a probabilistic approach: it infers a distribution over functions and then utilises it to make predictions at unknown points [34]. We consider $x$ as the input variable and

(6)   $\mathbf{y} = \left\{ y_i : y_i = f(x_i) + \epsilon_i, \; i = 1,\ldots,N \right\}$

to be a set of noisy measurements of the response variable, where $\epsilon_i$ represents the noise. With this setup, the objective is to estimate the latent (unobserved) function $f(x)$ that will enable prediction of the response variable $y^*$ at new values of $x$. In GP based regression, we define a GP over $f(x)$ with mean $\mu(x)$ and covariance function $\kappa(x, x'; \boldsymbol{\theta})$

(7)   $f(x) \sim \mathcal{GP}\left( \mu(x), \kappa(x, x'; \boldsymbol{\theta}) \right)$

In the above equation

(8)   $\mu(x) = \mathbb{E}\left[ f(x) \right], \qquad \kappa(x, x'; \boldsymbol{\theta}) = \mathbb{E}\left[ \left( f(x) - \mu(x) \right)\left( f(x') - \mu(x') \right) \right]$

and $\boldsymbol{\theta}$ denotes the hyperparameters of the covariance function $\kappa(\cdot,\cdot;\boldsymbol{\theta})$. The choice of the covariance function allows encoding of any prior knowledge about $f(x)$ (e.g., periodicity, linearity, smoothness), and can accommodate approximation of arbitrarily complex functions [35]. The notation in Eq. 7 implies that any finite collection of function values has a joint multivariate Gaussian distribution, that is, $\left[ f(x_1), \ldots, f(x_N) \right]^T \sim \mathcal{N}\left( \boldsymbol{\mu}, \mathbf{K} \right)$, where $\boldsymbol{\mu}$ is the mean vector and $\mathbf{K}$ is the covariance matrix with $\mathbf{K}_{ij} = \kappa(x_i, x_j; \boldsymbol{\theta})$ for $i, j = 1, \ldots, N$. If no prior information is available about the mean function, it is generally set to zero, i.e. $\mu(x) = 0$. However, for the covariance function, any function that generates a positive semi-definite covariance matrix is a valid covariance function. With this set-up, the objective of GP is to estimate the hyperparameters, $\boldsymbol{\theta}$, based on the observed input-output pairs, $\mathcal{D} = \left\{ x_i, y_i \right\}_{i=1}^{N}$, where $N$ is the number of training samples. In general, this is achieved by maximising the likelihood of the data. Alternatively, one can choose to adopt a Bayesian approach to compute the posterior of the hyperparameters, $p(\boldsymbol{\theta} \mid \mathcal{D})$. Once the hyperparameters $\boldsymbol{\theta}$ have been computed, the predictive distribution of $y^*$ given the dataset $\mathcal{D}$, the hyperparameters and a new input $x^*$ is represented as

(9)   $\left. y^* \right| \mathcal{D}, \boldsymbol{\theta}, x^* \sim \mathcal{N}\left( \mu^*(x^*), \sigma^{*2}(x^*) \right)$

where

(10)   $\mu^*(x^*) = \mathbf{k}_*^{T}\left( \mathbf{K} + \sigma_n^2 \mathbf{I} \right)^{-1}\mathbf{y}, \qquad \sigma^{*2}(x^*) = \kappa(x^*, x^*; \boldsymbol{\theta}) - \mathbf{k}_*^{T}\left( \mathbf{K} + \sigma_n^2 \mathbf{I} \right)^{-1}\mathbf{k}_*$

with $\mathbf{k}_* = \left[ \kappa(x^*, x_1; \boldsymbol{\theta}), \ldots, \kappa(x^*, x_N; \boldsymbol{\theta}) \right]^T$ and $\sigma_n^2$ the variance of the measurement noise.

Next, we employ GP to obtain the digital twin of a single-degree-of-freedom damped dynamic system.
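A minimal sketch of the predictive equations above is given below: a zero-mean GP with a squared-exponential covariance function is conditioned on noisy observations to obtain the predictive mean and variance. The hyperparameters are fixed for brevity, whereas in the paper they are estimated by maximising the likelihood; the function names, data and parameter values are assumptions of this sketch.

```python
import numpy as np

def sq_exp_kernel(X1, X2, ell=1.0, sigma_f=1.0):
    """Squared-exponential covariance kappa(x, x'; theta) with theta = (ell, sigma_f)."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, X_star, ell=1.0, sigma_f=1.0, sigma_n=0.1):
    """Zero-mean GP predictive mean and variance for noisy data (cf. Eqs. (9)-(10))."""
    K = sq_exp_kernel(X, X, ell, sigma_f) + sigma_n**2 * np.eye(len(X))
    K_s = sq_exp_kernel(X_star, X, ell, sigma_f)
    K_ss = sq_exp_kernel(X_star, X_star, ell, sigma_f)
    mu_star = K_s @ np.linalg.solve(K, y)
    cov_star = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mu_star, np.diag(cov_star) + sigma_n**2

# Toy usage: noisy samples of a smooth latent function
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 25)
y = np.sin(X) + 0.1 * rng.standard_normal(X.size)
mu, var = gp_predict(X, y, np.linspace(0.0, 10.0, 200))
```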

4 Gaussian process based digital twin

The development of the digital twin concept hinges on advances in sensor technology. Using modern sensors, it is possible to collect different types of data such as acceleration time-histories and displacement time-histories. In this paper, we assume that, using sensors, the natural frequency of the system described in Eq. 1 has been measured at some distinct time-steps. Wireless sensor technology is making such measurements and their transmission possible [36]. For example, Wang et al. [37] showed that the natural frequency of a system can be measured by using sensor technology, a fact that we use in this paper.

4.1 Digital twin via stiffness evolution

4.1.1 Formulation

In the first case, we assume that the change in the natural frequency with time is due to the deterioration of the stiffness of the system only. With this, the equation of motion of the digital twin is represented as

(11)   $m_0\,\frac{\partial^2 u(t,t_s)}{\partial t^2} + c_0\,\frac{\partial u(t,t_s)}{\partial t} + k(t_s)\,u(t,t_s) = f(t,t_s)$

where

(12)   $k(t_s) = k_0\,\beta_k(t_s)$

Eq. 11 is a special case of Eq. 1. Note that in a practical scenario, we generally have no prior information about how the stiffness will deteriorate. We postulate that the deterioration of stiffness can be represented by using a GP

(13)   $\beta_k(t_s) \sim \mathcal{GP}\left( \mu_k(t_s), \kappa_k(t_s, t_s'; \boldsymbol{\theta}_k) \right)$

However, at this stage, we have no measurements corresponding to $\beta_k(t_s)$ and hence, it is not possible to estimate the hyperparameters, $\boldsymbol{\theta}_k$, of the GP. Instead, if we substitute Eqs. 12 and 13 into Eq. 11, the governing differential equation of the digital twin can be represented as

(14)   $m_0\,\frac{\partial^2 u(t,t_s)}{\partial t^2} + c_0\,\frac{\partial u(t,t_s)}{\partial t} + k_0\,\beta_k(t_s)\,u(t,t_s) = f(t,t_s), \qquad \beta_k(t_s) \sim \mathcal{GP}\left( \mu_k(t_s), \kappa_k(t_s, t_s'; \boldsymbol{\theta}_k) \right)$

Solving Eq. 14, the natural frequency of the system is obtained as

(15)   $\lambda(t_s) = \omega_{n0}\left( -\zeta_0 \pm \mathrm{i}\sqrt{\beta_k(t_s) - \zeta_0^2} \right)$

where $\omega_{n0} = \sqrt{k_0/m_0}$ and $\zeta_0 = c_0/\left(2\sqrt{k_0 m_0}\right)$ are the natural frequency and damping ratio of the system at $t_s = 0$. Rearranging Eq. 15, we have

(16)   $\lambda(t_s) = -\zeta(t_s)\,\omega_n(t_s) \pm \mathrm{i}\,\omega_d(t_s)$

where $\omega_n(t_s) = \omega_{n0}\sqrt{\beta_k(t_s)}$ is the evolution of the natural frequency, $\zeta(t_s) = \zeta_0/\sqrt{\beta_k(t_s)}$ is the evolution of the damping factor and $\omega_d(t_s) = \omega_n(t_s)\sqrt{1 - \zeta^2(t_s)}$ is the evolution of the damped natural frequency with the slower time-scale $t_s$. Since most natural frequency extraction techniques extract the damped natural frequency of the system, we assume the data obtained using the sensors, $\hat\omega_d(t_s)$, to be the same. Following Ganguli and Adhikari [11], it can be shown that

(17)   $\beta_k(t_s) = 1 + \frac{\Delta(t_s)}{\omega_{n0}^2}$

where

(18)   $\Delta(t_s) = \hat\omega_d^2(t_s) - \omega_{d0}^2$

The function $\Delta(t_s)$ in Eq. 18 is the distance between the square of the measured damped natural frequency $\hat\omega_d(t_s)$ and the square of its nominal value

(19)   $\omega_{d0} = \omega_{n0}\sqrt{1 - \zeta_0^2}$

Now, given that the initial damped natural frequency of the system, $\omega_{d0}$, is known and we have sensor measurements of $\hat\omega_d(t_s)$, one can easily compute $\Delta(t_s)$ and $\beta_k(t_s)$ at the measurement time-steps by using Eq. 18 and substituting it into Eq. 17. Having said that, it is only fair to assume that the sensor measurements, $\hat\omega_d(t_s)$, will be corrupted by some noise and hence, the estimates of $\beta_k(t_s)$ are also noisy. Nonetheless, by considering these noisy estimates of $\beta_k(t_s)$ to be training outputs corresponding to the inputs $t_s$, we obtain the hyperparameters ($\boldsymbol{\theta}_k$) of the GP described in Eq. 13. In this work, we use Bayesian optimisation [38] to maximise the likelihood of the data. The hyperparameters $\boldsymbol{\theta}_k$ completely describe the digital twin in Eq. 14. One important question associated with GP is the form of the covariance function and the mean function. In this work, we have considered a pool of candidate mean functions and covariance functions and then selected the most suitable mean function and covariance function based on Bayesian model selection. The possible candidates for the mean function and the covariance function are shown in Table 1; a simple illustration of this selection step is sketched after the table. The functional form of these covariance kernels can be found in [35]. For further details of Bayesian model selection, interested readers may refer to [34].

Functions Candidates
Mean function Constant, Linear, Quadratic
Covariance function Exponential, Squared Exponential, Matern 3/2, Matern 5/2, Rational Quadratic, ARD Exponential, ARD Squared Exponential, ARD Matern 3/2, ARD Matern 5/2, ARD Rational Quadratic
Table 1: Candidate functions for mean and covariance functions in GP
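The sketch below illustrates one simple way of carrying out such a model comparison: each candidate covariance function is fitted to the (slow-time, stiffness-estimate) training data and the one with the largest log marginal likelihood is retained. It uses scikit-learn rather than the authors' implementation, covers only a subset of the candidates in Table 1 (no ARD variants or mean-function pool), and ranks models by marginal likelihood as a simplified stand-in for full Bayesian model selection; the data generated here are synthetic placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, Matern, RationalQuadratic,
                                              WhiteKernel, ConstantKernel)

# Candidate covariance functions (a subset of Table 1)
candidates = {
    "Squared Exponential": RBF(length_scale=1.0),
    "Matern 3/2": Matern(length_scale=1.0, nu=1.5),
    "Matern 5/2": Matern(length_scale=1.0, nu=2.5),
    "Rational Quadratic": RationalQuadratic(length_scale=1.0, alpha=1.0),
}

rng = np.random.default_rng(1)
ts = np.linspace(0.0, 1.0, 30)[:, None]                        # normalised slow time
y = np.exp(-2.0 * ts).ravel() + 0.02 * rng.standard_normal(30) # noisy "stiffness" targets

best_name, best_lml = None, -np.inf
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel(1.0) * kernel + WhiteKernel(1e-3),
        normalize_y=True, n_restarts_optimizer=5, random_state=0)
    gp.fit(ts, y)
    if gp.log_marginal_likelihood_value_ > best_lml:
        best_name, best_lml = name, gp.log_marginal_likelihood_value_
print(best_name, best_lml)
```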

4.1.2 Numerical illustration

To illustrate the applicability of the GP based digital twin with only stiffness evolution, an SDOF system with a nominal damping ratio of 0.05 is considered. We assume that the sensor data are transmitted intermittently at a regular time interval. For simulating the variation in the natural frequency, we consider the change in the stiffness property of the system shown in Fig. 1. Figure 2(a) shows the actual change in the damped natural frequency of the system with time. The data-points available for the digital twin are also shown. At this stage, it is important to emphasise that the frequency of data availability depends on several factors such as the bandwidth of the wireless transmission system, the cost of data collection, etc. In this case, we have considered 30 data-points. Figure 2(b) shows the GP based digital twin model obtained with clean data. We observe that the GP based digital twin perfectly captures the time evolution of $\beta_k(t_s)$ with a high degree of confidence. However, we would like to emphasise that this is an idealised scenario, as it is almost impossible to have measurements with no noise.

(a) Changes in the damped natural frequency with time
(b) GP assisted digital twin with clean data
Fig. 2: Changes in the (damped) natural frequency and the GP based digital twin obtained using the exact data, plotted as a function of the normalised slow time $t_s$. Using Bayesian model selection, the ‘Linear’ basis function and the ‘ARD Rational Quadratic’ covariance function are selected, with the hyperparameters estimated by Bayesian optimisation.

Now we consider a more realistic case where the data collected are corrupted by noise. For illustration purposes, zero-mean Gaussian white noise with three different standard deviations is considered. Figures 3(a)–(c) show the GP based digital twin model corresponding to the three cases. For the lowest noise level, the GP based digital twin successfully captures the evolution of stiffness with time. With the increase in the noise level, a slight deterioration in the performance of the GP based digital twin is observed. Nonetheless, even at the highest noise level, the GP based digital twin captures the time-evolution with high accuracy. An additional advantage of the GP based digital twin resides in its capability to capture the uncertainty arising due to limited data and noise. This is illustrated by the shaded plot in Fig. 3. We observe that with an increase in the noise level, the uncertainty also increases.

To illustrate the importance of having more data, a case with 100 data-points is shown in Fig. 3(d). We observe that with the increase in the number of observations, the GP based digital twin can capture the evolution of stiffness in a more accurate manner even for the highest level of noise in the data.

(a) 30 observations, lowest noise level
(b) 30 observations, intermediate noise level
(c) 30 observations, highest noise level
(d) 100 observations, highest noise level
Fig. 3: GP based digital twin constructed with noisy data, plotted as a function of the normalised slow time $t_s$. For (a)–(c), Bayesian model selection selects the ‘Constant’ basis function, with the ‘ARD Rational Quadratic’, ‘ARD Squared Exponential’ and ‘ARD Matern 3/2’ covariance kernels selected for cases (a) to (c), respectively. For (d), Bayesian model selection yields the ‘Constant’ basis and the ‘ARD Matern 3/2’ covariance kernel.
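The qualitative effect of the sampling rate and the noise level can be reproduced with a few lines of code. The sketch below refits a GP to a synthetic decaying 'stiffness' signal for different numbers of observations and noise standard deviations and reports the average predictive standard deviation; the signal, noise levels and kernel are assumptions of this sketch, not the values used for Fig. 3.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def mean_predictive_std(n_obs, noise_std, rng):
    """Fit a GP to n_obs noisy samples of a synthetic stiffness-ratio signal
    and return the average predictive standard deviation over a fine grid."""
    ts = np.linspace(0.0, 1.0, n_obs)[:, None]
    truth = np.exp(-2.0 * ts).ravel() * (1.0 + 0.1 * np.sin(20.0 * ts).ravel())
    y = truth + noise_std * rng.standard_normal(n_obs)
    gp = GaussianProcessRegressor(Matern(nu=1.5) + WhiteKernel(noise_std**2),
                                  normalize_y=True, random_state=0)
    gp.fit(ts, y)
    _, std = gp.predict(np.linspace(0.0, 1.0, 200)[:, None], return_std=True)
    return std.mean()

rng = np.random.default_rng(2)
for n_obs, noise_std in [(30, 0.01), (30, 0.05), (100, 0.05)]:
    print(n_obs, noise_std, mean_predictive_std(n_obs, noise_std, rng))
```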

4.2 Digital twin via mass evolution

4.2.1 Formulation

In the second case, we consider that the time-evolution of the natural frequency is due to the change in the mass of the system. Subsequently, the equation of motion of the digital twin can be represented as

(20)   $m(t_s)\,\frac{\partial^2 u(t,t_s)}{\partial t^2} + c_0\,\frac{\partial u(t,t_s)}{\partial t} + k_0\,u(t,t_s) = f(t,t_s)$

where

(21)   $m(t_s) = m_0\,\beta_m(t_s)$

Similar to case 1, we assume

(22)   $\beta_m(t_s) \sim \mathcal{GP}\left( \mu_m(t_s), \kappa_m(t_s, t_s'; \boldsymbol{\theta}_m) \right)$

substitute Eqs. 21 and 22 into Eq. 20 and solve it to obtain the natural frequency of the system

(23)   $\lambda(t_s) = -\zeta(t_s)\,\omega_n(t_s) \pm \mathrm{i}\,\omega_d(t_s)$

where

(24a)   $\omega_n(t_s) = \frac{\omega_{n0}}{\sqrt{\beta_m(t_s)}}$
(24b)   $\zeta(t_s) = \frac{\zeta_0}{\sqrt{\beta_m(t_s)}}$
(24c)   $\omega_d(t_s) = \omega_n(t_s)\sqrt{1 - \zeta^2(t_s)}$

are the evolution of natural frequency, damping ratio and damped natural frequency of the digital twin. Again, following a procedure similar to case 1, we obtain

(25)   $\beta_m(t_s) = \frac{\omega_{n0}^2}{2\,\hat\omega_d^2(t_s)}\left( 1 + \sqrt{1 - \frac{4\,\zeta_0^2\,\hat\omega_d^2(t_s)}{\omega_{n0}^2}} \right), \qquad \hat\omega_d^2(t_s) = \omega_{d0}^2 + \Delta_m(t_s)$

where $\Delta_m(t_s) = \hat\omega_d^2(t_s) - \omega_{d0}^2$ is the equivalent of $\Delta(t_s)$ in Eq. 18 for the mass evolution case. Similar to case 1, $\beta_m(t_s)$ obtained using Eq. 25 will be noisy in nature. We consider these noisy estimates of $\beta_m(t_s)$ to be the training outputs and $t_s$ to be the training inputs for estimating the hyperparameters, $\boldsymbol{\theta}_m$, of the GP defined in Eq. 22. This is achieved by maximising the likelihood of the data. For determining the best mean function and covariance function, Bayesian model selection as before has been adopted. Once the hyperparameters, $\boldsymbol{\theta}_m$, have been estimated, the digital twin model defined in Eq. 20 is completely defined; a toy end-to-end sketch of this pipeline is given below.
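In the sketch below, measured damped natural frequencies are inverted to noisy training targets for $m(t_s)/m_0$ (here using the small-damping approximation $\beta_m \approx (\omega_{d0}/\hat\omega_d)^2$ rather than the exact closed form of Eq. 25) and a GP is then fitted to those targets. The nominal frequency, the stand-in mass ratio, the noise level and the kernel are all assumptions of this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(3)
ts = np.linspace(0.0, 1.0, 150)[:, None]                       # normalised slow time
omega_d0 = 2.0 * np.pi * 5.0                                   # assumed nominal damped frequency
beta_m_true = 1.0 + 0.15 * np.sin(8.0 * np.pi * ts).ravel()    # stand-in mass ratio m(ts)/m0

# Hypothetical noisy damped-frequency measurements at the sensor time-steps
omega_d_meas = omega_d0 / np.sqrt(beta_m_true) + 0.05 * rng.standard_normal(ts.size)

# Invert the measurements to noisy training targets for beta_m(ts),
# neglecting the small damping correction retained in Eq. 25
beta_m_est = (omega_d0 / omega_d_meas) ** 2

gp = GaussianProcessRegressor(Matern(nu=2.5) + WhiteKernel(1e-3),
                              normalize_y=True, random_state=0)
gp.fit(ts, beta_m_est)
mean, std = gp.predict(ts, return_std=True)   # digital-twin estimate of m(ts)/m0 with uncertainty
```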

4.2.2 Numerical illustration

To illustrate the applicability of the GP based digital twin model for mass evolution, we revisit the SDOF example considered for stiffness evolution. For simulating the variation in the natural frequency, we consider the change in the mass of the system shown in Fig. 1. Figure 4(a) shows the actual change in the damped natural frequency of the system with time. The data-points available for the digital twin are also shown. Note that the evolution of mass with time is more complex and hence, unlike the stiffness evolution case, only 30 observations are inadequate. Therefore, we have assumed access to more data points in this case. Figure 4(b) shows the GP based digital twin constructed from clean data. It is observed that the GP based digital twin is able to capture the time evolution of mass with high accuracy. The uncertainty due to limited data is also adequately captured. However, as already stated, this is an unrealistic case as, in a practical scenario, the data collected will always be corrupted by some form of noise.

(a) Changes in the (damped) natural frequency
(b) GP based digital twin
Fig. 4: Changes in the (damped) natural frequency and the GP based digital twin obtained using the exact data, plotted as a function of the normalised slow time $t_s$. Using Bayesian model selection, the ‘Constant’ basis function and the ‘ARD Matern 5/2’ covariance function are selected, with the hyperparameters estimated by Bayesian optimisation.

Next, we consider a more realistic case where the data collected are corrupted by noise. Figure 5 shows the GP based digital twin model trained with 100 noisy observations. Some discrepancy between the GP based digital twin and the physical twin is observed. For improved performance, we increase the number of observations to 150. The GP based digital twins corresponding to the three noise levels are shown in Figs. 6(a)–(c). In this case, the GP based digital twin yields accurate results. Lastly, Fig. 7 shows the GP based digital twin with 200 samples. This yields the best result, with an accurate time evolution of mass and uncertainty quantification. Overall, this example illustrates the importance of the sampling rate and of denoising of data for digital twin technology.

Fig. 5: GP based digital twin obtained using 100 noisy data points, plotted as a function of the normalised slow time $t_s$.
(a) lowest noise level
(b) intermediate noise level
(c) highest noise level
Fig. 6: GP based digital twin obtained using 150 noisy data points, plotted as a function of the normalised slow time $t_s$. For (a)–(c), Bayesian model selection selects the ‘Constant’ basis and the ‘ARD Matern 5/2’ covariance kernel.


Fig. 7: GP based digital twin obtained using 200 noisy data points, plotted as a function of the normalised slow time $t_s$. Bayesian model selection yields the ‘Constant’ basis and the ‘ARD Matern 5/2’ covariance kernel.

4.3 Digital twin via mass and stiffness evolution

4.3.1 Formulation

As the last case, we consider simultaneous evolution of the mass and stiffness. The governing differential equation for this case is represented as

(26)   $m_0\,\beta_m(t_s)\,\frac{\partial^2 u(t,t_s)}{\partial t^2} + c_0\,\frac{\partial u(t,t_s)}{\partial t} + k_0\,\beta_k(t_s)\,u(t,t_s) = f(t,t_s)$

where $k(t_s)$ and $m(t_s)$ are represented by Eqs. 12 and 21, respectively. We represent $\beta_k(t_s)$ and $\beta_m(t_s)$ by a multi-output GP

(27)   $\begin{bmatrix} \beta_k(t_s) \\ \beta_m(t_s) \end{bmatrix} \sim \mathcal{GP}\left( \boldsymbol{\mu}(t_s), \boldsymbol{K}(t_s, t_s'; \boldsymbol{\theta}) \right)$

Substituting Eqs. 27, 12 and 21 in Eq. 26 and solving, we obtain

(28)   $\lambda(t_s) = -\zeta(t_s)\,\omega_n(t_s) \pm \mathrm{i}\,\omega_d(t_s)$

where

(29a)   $\omega_n(t_s) = \omega_{n0}\sqrt{\frac{\beta_k(t_s)}{\beta_m(t_s)}}$
(29b)   $\zeta(t_s) = \frac{\zeta_0}{\sqrt{\beta_k(t_s)\,\beta_m(t_s)}}$
(29c)   $\omega_d(t_s) = \omega_n(t_s)\sqrt{1 - \zeta^2(t_s)}$

Note that, unlike the previous two cases, it is not possible to compute the hyperparameters of the GPs based on measurements of the damped natural frequencies only. Therefore, we take a different route by considering the real and imaginary parts in Eq. 28 separately. Following Ganguli and Adhikari [11], it can be shown that

(30a)   $\beta_m(t_s) = -\frac{\zeta_0\,\omega_{n0}}{\hat\lambda_R(t_s)}$
(30b)   $\beta_k(t_s) = \beta_m(t_s)\,\frac{\hat\lambda_R^2(t_s) + \hat\lambda_I^2(t_s)}{\omega_{n0}^2}$

where $\hat\lambda_R(t_s)$ and $\hat\lambda_I(t_s)$ are the measured real and imaginary parts of the complex natural frequency in Eq. 28 and, as before, the corresponding distance measures with respect to their nominal values are

(31)   $\Delta_R(t_s) = \hat\lambda_R(t_s) + \zeta_0\,\omega_{n0}, \qquad \Delta_I(t_s) = \hat\lambda_I(t_s) - \omega_{d0}$

We emphasise that Eq. 30 provides estimates of $\beta_m(t_s)$ and $\beta_k(t_s)$ based on noisy measurements and hence, these estimates are also noisy. Using these noisy estimates as training outputs and $t_s$ as the training inputs, we estimate the hyperparameters, $\boldsymbol{\theta}$, of the multi-output GP defined in Eq. 27. This is achieved by maximising the likelihood of the data as described in Section 3. For determining the best mean function and covariance function, Bayesian model selection is adopted as before. The hyperparameters, once estimated, completely define the digital twin described in Eq. 26; a toy sketch of this procedure on synthetic data is given below.
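In the sketch below, noisy measurements of the real and imaginary parts of the complex natural frequency are inverted via Eq. 30 to obtain training targets for $\beta_m(t_s)$ and $\beta_k(t_s)$, and two independent GPs are then fitted as a simplified stand-in for the multi-output GP of Eq. 27. The nominal values, stand-in evolution functions, noise level and kernel are assumptions of this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Hypothetical noisy measurements of the real and imaginary parts of
# lambda(ts) = -zeta(ts)*wn(ts) + 1j*wd(ts)
rng = np.random.default_rng(4)
ts = np.linspace(0.0, 1.0, 150)[:, None]
wn0, zeta0 = 2.0 * np.pi * 5.0, 0.05                           # assumed nominal values
beta_m = 1.0 + 0.15 * np.sin(8.0 * np.pi * ts).ravel()         # stand-in mass ratio
beta_k = np.exp(-0.5 * ts).ravel()                             # stand-in stiffness ratio
wn = wn0 * np.sqrt(beta_k / beta_m)
re = -zeta0 * wn0 / beta_m + 0.01 * rng.standard_normal(ts.size)              # Re = -c0/(2 m)
im = np.sqrt(wn**2 - (zeta0 * wn0 / beta_m) ** 2) + 0.01 * rng.standard_normal(ts.size)

# Invert the two measured quantities to training targets (Eq. 30)
beta_m_est = zeta0 * wn0 / np.abs(re)                          # from the real part
beta_k_est = beta_m_est * (re**2 + im**2) / wn0**2             # since wn^2 = Re^2 + Im^2

# Two independent GPs as a simplified stand-in for a multi-output GP
for target in (beta_m_est, beta_k_est):
    gp = GaussianProcessRegressor(Matern(nu=2.5) + WhiteKernel(1e-3),
                                  normalize_y=True, random_state=0)
    gp.fit(ts, target)
```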

4.3.2 Numerical illustration

We revisit the previously studied SDOF system to illustrate the applicability of the GP based digital twin for simultaneous mass and stiffness evolution. For simulating the variation in the natural frequency, we consider the changes in the mass and stiffness of the system shown in Fig. 1. Figure 8 shows the actual change in the real and imaginary parts of the natural frequency of the system over time. Similar to the previous cases, the damping ratio is assumed to be 0.05. Figure 9 shows the GP based digital twin constructed from clean data. The trained GP based digital twin is able to capture the time evolution of mass and stiffness. However, data with no noise is an unrealistic scenario as, even with the most advanced sensors, the data collected will always be noisy [39].

(a) Changes in real part of natural frequency
(b) Changes in imaginary part of natural frequency
Fig. 8: Normalised changes in the real and imaginary parts of the natural frequency as a function of the normalised slow time $t_s$.
(a) mass function
(b) stiffness function
Fig. 9: GP based digital twin obtained from exact data via simultaneous mass and stiffness evolution, as a function of the normalised slow time $t_s$. Bayesian model selection yields the ‘Linear’ basis and the ‘Squared Exponential’ covariance kernel.

Next, we consider a more realistic case where the sensor data are corrupted by noise. Similar to the previous two cases, we consider three noise levels. Figure 10 shows the mass and stiffness evolution of the GP based digital twin trained with 37 noisy observations. The observations are corrupted by zero-mean Gaussian white noise. It is observed that the GP based digital twin is able to capture the time evolution of stiffness with high accuracy. However, it fails to capture the mass evolution in an adequate manner. This is expected, as the mass evolution function is quite complex and it is unreasonable to expect that, with so little data, the GP will be able to track the mass evolution. Note that the uncertainty due to limited and noisy data is well captured and hence, the true solution resides within the shaded region.

(a) mass function
(b) stiffness function
Fig. 10: GP based digital twin obtained from 37 noisy data points via simultaneous mass and stiffness evolution, as a function of the normalised slow time $t_s$. Bayesian model selection yields the ‘Linear’ basis and the ‘Squared Exponential’ covariance kernel.

Figures 11 and 12 show the evolution of mass and stiffness of the GP based digital twin trained with 150 noisy observations. For the mass evolution, results corresponding to all three noise levels are shown. With the increase in the number of observations, we observe a dramatic improvement in the GP assisted digital twin. The evolution of stiffness is shown only for one of the noise levels, and the results obtained are highly accurate. Lastly, the evolution of mass with 200 noisy observations is shown in Fig. 13. As expected, this yields the best results. The evolution of stiffness for 200 observations is found to be similar to that with 150 samples and hence is not shown in this paper.

(a) lowest noise level
(b) intermediate noise level
(c) highest noise level
Fig. 11: Mass evolution as a function of the normalised slow time $t_s$ for the GP based digital twin (simultaneous mass and stiffness evolution) obtained from 150 noisy data points. Three increasing noise levels are considered. Bayesian model selection yields the ‘Linear’ basis and the ‘ARD Matern 5/2’ covariance kernel.
Fig. 12: Stiffness evolution as a function of the normalised slow time $t_s$ for the GP based digital twin (simultaneous mass and stiffness evolution) trained with 150 noisy data points. Bayesian model selection yields the ‘Linear’ basis and the ‘ARD Matern 5/2’ covariance kernel.
Fig. 13: Mass evolution as a function of the normalised slow time $t_s$ for the GP based digital twin (simultaneous mass and stiffness evolution) trained with 200 noisy data points. Bayesian model selection yields the ‘Linear’ basis and the ‘ARD Matern 5/2’ covariance kernel.

5 Discussion

Although, in theory, a digital twin of a physical system can be realised in several ways, most of the existing works on digital twins have focused on the broader conceptual aspects. In this paper, we take a different path and focus on a specific case of a structural dynamical system. More specifically, we have considered surrogate models, such as the GP, with a single-degree-of-freedom system. The key ideas proposed in this paper include:

  • We introduced the concept of a surrogate model within the digital twin technology. In particular, we advocated the use of GP within the digital twin technology. The GP was trained on observations made on the slow time-scale and was used for predicting the model parameters at intermediate and future points of that time-scale, with the fast time-scale dynamics handled by closed-form physics-based expressions.

  • The importance of collecting more data during the system life-cycle is illustrated in this study. A dramatic improvement in the performance of the digital twin was observed with an increase in the number of data points collected. In other words, over the same window of time, a higher sampling rate is needed.

  • The importance of cleaner data in the context of a digital twin is illustrated in this paper. With less noise in the data, the digital twin can quickly track the time evolution of the model parameters. Digital signal processing algorithms can be used for data cleaning [40, 41].

  • GP, being a Bayesian surrogate model, can quantify the uncertainty in the system due to limited data and noise. These uncertainties can be used for deciding whether there is a need to increase the frequency of data collection, i.e. to increase the measurement sampling rate; a minimal decision-rule sketch is given after this list.

  • Overall, GP was able to capture both mass and stiffness variation from limited noisy data; although, for capturing the time evolution of mass, more observations are needed as compared to the stiffness evolution case.
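As a toy illustration of the last two points, the snippet below encodes a crude decision rule: if the GP's predictive standard deviation exceeds a tolerance anywhere over the window of interest, a higher measurement sampling rate is requested. The rule, tolerance and scaling factor are illustrative assumptions, not part of the paper's methodology.

```python
import numpy as np

def recommend_sampling_rate(pred_std, current_rate_hz, std_tol=0.05, factor=2.0):
    """Illustrative decision rule: increase the sampling rate whenever the GP's
    predictive standard deviation exceeds the tolerance anywhere."""
    return current_rate_hz * factor if np.max(pred_std) > std_tol else current_rate_hz

# e.g. pred_std taken from gp.predict(..., return_std=True)
print(recommend_sampling_rate(np.array([0.01, 0.08, 0.03]), current_rate_hz=1.0))
```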

The system studied here is a simple single-degree-of-freedom dynamical system governed by a second-order ordinary differential equation. The same framework applies, as it is, to other physical systems (e.g., simple electrical systems involving a resistor, a capacitor and an inductor) governed by this kind of equation. The framework presented here can be extended for more rigorous investigations encompassing a wider variety of practical problems. Some of the future possibilities are highlighted below:

  1. Surrogate assisted digital twin for big data: In this paper, natural frequency measurements are used for establishing the GP based digital twin. However, it will be more useful if the GP based digital twin can be trained directly from the time-history of measured responses. This is essentially a big-data problem and vanilla GP is unable to tackle such problems. More advanced versions of GP, such as the sparse GP [42] and convolution GP [43] may be explored in that case.

  2. Surrogate based digital twin for continuum systems: For many engineering problems, a continuum model represented by partial differential equations is the preferred choice. Developing a surrogate-based digital twin for such a system will be of great use. This is primarily a sparse-data scenario, as sensor responses will only be available at a few spatial locations. Methods such as convolutional neural networks [44] can be used for such systems.

  3. Digital twin for systems with unknown/imperfect physics: In the literature, there exists a wide range of problems for which the governing physical law is not well defined. It will be interesting and extremely useful to learn/develop the digital twin purely based on data. Works on discovering physics from data by using machine learning can be found in the literature [45]. Genetic programming can also be used to fit mathematical functions to data [46].

  4. Surrogate based digital twin models with machine learning: Over the past few years, the machine learning community has witnessed rapid progress with the development of techniques such as deep neural networks [47] and convolutional neural networks [44], among others. Using these techniques within the digital twin framework can possibly push this technology to new heights. The required sampling rate and data quality could be reduced by using machine learning methods.

  5. Multiple-degrees-of-freedom (MDOF) digital twins using surrogate models: In this paper, we focused on a single-degree-of-freedom (SDOF) model, which can be considered a simple idealisation of complex multiple-degrees-of-freedom (MDOF) systems. However, for more effective digital twins with realistic predictive capabilities, MDOF digital twins should be considered. To that end, it is necessary to develop surrogate-based MDOF digital twins.

  6. Surrogate models for nonlinear digital twins: The model considered in this paper is a linear ordinary differential equation. However, many physical systems exhibit nonlinear behaviour. A classical example of a nonlinear system is the Duffing oscillator [48]. For such a non-linear system, it is unlikely that a closed-form solution will exist and hence, developing a surrogate-based digital twin model will be immensely useful.

  7. Forecasting using surrogate models for digital twins: One of the primary tasks of a digital twin is to provide a future prediction. With the physics-based digital twin proposed previously [11], it is not possible to predict the future responses. The GP based digital twin model, on the contrary, can predict the short term future response. It is necessary to develop surrogate-based digital twin capable of forecasting the long-term response of the system.

  8. Predicting extremely low-probability catastrophic events for digital twins with surrogate models:

    To date, the application of digital twin technology has mostly been limited to maintenance, prognosis and health monitoring. Other possible applications of the digital twin involve computing the probability of failure, rare event probabilities and extreme events.

  9. High-dimensional surrogate models for multi time-scale digital twins: In this paper, we assumed one time-scale for the evolution of the digital twin. However, there is no physical or mathematical reason why this must be restricted to only one time-scale. It is possible that the parameters may evolve at different time-scales. Such systems are known as multi-scale dynamical systems [49]. Developing surrogate-based digital twins for such multi-scale dynamical systems needs further investigation.

  10. Hybrid surrogate models for digital twins: In this paper, only one type of surrogate model, namely the Gaussian process emulator, was used. Several types of surrogate models have been employed to solve a wide range of complex problems with different numbers of variables [24, 46]. It is well known that some surrogate models perform better than others in certain situations. It is therefore perfectly plausible to employ multiple surrogate models simultaneously, taking advantage of their relative strengths. Such hybrid surrogate digital twins are expected to perform better than single-surrogate digital twins.

6 Conclusions

Surrogate models have been developed over the past four decades with increasing computational efficiency, sophistication, variety, depth and breadth. They are a class of machine learning methods which thrive on the availability of data and superior computing power. As digital twins are also expected to exploit data and computational methods, there is a compelling case for the use of surrogate models in this context. Motivated by this synergy, we have explored the possibility of using a particular surrogate model, namely the Gaussian process (GP) emulator, for the digital twin of a damped single-degree-of-freedom dynamic system. The proposed digital twin evolves at a time-scale which is much slower than the dynamics of the system. This makes it possible to identify crucial system parameters as a function of the ‘slow time’ from continuously measured data. Closed-form expressions derived by considering the dynamics of the system in the fast time-scale have been employed. The Gaussian process emulator is employed in the slow time-scale, where the impact of a lack of data (sparse data) and noise in the data have been explored in detail.

The results arising from applying the Gaussian process emulator show that sparsity of data may prevent the GP from accurately capturing the evolution of mass and stiffness. This is particularly true for the mass evolution case, where a higher sampling rate is necessary for accurately capturing the time evolution of mass. It is to be noted that the uncertainty in the GP based digital twin model results from both sparsity in the data and noise in the measurements. Consequently, only increasing the number of observations may not necessarily reduce the uncertainty in the system. On the other hand, collecting cleaner data (i.e., data with a lower noise level) helps the GP track the evolution of mass and stiffness more accurately. Nonetheless, even for the cases with a relatively larger noise level, the GP is found to yield accurate results. Moreover, the GP also captures the uncertainty due to limited and sparse data.

Although only the Gaussian process emulator with single-degree-of-freedom dynamic systems is considered, several conceptual extensions that follow directly from this work are described in detail. The approach proposed here exploits closed-form expressions based on the physics of the system and a surrogate model for the measured data. The overall framework provides a paradigm for a hybrid physics-based and data-driven digital twin approach. The physics-based approach is related to the ‘fast time’, while the data-driven approach operates on the ‘slow time’. The separation of scientific approaches based on the inherently different time-scales will allow the integration of multi-disciplinary approaches in the future development of digital twin technology.

Acknowledgements

RG acknowledges the financial support from The Royal Academy of Engineering through the award of a Distinguished Visiting Fellowship, grant number DVFS21819/9/5. SA acknowledges the financial support from the Engineering and Physical Sciences Research Council (EPSRC) through programme grant EP/R006768/1.

References

  • [1] Vinicius Souza, Robson Cruz, Walmir Silva, Sidney Lins, and Vicente Lucena. A digital twin architecture based on the industrial internet of things technologies. In 2019 IEEE International Conference on Consumer Electronics (ICCE), pages 1–2. IEEE, 2019.
  • [2] Pedro Daniel Urbina Coronado, Roby Lynn, Wafa Louhichi, Mahmoud Parto, Ethan Wescoat, and Thomas Kurfess. Part data integration in the shop floor digital twin: Mobile and cloud technologies to enable a manufacturing execution system. Journal of Manufacturing Systems, 48:25–33, 2018.
  • [3] E.J. Tuegel, A.R. Ingraffea, T.G. Eason, and S.M. Spottswood. Reengineering aircraft structural life prediction using a digital twin. International Journal of Aerospace Engineering, 2011(Art. 154768), 2011.
  • [4] Fei Tao, Fangyuan Sui, Ang Liu, Qinglin Qi, Meng Zhang, Boyang Song, Zirong Guo, Stephen C-Y Lu, and AYC Nee. Digital twin-driven product design framework. International Journal of Production Research, 57(12):3935–3953, 2019.
  • [5] F. Tao, J.F. Cheng, Q.L. Qi, M. Zhang, H. Zhang, and F.Y. Sui. Digital twin-driven product design, manufacturing and service with big data. International Journal of Advanced Manufacturing Technology, 2018(94), 2017.
  • [6] S. Haag and R. Anderl. Digital twin – proof of concept. Manufacturing Letters, 15:64–66, 2018.
  • [7] Harry Millwater, Juan Ocampo, and Nathan Crosby. Probabilistic methods for risk assessment of airframe digital twin structures. Engineering Fracture Mechanics, 221:106674, 2019.
  • [8] Mike Zhou, Jianfeng Yan, and Donghao Feng. Digital twin framework and its application to power grid online analysis. CSEE Journal of Power and Energy Systems, 5(3):391–398, 2019.
  • [9] F. Tao, H. Zhang, A. Liu, and A.Y.C. Nee. Digital twin in industry: state of the art. IEEE Transactions on Industrial Informatics, 15(4):2405–2415, 2018.
  • [10] C. Li, S. Mahadevan, Y. Ling, S. Choze, and L. Wang. Dynamic bayesian network for aircraft health monitoring digital twin. AIAA Journal, 55(3):930–941, 2017.
  • [11] R. Ganguli and S. Adhikari. The digital twin of discrete dynamic systems: Initial approaches and future challenges. Applied Mathematical Modelling, 77:1110–1128, 2020.
  • [12] D. Guivarch, E. Mermoz, Y. Marino, and M. Sartor. Creation of helicopter dynamic systems digital twin using multibody simulations. CIRP Annals, 68(1):133–136, 2019.
  • [13] S. Chakraborty and R. Chowdhury. Polynomial correlated function expansion. IGI global, 2016.
  • [14] Bruno Sudret. Global sensitivity analysis using polynomial chaos expansions. Reliability Engineering & System Safety, 93(7):964–979, 2008.
  • [15] Dongbin Xiu and George Em Karniadakis. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing, 24(2):619–644, 2002.
  • [16] Wei Zhao, Tao Tao, Enrico Zio, and Wenbin Wang. A novel hybrid method of parameters tuning in support vector regression for reliability prediction: particle swarm optimization combined with analytical selection. IEEE Transactions on Reliability, pages 1–13, 2016.
  • [17] S. R. Gunn. Support vector machines for classification and regression. Technical report, Image Speech and Intelligent Systems Research Group, University of Southampton, Southampton, U.K., 1997.
  • [18] J-M. Bourinet, F Deheeger, and M Lemaire. Assessing small failure probabilities by combined subset simulation and support vector machines. Structural Safety, 33(6):343–353, 2011.
  • [19] J E Hurtado and D A Alvarez. Neural-network-based reliability analysis: a comparative study. Computer Methods in Applied Mechanics and Engineering, 191(1-2):113–132, 2001.
  • [20] Hongzhe Dai, Hao Zhang, and Wei Wang. A multiwavelet neural network-based response surface method for structural reliability analysis. Computer-Aided Civil and Infrastructure Engineering, 30(2):151–162, 2015.
  • [21] R. Rojas. Neural Networks: A Systematic Introduction. Springer, Berlin, Heidelberg, 1996.
  • [22] Ilias Bilionis, Nicholas Zabaras, Bledar A. Konomi, and Guang Lin. Multi-output separable Gaussian process: Towards an efficient, fully Bayesian paradigm for uncertainty quantification. Journal of Computational Physics, 241:212–239, 2013.
  • [23] Ilias Bilionis and Nicholas Zabaras. Multi-output local Gaussian process regression: Applications to uncertainty quantification. Journal of Computational Physics, 231(17):5718–5746, 2012.
  • [24] Souvik Chakraborty and Rajib Chowdhury. Graph-theoretic-approach-assisted Gaussian process for nonlinear stochastic dynamic analysis under generalized loading. Journal of Engineering Mechanics, 145(12):04019105, 2019.
  • [25] F Wu and W X Zhong. A modified stochastic perturbation method for stochastic hyperbolic heat conduction problems. Computer Methods in Applied Mechanics and Engineering, 305:739–758, 2016.
  • [26] F. A. DiazDelaO and S. Adhikari. Gaussian process emulators for the stochastic finite element method. International Journal of Numerical Methods in Engineering, 87(6):521–540, 2011.
  • [27] Souvik Chakraborty and Rajib Chowdhury. Hybrid framework for the estimation of rare failure event probability. Journal of Engineering Mechanics, 143(5):04017010, 2017.
  • [28] V. Dubourg. Adaptive surrogate models for reliability analysis and reliability-based design optimization. PhD thesis, Universite Blaise Pascal, Clermont-Ferrand, France, 2011.
  • [29] Somdatta Goswami, Souvik Chakraborty, Rajib Chowdhury, and Timon Rabczuk. Threshold shift method for reliability-based design optimization. Structural and Multidisciplinary Optimization, 2019.
  • [30] Sharif Rahman and Dong Wei. Reliability-based Design Optimization by a Univariate Decomposition Method. In 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference, pages AIAA 2010–9037:1 – AIAA 2010–9037:14, Reston, Virginia, 2010. American Institute of Aeronautics and Astronautics.
  • [31] Michael G Kapteyn, Karen E Willcox, and Andy B Philpott. Distributionally robust optimization for engineering design under uncertainty. International Journal for Numerical Methods in Engineering, 2019.
  • [32] F. A. DiazDelaO and S. Adhikari. Structural dynamic analysis using gaussian process emulators. Engineering Computations, 27(5):580–605, 2010.
  • [33] F. A. DiazDelaO and S. Adhikari. Bayesian assimilation of multi-fidelity finite element models. Computers and Structures, 92-93(2):206–215, 2012.
  • [34] Kevin P Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.
  • [35] Christopher KI Williams and Carl Edward Rasmussen. Gaussian processes for machine learning, volume 2(3). MIT press Cambridge, MA, 2006.
  • [36] Sandeep Sony, Shea Laventure, and Ayan Sadhu. A literature review of next-generation smart sensing technology in structural health monitoring. Structural Control and Health Monitoring, 26(3):e2321, 2019.
  • [37] Qiang Wang, Jun Huang, Quan Liu, and Zude Zhou. Dynamic strain measurement of hydraulic system pipeline using fibre bragg grating sensors. Advances in Mechanical Engineering, 8(4):1687814016645069, 2016.
  • [38] Martin Pelikan, David E Goldberg, and Erick Cantú-Paz. BOA: The Bayesian optimization algorithm. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1, pages 525–532. Morgan Kaufmann Publishers Inc., 1999.
  • [39] Wei Zhang, Gaoliang Peng, Chuanhao Li, Yuanhang Chen, and Zhujun Zhang. A new deep learning model for fault diagnosis with good anti-noise and domain adaptation ability on raw vibration signals. Sensors, 17(2):425, 2017.
  • [40] Niranjan Roy and Ranjan Ganguli. Helicopter rotor blade frequency evolution with damage growth and signal processing. Journal of Sound and vibration, 283(3-5):821–851, 2005.
  • [41] Ranjan Ganguli. Noise and outlier removal from jet engine health signals using weighted FIR median hybrid filters. Mechanical Systems and Signal Processing, 16(6):967–978, 2002.
  • [42] Edward Snelson and Zoubin Ghahramani. Sparse gaussian processes using pseudo-inputs. In Advances in neural information processing systems, pages 1257–1264, 2006.
  • [43] Mauricio A Álvarez and Neil D Lawrence. Computationally efficient convolved multiple output gaussian processes. Journal of Machine Learning Research, 12(May):1459–1500, 2011.
  • [44] Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.
  • [45] Hao Xu, Haibin Chang, and Dongxiao Zhang. Dl-pde: Deep-learning based data-driven discovery of partial differential equations from discrete and noisy data. arXiv preprint arXiv:1908.04463, 2019.
  • [46] Anuj Pratap Singh, V Mani, and Ranjan Ganguli. Genetic programming metamodel for rotating beams. Computer Modeling in Engineering and Sciences, 21(2):133, 2007.
  • [47] Somdatta Goswami, Cosmin Anitescu, Souvik Chakraborty, and Timon Rabczuk. Transfer learning enhanced physics informed neural network for phase-field modeling of fracture. arXiv preprint arXiv:1907.02531, 2019.
  • [48] A. H. Nayfeh. Introduction to Perturbation Techniques. John Wiley & Sons, New York, NY, 1993.
  • [49] S. Chakraborty and N. Zabaras. Efficient data-driven reduced-order models for high-dimensional multiscale dynamical systems. Computer Physics Communications, 2018.