Fatigue failure refers to the malfunction of a mechanical component due to the weakening of its material under oscillatory loading below the ultimate tensile strength of the component. Fatigue is considered one of the main causes of mechanical failure in machine components, aerospace systems, and offshore structures jimenez2020fatigue. The failure results in localized and progressive damage and in crack growth. Fatigue damage is cumulative over time, and even though it can be assessed by non-destructive tests tykot2016using, it can still take place at unexpected times, leading to hazardous consequences. Notable examples include the capsizing of the Alexander L. Kielland semi-submersible drilling rig in March 1980 Almar-Naess1984; Moan1985, and the crash of El-Al Flight 1862 in October 1992 wanhill1996fatigue, which caused the deaths of dozens of workers, passengers, and crew members. Hence, reliable estimation of the current state of the cumulative damage and prediction of the time to failure (TtF) are essential for structural health monitoring (SHM) and failure prognosis purposes Kulkarni2008; Cho2012.
Fatigue damage under oscillatory loading is usually estimated by either time-domain or frequency-domain approaches. The most widely-used time-domain approach is based on the rainflow counting method for decomposing the stochastic signal into its underlying amplitudes and corresponding numbers of cycles, followed by application of the linear cumulative Miner's rule, also known as the Palmgren-Miner linear damage hypothesis birnbaum1968probabilistic; shimokawa1980statistical, for estimating the resulting cumulative damage, as shown in Eq. (1). Frequency-domain methods, such as those introduced by Dirlik dirlik1985application and Petrucci and Zuccarello petrucci2004fatigue, use probability density functions with parameters that are tuned with respect to rainflow counting and Miner's rule sutherland1999fatigue; lee2005fatigue; dowling2012mechanical. Based on numerous experimental studies, rainflow counting and Miner's rule are considered more reliable and accurate than the frequency-domain methods. However, Miner's rule is considered an over-simplistic failure criterion that overlooks failure mechanisms at the material level lee2005fatigue. Moreover, its accuracy deteriorates drastically as the component is exposed to broader-banded stochastic excitations, leading to substantial disagreements between the theoretical and experimental results reported in the literature antoniou2018life.
Here D(t) is the estimated cumulative damage fraction. According to Miner's rule, failure occurs when D(T) = 1, where T is the estimated failure time (FT). Variable k is the number of amplitudes considered in the rainflow counting method. Variable S_i is the stress amplitude that leads to failure after N_i loading cycles, and n_i(t) is the number of loading cycles of amplitude S_i counted by time t. Parameters A and b and the ultimate tensile stress are the fatigue strength, fatigue exponent, and ultimate tensile stress of the component material, respectively. The remaining variable is the mean stress of the loading signal.
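The damage bookkeeping of Miner's rule can be sketched as follows; the Basquin-type S-N relation, the Goodman-style mean-stress correction, and all numerical constants below are illustrative assumptions (Eq. (1) may use a different mean-stress model), and the helper names are hypothetical:

```python
import numpy as np

def cycles_to_failure(S_a, A, b, S_m=0.0, sigma_u=None):
    """Basquin-type S-N curve S = A * N**b, so N = (S / A)**(1 / b).
    If a mean stress S_m and ultimate strength sigma_u are given, a
    Goodman-type correction is applied first (an assumption on our part)."""
    S_eff = S_a if sigma_u is None else S_a / (1.0 - S_m / sigma_u)
    return (S_eff / A) ** (1.0 / b)

def miner_damage(amplitudes, counts, A, b, S_m=0.0, sigma_u=None):
    """Palmgren-Miner linear rule: D(t) = sum_i n_i(t) / N_i; failure at D = 1."""
    N = cycles_to_failure(np.asarray(amplitudes, float), A, b, S_m, sigma_u)
    return float(np.sum(np.asarray(counts, float) / N))

# toy example with hypothetical material constants; the amplitude/count
# pairs stand in for the output of a rainflow decomposition
A, b = 900.0, -0.1                      # MPa, dimensionless
D = miner_damage([200.0, 300.0], [1e4, 5e3], A, b, S_m=50.0, sigma_u=600.0)
```

Because the rule is linear in the cycle counts, doubling every n_i doubles the damage fraction, which is what makes real-time accumulation of D(t) straightforward.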
Fatigue life is defined as the number of stress cycles that a specimen sustains before failure. In the current work, we focus on TtF prediction rather than fatigue life, as required for SHM purposes and real-time decision making in engineering systems. One of the main challenges in the prediction of fatigue failures is the uncertainty in the material parameters at the microscopic level, which leads to inherent noise and uncertainty in the data and to inconsistencies in the resulting failure times (FTs). Hence, due to the uncertainty associated with the material and mechanical properties of the component, systematic experimental noise, and the statistical properties of the stochastic loading, the TtF should be treated as a probabilistic quantity, characterized by a nominal predicted value and a corresponding level of uncertainty. Uncertainty quantification (UQ) becomes critical for real-time decision-making in systems where sudden changes in the operating regime might lead to hazardous consequences. Consider, for example, a recommendation for an emergency landing of a fighter jet due to a predicted failure when no airport is nearby. In this case, the certainty level is crucial for choosing between continuation or cessation of the mission.
In the absence of closed-form expressions that link the system's mechanical parameters and signal properties with the TtF, and due to the abundance and stochastic nature of experimental and numerical data, data-driven methods are good candidates for fatigue prognosis. Machine learning (ML) methods are a set of numerical models that fit a function to a given dataset by tuning a large set of model parameters in a process called learning or training. During the last decade, learning algorithms have gained popularity due to their versatility, the substantial availability of rich datasets, and high-end computational resources. These models have outperformed conventional methods in various tasks, such as image classification and speech recognition, due to their well-known interpolation and generalization capabilities. ML algorithms have proven able to capture underlying patterns and correlations in measured data even when the underlying physical rules governing the system's behavior are obscure or unknown. Examples include quantitative analysis of ceramic fracture surfaces using convolutional neural networks tsopanidis2020toward and identification of the underlying equations of motion of dynamical systems from measured data schmidt2009distilling; brunton2016discovering. Therefore, applying machine learning approaches to fatigue prognosis, while utilizing the availability of high-end computational resources and rich datasets obtained from experiments tucker1975sae, numerical simulations pedersen2017multiaxial; pedersen2015fatlab, or analytical analysis, is a promising strategy for cumulative damage estimation. However, only a limited number of studies have focused on ML-based real-time fatigue prediction under stochastic loading vassilopoulos2007artificial; durodola2017pattern; ramachandra2019experimental; kim2016study; pujol2011neural; jimenez2019fatigue.
Most of the suggested methods utilize rather simple ML-based predictive models that are applicable only in an offline fashion, i.e. in a retrospective reading of the full loading signal, and thus are not suitable for real-time purposes. Some of them are applicable only to zero-mean stochastic loadings. Moreover, these models aim to predict the fatigue life of the component or the cumulative fatigue damage, and do not provide a quantitative measure of the level of uncertainty associated with the predicted value.
The main goal of the current work is to develop a fully data-driven method for real-time fatigue failure prediction and UQ under non-zero-mean stochastic loading of unknown underlying power spectral density (PSD). In particular, we introduce a hybrid learning model that combines a fully-connected artificial neural network (FC-ANN) and Gaussian process regression (GPR) rasmussen2003gaussian; chen2015uncertainty; perdikaris2015multi; bilionis2012multi; forrester2008engineering for enhanced FT prediction and built-in UQ capabilities. FC-ANNs have a well-known ability to identify latent relations between a set of inputs and a quantity of interest, in this case the system and forcing properties and the predicted TtF. The main advantage of employing GPR is the simultaneous estimation of the output value and the associated error/uncertainty. By using GPR, the formulated hybrid model can generate a probability density function (PDF) over the range of FTs that embodies both the predicted TtF and the corresponding uncertainty associated with various sources. We also propose a Bayesian inference-based scheme that utilizes real-time measured data for constantly updating the posterior knowledge of the predictive model. Based on the resulting posterior PDF, both the predicted FT and a confidence interval (CI) are obtained. The size of the CI is controlled by the desired confidence level (CL), which can be modified on user demand and in accordance with the sensitivity of the system. This novel approach is therefore suitable for real-time decision-making under uncertain conditions. Moreover, it is applicable to data obtained from various sources, such as experiments, numerical simulations, and approximate analytical methods. Finally, we evaluate the performance of the proposed method for real-time fatigue prognosis on both synthetic and experimental unseen measured stochastic signals.
This paper is structured as follows. In Section 2 we discuss the input features for the learning model. In Section 3 we introduce the hybrid probabilistic predictive model; in the same section, we give an overview of fully-connected artificial neural networks and Gaussian process regression and describe the contribution of each of these modules to the overall performance of the hybrid model. The proposed methodology is demonstrated in Section 4, where it is applied to unseen augmented stochastic data in real-time. In Section 5 we validate the hybrid model on experimental data obtained from mechanical systems that experienced fatigue failure under stochastic loading. Finally, Section 6 provides a summary and brief discussion of the results obtained, possible engineering applications, and future research directions.
2 Input features for the learning model
Before a learning model can be designed or applied, the most informative input variables have to be chosen. These variables, also called features, have to represent the relevant physical properties of the mechanical component and the statistical properties of the stochastic signal. The set of all features that correspond to a given mechanical component in the dataset, measured at a given time, is called the temporal features vector and serves as the input vector of the predictive model. The measured signal can be any physical measure from which the stress at the critical section of the forced component can be inferred, such as displacement, acceleration, or stress at the fatigue-prone location or in its vicinity. As mentioned in the previous section, data from various sources can be used to train the model and test its performance, such as data obtained from experimental, numerical, and/or approximate analytical models, as long as data of the same type is used both for training the model and for evaluating its results. Generally, if the component of interest is subjected to stochastic excitations of various characteristics during its lifetime, it is recommended to include signals generated from a set of relevant PSDs to enhance the generalization ability of the learning model. Otherwise, a single PSD can be considered, to avoid the need for a large dataset and extensive computation time. In the current work, without loss of generality, we use the stress signal at the critical section of the mechanical component, for reasons of convenience and explainability of the results. The features are generated from a uniform probability distribution over the parameter ranges shown in Table 1. The stochastic signals are then generated using Eq. (2)-(3), where the amplitudes of the Fourier series are generated randomly from a Gaussian-like PSD. The resulting stochastic signals are assumed to be ergodic and statistically stationary. Both the normalization term in the denominator and the scaling factor in Eq. (2) are used to prevent ultra low-cycle fatigue due to stresses that exceed the ultimate tensile stress of the material. The scaling parameter is generated from a uniform distribution over the range shown in Table 1.
Here the normalized stochastic signal, bounded between -1 and 1, is generated from an underlying PSD. The mean of the signal and a scaling factor shift and scale the normalized signal, while the constructing frequencies and phases define the stochastic signal itself. All variables in Eq. (2) are randomly generated from uniform distributions over the ranges shown in Table 1. Thus, each signal can be treated as a realization of a random variable.
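A minimal numerical sketch of the signal-generation procedure of Eq. (2)-(3), assuming a Gaussian-like PSD shape; the centre frequency, bandwidth, mean, and scaling values below are illustrative stand-ins rather than the actual Table 1 ranges:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psd(omega, omega_c=30.0, width=5.0):
    """Hypothetical Gaussian-like PSD shape centred at omega_c (rad/s)."""
    return np.exp(-0.5 * ((omega - omega_c) / width) ** 2)

def stochastic_signal(t, n_modes=200, mean=50.0, scale=100.0):
    """Sketch of Eq. (2)-(3): a Fourier series with random frequencies and
    phases drawn uniformly, amplitudes taken from the PSD, normalized to
    [-1, 1] by its peak value, then shifted and scaled."""
    omega = rng.uniform(10.0, 50.0, n_modes)       # constructing frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)   # random phases
    amps = np.sqrt(gaussian_psd(omega))
    x = np.sum(amps[:, None] * np.cos(omega[:, None] * t + phi[:, None]), axis=0)
    x /= np.max(np.abs(x))                         # normalization (denominator in Eq. (2))
    return mean + scale * x

t = np.linspace(0.0, 10.0, 5000)
s = stochastic_signal(t)
```

The normalization step guarantees that the peak stress equals the scaling factor exactly, which is how the scheme keeps the signal below the ultimate tensile stress.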
As one can see in Eq. (2), the PSD does not include a magnitude factor. This is because the intensity of the stochastic signal, which is usually dictated by the magnitude of the PSD, is now fully defined by the scaling factor. The range of the scaling factor was chosen to ensure that the generated signal will not exceed a prescribed fraction of the ultimate tensile stress of the material and lead to low-cycle fatigue (LCF) polak1991cyclic or ultra low-cycle fatigue (ULCF) pereira2014ultra. The features vector includes the material coefficients that characterize the material's resistance to fatigue, i.e. the fatigue strength A, fatigue exponent b, and ultimate tensile strength lee2005fatigue. Next, it is well known that the mean of the measured signal has a significant effect on the TtF. Since the mean of the signal is assumed to be unknown, we estimate it over time using Eq. (4):
Here E denotes expectation. The spectral information in the measured stochastic signal is embodied in its underlying power spectral density (PSD), Eq. (5), which describes the variation of the power content of a signal versus frequency.
Here the Fourier transform of the signal is taken for a given measurement time. Since we assume that the PSD of the signal is unknown a-priori, it is estimated in real-time using Welch's method, which computes periodograms for overlapping segments of the given signal and averages over the results, allowing more accurate PSD estimation with time for longer measurement durations welch1967use. The spectral information in the PSD can be reduced to a handful of representative quantities, called spectral moments, Eq. (6).
The four moments shown in Eq. (6) embody most of the spectral information in the PSD, especially the bandwidth of the stochastic loading, as one can learn from the expression for the irregularity factor of a PSD, Eq. (7). For broad-banded and narrow-banded loadings, it tends to zero and one, respectively. Higher odd or even moments do not add new information regarding the frequency content of the signal. Since the PSD is evaluated over time at a fixed rate, the spectral moments are time-dependent as well.
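The real-time spectral pipeline (Welch PSD estimate, spectral moments of Eq. (6), and irregularity factor of Eq. (7)) can be sketched as follows; the moment orders (0, 1, 2, 4) follow the text, while the test-signal parameters are arbitrary:

```python
import numpy as np
from scipy.signal import welch

def spectral_moments(x, fs, orders=(0, 1, 2, 4)):
    """Welch PSD estimate followed by the spectral moments
    m_k = integral of f**k * S(f) df, computed on the uniform Welch grid."""
    f, S = welch(x, fs=fs, nperseg=1024)
    df = f[1] - f[0]
    return {k: float(np.sum(f ** k * S) * df) for k in orders}

def irregularity_factor(m):
    """alpha = m2 / sqrt(m0 * m4): tends to 1 for narrow-band signals
    and to 0 for broad-band signals."""
    return m[2] / np.sqrt(m[0] * m[4])

# sanity check on a nearly pure tone (narrow-band): alpha should be near 1
fs = 1000.0
t = np.arange(0.0, 20.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t) \
    + 0.001 * np.random.default_rng(1).standard_normal(t.size)
m = spectral_moments(x, fs)
alpha = irregularity_factor(m)
```

Because Welch averaging sharpens with longer records, re-running this on a growing signal buffer yields the time-dependent moments described above.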
The bandwidth of a stochastic signal has a direct effect on the distribution of the signal's local extrema around its mean value. Since fatigue failure is dictated by the amplitudes of the loading cycles, this distribution has a significant effect on the cumulative damage and TtF. This distribution is given by the following expression:
Here p and P denote the PDF and probability, respectively. The random variable is drawn from the set of all local extrema of the signal. The PDF describes the probable amplitude values, and the probability refers to a local maximum being bounded within an infinitesimal interval around a given amplitude value. Assuming an ergodic and statistically stationary stochastic signal, for narrow-banded and broad-banded PSDs the PDF corresponds to the Rayleigh distribution and the Gaussian distribution, respectively. A comparison between theoretical and numerical results is shown in Fig. 1.
Here the Heaviside step function appears together with an operator that maps a continuous signal to the set of its local maxima. The remaining parameters are the mean and standard deviation of the distribution of the local maxima.
Here the cumulative distribution function of the standard normal distribution appears. The inverse relation of Eq. (10) describes the percentile amplitude, i.e. the amplitude that is larger than a given percentage of all cycle amplitudes/local maxima of the shifted signal. This relation can be obtained explicitly only for the narrow-banded case; for the broad-banded case, it is obtained numerically.
The value of the percentile is tuned by a preliminary analysis to optimize the performance of the predictive model on a subset of the dataset, as shown in Appendix A. For a given percentile value, the percentile amplitude can be estimated using Eq. (10) and Fig. 2, or estimated directly from the measured signal in real-time using Eq. (11), which is a discrete form of Eq. (8). The inverse relation is obtained numerically.
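A direct real-time estimate of the percentile amplitude in the spirit of Eq. (11) can be sketched as follows; the peak-picking rule and the test-signal values are illustrative assumptions:

```python
import numpy as np

def local_maxima(x):
    """Return the local maxima of a sampled signal (interior points only)."""
    x = np.asarray(x, float)
    idx = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
    return x[idx]

def percentile_amplitude(x, p, mean=None):
    """The amplitude exceeding p percent of all local maxima of the
    mean-shifted signal (a discrete analogue of Eq. (11))."""
    if mean is None:
        mean = np.mean(x)          # real-time mean estimate, as in Eq. (4)
    peaks = local_maxima(x) - mean
    return float(np.percentile(peaks, p))

# narrow-band sanity check: every peak of a sine equals its amplitude,
# so any percentile amplitude should recover that amplitude
t = np.linspace(0.0, 10.0, 10000)
x = 3.0 * np.sin(2.0 * np.pi * 5.0 * t) + 1.0   # amplitude 3, mean offset 1
A95 = percentile_amplitude(x, 95.0)
```

For broad-band signals the peaks spread out around the mean, and the same estimator returns the numerically obtained percentile described in the text.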
Finally, we obtain a vector that contains indicative information about the properties of the mechanical components and statistical properties of the stochastic signal:
The last six features are evaluated in real time by averaging, Eq. (4), (6), (11). While the four moments are very distinctive for signals from multiple underlying PSDs, they become redundant when the signals in the dataset belong to a single underlying PSD. In this case, the most indicative features are the mean amplitude and the amplitude distribution around the mean, i.e. the percentile amplitude. The former cannot be assessed from the PSD, and the latter is a nonlinear combination of the spectral moments of the underlying PSD. Therefore, in order to reduce the required size of the dataset and enhance computational efficiency and accuracy, we can take the following reduced input vector with five features instead of nine:
This case is realistic and typical for mechanical systems that experience a single excitation regime associated with fatigue failure, for example, components of a machine powered by a synchronous electric motor. For simplicity and explainability, in the current work we focus on the case of a single PSD and therefore use the reduced features vector. A preliminary investigation shows a strong correlation between the TtF and the characteristic amplitude that corresponds to the chosen percentile. We generate a dataset that consists of statistically stationary and ergodic stochastic signals. All signals were generated from a Gaussian-like PSD using Eq. (2)-(3), with random frequencies and phases generated from uniform distributions over the ranges shown in Table 1. Thus, the reduced features vector is considered. The three material/mechanical coefficients and the actual mean of the signal were randomly generated from uniform distributions over the ranges shown in Table 1. The percentile amplitude and mean amplitude are evaluated from the resulting signal using Eq. (4), (11). The FT is calculated using the rainflow cycle counting method and Miner's rule in Eq. (1). To simulate a systematic random error, zero-mean Gaussian noise is added to every feature in the input vector, with magnitude chosen as a fixed fraction of the feature range.
|Fatigue strength coefficient, A, MPa|
|Fatigue strength exponent, b|
|Ultimate tensile strength, MPa|
|True signal mean, MPa|
|Constructing frequencies, Hz|
3 Hybrid probabilistic predictive model
In this section, we introduce a probabilistic model for prediction and UQ of fatigue FTs. This hybrid model combines an FC-ANN and a GPR for both enhanced prediction accuracy and simultaneous UQ. The dataset is divided into three subsets: a training set, a cross-validation (CV) set, and a test set. The first is used for tuning the FC-ANN model parameters, the second for evaluating the FC-ANN prediction performance and for training the GPR, and the third for evaluating the real-time prediction and UQ performance of the hybrid predictive model.
3.1 Fully-connected artificial neural network
Various learning algorithms have demonstrated good regression and generalization capabilities on datasets of various types. These mathematical models differ from each other in their architecture, principles of operation, and the types of data to which they are applicable. One of the most widely-used and versatile learning models is the fully-connected artificial neural network. The FC-ANN consists of multiple interconnected variables called neurons, which are organized in layers, from which the network takes its name. By virtue of the universal approximation theorem, any continuous (and not necessarily smooth) function can be approximated by a sufficiently complex or deep FC-ANN, where complexity or depth is determined by the number of neurons and layers of the network. The input vector is fed into the first layer of the network, called the input layer. It then propagates forward through the network until the output is obtained in the last layer, called the output layer. The size, i.e. the number of neurons, of the input and output layers has to match the size of the input and output vectors, respectively. Each layer can consist of a different number of neurons, where each neuron is connected to all neurons in the adjacent layers. Each connection is characterized by a value called a weight. The value of a given neuron is a weighted sum of the values of the neurons in the previous layer multiplied by their corresponding weights, plus an added coefficient called a bias. The resulting value is fed into a nonlinear function, called an activation function, that gives the network its ability to approximate nonlinear latent functions. In regression problems, the main goal of a learning model is to approximate an unknown latent function given a training set:
Here the input vector is mapped to an approximation obtained by the learning model, and the vector of model parameters, i.e. the set of all weights and biases, is optimized during the learning/training process. During this process, the model parameters are tuned to minimize a loss function, defined a-priori as a distance metric between the ground-truth (GT) and predicted outputs of the model. Model parameters are iteratively updated using the backpropagation algorithm hecht1992theory. After the training process is complete, a prediction is obtained by applying the resulting regression function to a new input vector.
As mentioned in the previous section, in the current work we aim not only to find a predicted FT but also a CI corresponding to a desired CL. The limiting cases in which the CL equals zero and unity correspond to absolute uncertainty with an absent CI, and perfect certainty with an infinitely wide CI, respectively. We use the output of the trained FC-ANN obtained according to Eq. (15) as a reference for the second module of the suggested hybrid model, which is described in the next section. The reference prediction is obtained by the following expression:
Here, the vector of model parameters is the one obtained after the training process, and the training set consists of pairs of feature vectors and GT FTs. The vector of model parameters is tuned with respect to the following loss function:
Here the regularization coefficient weights the regularization term in the loss function, which is added to avoid over-fitting. The architecture of the FC-ANN is designed iteratively, with the complexity of the network gradually increased until sufficient accuracy is achieved. The main goal of this stage is to find the "simplest" network that yields good predictions, to avoid over-fitting and a time-consuming training process. Thus, we obtained a three-layer FC-ANN, with five neurons in the input layer (the size of the reduced input vector), twelve neurons in the hidden layer, and a single neuron in the output layer. Training is performed using backpropagation and mini-batch optimization with the Adam method kingma2014adam with an adaptive learning rate. The model weights are initialized using the Xavier method glorot2010understanding. To avoid over-fitting, the loss of the model is evaluated on both the training set and a distinct validation set. The training process is complete when the training error has converged or when the maximum number of epochs is reached. The prediction performance of the network is evaluated on the CV-set and compared to the GT results in Fig. 3.
Here, each prediction is described by a single point in the plane of predicted versus GT FTs. The overall performance of the network is described by the distribution of the resulting points around the line of perfect prediction. The prediction points are tightly distributed around this line, indicating that the network successfully captures the underlying relation between the FT and the features, yielding a good surrogate predictive model. However, as expected, the scatter around the line is not negligible, and it stems from multiple sources of stochasticity and uncertainty in the measurement methodology, the values of the system parameters, etc. Hence, a probabilistic module that can quantify the uncertainty associated with the FC-ANN FT predictions is essential for our model.
The variance of the results obtained for each GT value embodies the level of uncertainty associated with the ANN prediction. Moreover, the variance of the distribution of the results varies along the line. Hence, we treat the prediction points as representations of the features vector in a reduced-order embedding space. We aim to fit a multivariate PDF over the embedding space that maps each prediction to a PDF that simultaneously embodies the most likely FT value and the corresponding prediction uncertainty. The level of uncertainty is dictated by the amount of similar data in the training set, i.e. nearby points in the embedding space. For highly exploited regions of the embedding space the certainty level is high, and we expect to obtain lower variances, and vice versa. In the next section, we introduce a GPR-based approach for UQ of the predictive results obtained by the FC-ANN for a real-time measured features vector.
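The 5-12-1 training setup described above can be reproduced, for illustration, with an off-the-shelf network on a synthetic stand-in dataset; the latent function, noise level, and train/test split below are invented for demonstration and do not reproduce the paper's FT data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic stand-in dataset: 5 features -> a scalar target
# (a hypothetical latent function; the paper's targets come from Eq. (1))
X = rng.uniform(0.0, 1.0, size=(2000, 5))
y = np.exp(-2.0 * X[:, 0]) * (1.0 + X[:, 1]) + 0.05 * rng.standard_normal(2000)

# three-layer FC-ANN: 5 inputs, 12 hidden neurons, 1 output,
# trained with Adam and L2 regularization, as in the text
net = MLPRegressor(hidden_layer_sizes=(12,), activation="tanh",
                   solver="adam", alpha=1e-4, learning_rate_init=1e-3,
                   max_iter=3000, random_state=0)
net.fit(X[:1600], y[:1600])
r2 = net.score(X[1600:], y[1600:])   # coefficient of determination on held-out data
```

Evaluating on a held-out split mirrors the CV-set check of Fig. 3: a tight fit on unseen points indicates the network captured the latent relation rather than the noise.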
3.2 Gaussian process regression
Gaussian process regression (GPR) rasmussen2003gaussian is a non-parametric kernel-based probabilistic model that takes as input a training dataset of pairs of input variables and noisy scalar outputs, and constructs a surrogate model that generalizes well to unseen data using a multivariate PDF. The PDF is used for simultaneous prediction of the output value and the corresponding uncertainty. The noise in the output variable models uncertainty due to various factors, such as a systematic error in measuring the system properties and the stochastic nature of the measured signal. We define a latent function that maps ANN predictions to the corresponding GT FTs. Here, we assume that the latent function is inaccessible and can be evaluated only through noisy observations. Moreover, we assume that the noise is additive, zero-mean, and normally distributed.
Here, the variance of the noise appears. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution. The main idea behind GPR is to utilize a GP to represent the latent function. This is done in a Bayesian framework, in which the GP serves as a prior over function space, which in turn is conditioned on the measured data to yield a posterior distribution. The latter enables us to make inferences on the values of the latent function that correspond to unseen input variables, using a finite set of training data.
A GP is completely specified by its mean function and covariance function. Here E denotes expectation. The mean function of the prior GP embodies prior knowledge about the latent function; usually it is unknown and therefore set to zero. The properties of the GP are governed by the covariance function, or kernel, which is symmetric and positive semi-definite by definition, and which quantifies the covariance between any pair of points in the dataset. The most commonly used is the squared exponential kernel, also known as the Radial Basis Function (RBF) kernel, or Gaussian kernel.
Here the characteristic length scale represents the correlation between a data point and its neighbors, and thus dictates the smoothness of the candidate functions generated by the GP. For example, large values of the length scale enforce large off-diagonal values in the covariance matrix, which correspond to smooth candidate functions, and vice versa. In general, extrapolation using GPR is reliable only up to approximately one length scale away from the training data. The signal variance parameter determines the variance of the candidate functions away from the mean. The set of parameters, also called hyper-parameters, is determined to best fit the data by optimizing a log-likelihood loss function according to the maximum a-posteriori (MAP) principle:
Here, the symmetric covariance matrix has entries given by the covariances between the variables in the groups denoted by the first and second subscripts, calculated using the covariance function and the corresponding hyper-parameters, and the vector of training observations appears alongside it. The vector of optimal hyper-parameters is obtained by optimizing the log-likelihood loss function over the hyper-parameter space:
Differentiating Eq. (22) with respect to the hyper-parameters yields:
The computational complexity of this calculation is dictated by the matrix inversion, which is O(n³) in the number of training points. After the optimal hyper-parameters over the training data are obtained, the GPR is fully determined, and inference can easily be made by computing the posterior PDF obtained after conditioning on the training data.
Here, the estimated observation vector corresponds to the vector of test inputs for which we want to estimate the latent function. The mean and covariance functions of the posterior GP are obtained after conditioning the prior on the training data.
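A GPR fit of the kind used here, with an RBF kernel plus a noise term and hyper-parameters obtained by maximizing the log marginal likelihood, can be sketched on a one-dimensional toy problem standing in for the (ANN prediction → GT FT) regression; the latent sine function and noise level are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# noisy observations of a hypothetical latent function
X = rng.uniform(0.0, 10.0, size=(60, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(60)

# squared-exponential kernel with signal variance, length scale, and
# noise variance; all three hyper-parameters are fitted by maximizing
# the log marginal likelihood during .fit()
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)

X_star = np.array([[5.0]])
mu, sigma = gpr.predict(X_star, return_std=True)   # posterior mean and std
```

The returned standard deviation grows away from the training points, which is exactly the exploitation-dependent uncertainty the hybrid model relies on.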
One of the main advantages of GPs is that conditioning the resulting multivariate PDF on a given input variable yields a Gaussian posterior univariate PDF on the output variable. Hence, the posterior GP is now used to generate a Gaussian PDF over the space of FTs for a given input value that corresponds to an input features vector, as shown in Eq. (26).
Here, the mean and variance of the resulting Gaussian PDF appear, with the standard deviation defined as the square root of the variance. The parameters vector includes the parameters of both modules of the hybrid model. The PDF is calculated in real-time for each measured features vector.
The last piece of information to be fused into the PDF over FTs is the measurement instant itself: since failure has not occurred by the current measurement time, the probability of failure before this instant is eliminated, yielding a posterior PDF that vanishes for earlier times. This enhances the confidence in the relevant FTs and shrinks the CI associated with the failure prediction. Thus, we introduce the following Heaviside likelihood function and calculate the posterior PDF over the FTs using Bayes' rule:
Now, FT prediction and estimation of the corresponding CI are intuitive. According to the maximum a-posteriori probability (MAP) principle, the predicted FT is estimated as the mode of the resulting posterior distribution. The boundaries of the CI correspond to prescribed values of the cumulative distribution with respect to the posterior PDF, as shown in Eq. (28), and thereby enforce the desired CL.
The variance of the posterior PDF is a measure of the uncertainty associated with the prediction obtained. More specifically, since the posterior PDF is Gaussian or nearly-Gaussian, the boundaries of the CI can now be obtained analytically for a desired CL as follows:
As one can see in Eq. (29), the penetration of the measurement time into the CI of the prior PDF leads to asymmetry of the CI with respect to the mode of the posterior PDF. Moreover, as the measurement time advances and more knowledge is accumulated regarding the probable FT (more precisely, as long as fatigue failure has not yet occurred), the CI shrinks as the level of uncertainty decreases. This effect is demonstrated in Fig. 5 for a simplified case of a constant predicted FT.
As one can see in Fig. 5, due to the constant fusion between the prior PDF and the measured observations, the prediction certainty level increases with time as the size of the CI decreases. Generally, the predicted FT can oscillate erratically due to the stochastic nature of the signal and of its estimated statistical features in the features vector. In this case, the size of the CI oscillates as well and is dramatically influenced by the exploitation level of the region around the FC-ANN prediction, i.e. high standard deviations are obtained in unexploited regions, and vice versa. However, as the measurement time approaches the predicted FT, the Bayesian inference regarding the FT (Eq. (27)) ensures an extensive increase in the degree of certainty of the resulting prediction. In summary, the suggested hybrid model maps a real-time measured features vector to a PDF that is used for both FT prediction and UQ with respect to a given CL, at any requested measurement frequency. In the next section, the real-time performance of the hybrid model is demonstrated for an unseen stochastic signal.
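The Bayesian update of Eq. (27)-(29), a Gaussian prior over FTs truncated at the current measurement time, can be sketched as follows; the prior mean, standard deviation, and measurement times are illustrative:

```python
import numpy as np
from scipy import stats

def posterior_ci(mu, sigma, t_meas, cl):
    """Posterior over failure times after conditioning on survival up to
    t_meas (Heaviside likelihood): a Gaussian truncated below at t_meas.
    Returns the MAP estimate and the central CI for confidence level cl."""
    a = (t_meas - mu) / sigma            # lower truncation in standard units
    post = stats.truncnorm(a, np.inf, loc=mu, scale=sigma)
    t_map = max(mu, t_meas)              # mode of the truncated Gaussian
    lo, hi = post.ppf(0.5 * (1.0 - cl)), post.ppf(0.5 * (1.0 + cl))
    return t_map, (lo, hi)

# as the measurement time advances toward the predicted FT, the CI shrinks
t1 = posterior_ci(100.0, 10.0, 50.0, 0.95)   # early measurement
t2 = posterior_ci(100.0, 10.0, 95.0, 0.95)   # late measurement
```

Note the asymmetry of the late-time interval: its lower boundary can never fall below the measurement time, which is the effect discussed around Eq. (29).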
4 Real-time fatigue prediction and uncertainty quantification
After both modules were trained, on the train set (FC-ANN) and on the cross-validation set (GPR), the prediction and UQ performance of the hybrid predictive model can be demonstrated on an unseen stochastic signal taken from the test set. To simulate a real-time framework, the signal is sampled at a constant rate. At each sampling instant, the measured signal is used to update the temporal features vector according to Eqs. (4) and (11). We assume that the material fatigue properties are known with finite accuracy. The measured stochastic signals can be any physical observable that is strongly correlated with the stress at the critical section of the mechanical component, i.e. the section most prone to fatigue failure. For simplicity and explainability, in the current illustration all signals are considered as stress signals measured directly at the critical section. All signals in the dataset are generated from the same PSD; thus, the modified features vector is used. Otherwise, the moments of the PSD would have to be calculated using Welch's method and included in the extended features vector. Then, the pre-trained FC-ANN is used to generate a prior prediction of the FT according to Eq. (15), and the GPR is used to calculate the posterior PDF according to Eqs. (17)-(28). Finally, the predicted FT and CI boundaries are calculated from the posterior for a desired CL. The results are shown in Fig. 7 for four representative time instances. As one can see, after approximately half of the signal has been measured, the GT FT falls within the CI. As the measurement time approaches the GT FT, the CI becomes narrower, indicating the higher level of certainty achieved during the learning process due to the constant evolution of the posterior PDF.
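The real-time procedure of this section can be outlined as a sampling loop. The two model functions below are hypothetical stand-ins for the pre-trained FC-ANN (prior FT, Eq. (15)) and GPR (posterior PDF, Eqs. (17)-(28)), and the feature accumulation is reduced to appending raw samples for brevity; this is a sketch of the data flow, not the paper's implementation.

```python
# Hypothetical placeholders for the trained modules.
def fc_ann_prior(features):             # prior FT prediction (stand-in for Eq. (15))
    return 1000.0 - 0.1 * sum(features)

def gpr_posterior(features, prior_ft):  # posterior mean and std (stand-in for Eqs. (17)-(28))
    return prior_ft, 50.0 / (1.0 + 0.01 * len(features))

def realtime_prediction(signal_samples, dt, cl_z=1.96):
    """Yield, at each sampling instant, the predicted FT and its CI."""
    features, t = [], 0.0
    for x in signal_samples:
        t += dt
        features.append(x)                    # accumulate temporal features
        prior_ft = fc_ann_prior(features)     # prior prediction of the FT
        mu, sigma = gpr_posterior(features, prior_ft)
        lo = max(mu - cl_z * sigma, t)        # FT cannot precede the present
        hi = mu + cl_z * sigma
        yield t, mu, (lo, hi)
```

Each iteration fuses the newly measured sample into the features vector before requesting an updated prior and posterior, which is what allows the CI to evolve continuously in time.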
5 Experimental validation
The hybrid predictive model is validated using experimental data obtained by the Society of Automotive Engineers (SAE) tucker1975sae. These data were collected to serve as a benchmark for fatigue prediction methods for ground vehicles. The examples in the dataset include keyhole-shaped steel specimens made of US Steel's Man-Ten alloy and Bethlehem's RQC-100. The properties of both materials are shown in Table 2. The specimens are of identical geometry and are hot-rolled parallel to the direction of crack growth (perpendicular to the applied load).
Property | Man-Ten | RQC-100
Modulus of Elasticity, E, GPa | 203 | 203
Yield Strength, MPa | 327 | 586
Ultimate Tensile Strength, MPa | 565 | 820
Fatigue Strength Coefficient, A, MPa | 917 | 1160
Fatigue Strength Exponent, b | -0.095 | -0.075
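The fatigue strength coefficient A and exponent b in Table 2 parameterize a stress-life relation; combined with the Palmgren-Miner rule of Eq. (1), cycles to failure and cumulative damage can be sketched as follows. The sketch assumes the standard Basquin form (stress amplitude equals A times (2N) to the power b), and the example cycle counts and amplitudes are hypothetical, e.g. as produced by rainflow counting.

```python
def basquin_life(sigma_a, A, b):
    """Cycles to failure N from the Basquin stress-life relation,
    sigma_a = A * (2*N)**b, solved for N."""
    return 0.5 * (sigma_a / A) ** (1.0 / b)

def miner_damage(cycle_counts, amplitudes, A, b):
    """Palmgren-Miner linear cumulative damage (Eq. (1)):
    D = sum(n_i / N_i); failure is predicted when D reaches 1."""
    return sum(n / basquin_life(s, A, b)
               for n, s in zip(cycle_counts, amplitudes))

# Man-Ten parameters from Table 2; cycle counts and amplitudes (MPa) are hypothetical.
damage = miner_damage([1000, 500], [300.0, 400.0], A=917.0, b=-0.095)
```

As noted in the introduction, this linear damage hypothesis degrades for broad-banded excitations, which is precisely the regime the hybrid model is intended to handle.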
5.1 Finite element analysis for stress concentration factor estimation
Due to the existence of the notch, stress concentration effects become significant and lead to a substantial magnification of the stresses in the vicinity of the notch. This effect is accounted for by the stress concentration factor, defined as the ratio between the maximal actual stress measured near the notch and the nominal stress, as shown in Eq. (30):
There is a substantial variance among the estimates of the stress concentration factor in keyhole specimens reported in the literature neal1978three; pilkey2020peterson. Moreover, most of those studies use outdated methods for evaluating the factor. Hence, we conduct a dedicated 3D finite element analysis (FEA) to evaluate the stress concentration factor of the specimen. The FEA was performed using the commercial package SolidWorks Simulation 2020. As the keyhole specimen is symmetric in the longitudinal direction, the analysis was performed for half of the specimen only, with symmetry boundary conditions applied accordingly. Equal radial loads were applied on each of the top halves of the grip holes to emulate the load pattern of the clevis pins, and were distributed sinusoidally in the circumferential direction. Moreover, zero in-plane translation restraints were applied on the faces of the grip holes. The mesh of the finite element model was generated using parabolic tetrahedral elements, as shown in Fig. 9, with a refined mesh in the vicinity of the notch. The circumferential stress distribution in the notch vicinity is presented in Fig. 10.
Here, the quantities entering Eq. (31) are the external force and moment applied on the specimen, the area and second moment of area of the critical section, and the coordinate along the axis of symmetry. According to Eq. (31), a pure tension load yields the corresponding nominal stress. Employing the relations shown in Eqs. (31) and (30), the resulting stress concentration factor is obtained. This value will be used for the further calculation of the equivalent stresses according to Eq. (30).
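Equations (30)-(31) amount to two one-line computations, sketched below. The numerical values in the example are hypothetical placeholders, since the specimen's actual loads and section properties appear in the original tables and figures.

```python
def nominal_stress(F, M, A, I, y):
    """Nominal stress at the critical section (Eq. (31)):
    axial term F/A plus bending term M*y/I."""
    return F / A + M * y / I

def stress_concentration_factor(sigma_max, sigma_nom):
    """Kt: ratio of the maximal notch stress to the nominal stress (Eq. (30))."""
    return sigma_max / sigma_nom

# Hypothetical pure-tension example: M = 0, so the nominal stress reduces to F/A.
sigma_nom = nominal_stress(F=1000.0, M=0.0, A=1e-4, I=1e-8, y=0.005)
```

Once Kt is known from the FEA, the equivalent notch stress used throughout the fatigue analysis is simply the nominal stress multiplied by Kt.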
5.2 Validation of the hybrid predictive model using experimental data
For all notched-specimen tests, loads were applied to the specimen through a close-tolerance monoball fixture which allowed both tensile and compressive loading. The loading signals are given as series of peaks and valleys, disregarding the frequency content of the signals. For scaling purposes, all signals are normalized. The dataset includes three types of loading signals which differ in their statistical properties: a) a load history obtained from the bending moment on a vehicle suspension component driven over an accelerated durability course, b) a vibration history obtained on a mounting bracket excited by vehicle operation over a rough road, and c) a load history with a drastic change of the mean load, obtained from the transmission torque measured on a tractor engaged in front-end loader work. In each loading regime, the actual loading signals are obtained by a linear scaling of the normalized signals with respect to three load levels. The three variable-amplitude histories are linearly scaled to three levels, with three replications of each level and material, to produce a variety of FTs. For each experiment, three characteristic instances were recorded: crack initiation, crack propagation, and total life until failure, which is the sum of the first two. In the current analysis, we consider the time histories that correspond to the second loading type (Fig. 11).
To simulate real-time sensing, the sequence of peaks and valleys is interpolated using cubic spline interpolation at a fixed sampling frequency. As mentioned in previous sections, training the hybrid predictive model requires a large number of samples. Hence, additional stochastic loading signals are augmented, generated from the underlying PSD associated with the second loading type. The latter is obtained using Welch's method for PSD estimation from a time history, as explained in Section 2, and the signals are generated according to Eq. (2) over a finite frequency range. All signals are multiplied by the stress concentration factor to yield the equivalent stress in the vicinity of the notch of the keyhole specimen. The FTs of the augmented stochastic loading signals are estimated using rainflow counting and Miner's rule. Both the augmented and the authentic datasets are split into training, CV, and test sets. The first two sets are combined, shuffled, and used to train the FC-ANN and GPR modules. The third set is used to evaluate the performance of the trained hybrid model on the augmented and authentic data separately. To evaluate the model's real-time prediction performance, we introduce a success criterion that simultaneously describes prediction accuracy, earliness, and stability: a prediction is considered accurate if, from a given fraction of the time until failure onward, the GT FT falls within the CI of the posterior PDF for a given fraction of the time.
Here, the two user-defined parameters control the level of early prognosis and the prediction stability, respectively, and the CL is held fixed. The results are shown in Table 3. As one can see, the hybrid model successfully captured the FTs in a large portion of the cases. In the remaining cases, it tends to produce a conservative estimation rather than an underestimation. However, this is not a built-in feature of the suggested model, but a statistical property that derives from the PSD of the signal and from the occurrence frequency of extreme amplitudes that cause drastic changes in the measured statistical features of the signal. The results improve significantly when favoring accuracy over early prediction, i.e. as the early-prediction parameter approaches unity, allowing the features vector to "stabilize" on its steady-state values. The inaccurate predictions obtained on the authentic dataset cannot be considered representative due to the small number of samples in this dataset (six). Better performance can be obtained by optimizing the architecture of the FC-ANN modules and increasing the size of the dataset.
Conservative pred. | Non-conservative pred.
Total¹
¹ From both the authentic and augmented dataset test sets.
As one can see in Table 3, the FT predictions obtained by the hybrid model are promising and in good agreement with the experimental data for a variety of material parameters, loading intensities, and FTs. For the majority of examples in the total dataset, the FTs were captured within the CI well before the failure occurrence and with high prediction stability. Similar results were obtained for the authentic experimental dataset. When the model obtained inaccurate predictions, most of them were conservative, i.e. the GT FT was larger than the upper boundary of the CI. Compromising on less early predictions, i.e. larger values of the early-prediction parameter, allows the estimated statistical properties of the signal to reach steady state and bases the posterior PDF on richer data. Thus, the model predictions improve for larger values of this parameter.
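The success criterion of Section 5.2 can be sketched as a simple check over the recorded prediction history. The parameter names `alpha` (earliness) and `beta` (stability) below are placeholders for the paper's user-defined parameters, and the default values are illustrative only.

```python
def prediction_successful(times, ci_bounds, gt_ft, alpha=0.5, beta=0.9):
    """Success criterion sketch: from alpha * gt_ft onward, the GT FT must lie
    inside the CI for at least a fraction beta of the evaluated instants."""
    window = [lo <= gt_ft <= hi
              for t, (lo, hi) in zip(times, ci_bounds) if t >= alpha * gt_ft]
    return bool(window) and sum(window) / len(window) >= beta
```

Larger `alpha` evaluates the prediction only closer to failure (favoring accuracy over earliness), which is exactly the trade-off observed in Table 3.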
In contrast to most existing methods, this method allows real-time failure prediction thanks to a real-time feature-estimation methodology. The use of a confidence interval with an adaptive size for a given confidence level provides upper and lower boundaries for the predicted FT, which in turn play a major role in real-time decision making in terms of safe operation, maintenance, and structural health monitoring. Future research will address data-driven probabilistic fatigue prediction for non-stationary stochastic signals.
6 Concluding remarks
A hybrid data-driven method for real-time prediction and uncertainty quantification (UQ) of fatigue failure under stochastic loading was formulated. The developed approach combines two powerful learning models, fully-connected artificial neural networks (FC-ANNs) and Gaussian Process Regression (GPR), to answer an essential engineering need of real-time prediction of time intervals
that are prone to fatigue failures with a given certainty level (CL). The proven ability of the FC-ANN to represent high-dimensional data in a reduced-order embedding space is utilized by the GPR for enhanced prediction and simultaneous UQ. After the prior training of both predictive modules, predictions are obtained at high speed, making the methodology suitable for real-time purposes. Moreover, a full methodology was suggested for real-time evaluation of the input features calculated from the measured signal, both for the case in which the signals are generated from a single PSD and for multiple PSDs. The suggested method is applicable to stochastic signals of various physical quantities, such as stress and strain, that are correlated with the stress in the critical section of the studied mechanical component, and to measured data obtained from experiments, numerical analyses, time-domain methods, or frequency-domain methods. We have demonstrated the real-time performance of the model for an unseen signal, with the FT accurately predicted within a safe interval of the signal duration. The suggested hybrid method paves the way towards the development of predictive systems to be used for generating safety and operational instructions in real time for failure prevention in mechanical systems under stochastic loadings, for structural health monitoring (SHM) purposes, and for protection from irreversible damage to the environment and human lives.
The author is grateful to Dr. P. Kravets and Dr. S.H. Rudy for enjoyable discussions, and Prof. J.F. Durodola and Prof. M. Groper for sharing valuable experimental and numerical data.
The author has been supported by the Fulbright Program, the Israel Scholarship Education Foundation (ISEF), Jean De Gunzburg International Fellowship, the Israel Academy of Sciences and Humanities, the Yitzhak Shamir Postdoctoral Scholarship of the Israeli Ministry of Science and Technology, and the PMRI – Peter Munk Research Institute - Technion.