Sparse representation for damage identification of structural systems

06/06/2020 · by Zhao Chen, et al. · Northeastern University

Identifying damage of structural systems is typically characterized as an inverse problem which might be ill-conditioned due to aleatory and epistemic uncertainties induced by measurement noise and modeling error. Sparse representation can be used to perform inverse analysis for the case of sparse damage. In this paper, we propose a novel two-stage sensitivity analysis-based framework for both model updating and sparse damage identification. Specifically, an ℓ_2 Bayesian learning method is first developed for updating the intact model and quantifying uncertainty so as to establish a baseline for damage detection. A sparse representation pipeline built on a quasi-ℓ_0 method, e.g., Sequential Threshold Least Squares (STLS) regression, is then presented for damage localization and quantification. Additionally, Bayesian optimization together with cross validation is developed to heuristically learn hyperparameters from data, which saves the computational cost of hyperparameter tuning and produces more reliable identification results. The proposed framework is verified by three examples, including a 10-story shear-type building, a complex truss structure, and a shake table test of an eight-story steel frame. Results show that the proposed approach is capable of both localizing and quantifying structural damage with high accuracy.


1 Introduction

Model updating is an effective way to address the discrepancy between an ideal finite element model and the actual system based on sensing data. Such a discrepancy might be attributed to measurement noise and/or modeling error. Updated models can then be used to predict structural response, identify damage, and perform reliability analysis, among others. The core idea of model updating is to find the representative variation of structural properties that can account for the location and the extent of discrepancies Mottershead. Among many classical methods (e.g., least squares-based methods Xu, heuristic algorithms Sun2013, filtering techniques Chatzi, etc.), sensitivity analysis is one of the most mature methods for model updating Mottershead; Marwala. However, a distinctive challenge is that extracting structural parameters from measurement data, such as modal frequencies and shapes, is typically an ill-posed regression problem in the context of sensitivity analysis, due to (1) measurement incompleteness and noise, and (2) inevitable modeling error.

To tackle this issue, regularization has been applied, among which the Tikhonov regularization Tikhonov and the truncated singular value decomposition Hansen are very popular. Besides the deterministic methods, Bayesian learning has been adopted to quantify uncertainties associated with model updating Beck. Numerous theoretical and experimental contributions have advanced the methodology for model updating and damage identification. To name a few: a two-stage modal-based Bayesian model updating strategy with ambient measurements Zhang; a hierarchical Bayesian modeling approach that accounts for the time-variability of structural systems Behmanesh; Bayesian inference for simultaneous identification of structural parameters and loads Sun2015; damage detection of shear frames with scarce and noisy measurements Sohn; multiresolution Bayesian regression for model updating which can flexibly zoom into significant regions Yuen2018; recursive Bayesian updating that employs frequency response functions Mao; bolted-connection damage detection using incomplete and noisy modal data Yin2017; Bayesian damage prognosis for the remaining useful life of bearings Mao2014; identification of the Phase II ASCE-IASC benchmark frame Ching; and progressive damage identification of a 7-story building slice Simoen. Comprehensive literature reviews on this topic are well elaborated in Au; Yuen; Simoen2015; Huang2019.

While there have been significant developments for solving the inverse problem of model updating, most existing approaches are based on ℓ_2 regularization, which tends to “over-smooth” structural parameter variations. While these methods are useful for widely distributed damage (for example, widespread surface corrosion), they tend to produce biased damage identification in scenarios where the damage has sparsity characteristics (i.e., damage occurs only at a few distinct locations of the structure, a sparse feature from a mathematical point of view). Therefore, sparsity-promoting regularization is required in such a situation. Recently, Huang et al. Huang2015; Huang; Huang2017b developed a group of sparse damage identification methods based on Bayesian learning that impose spatially sparse constraints on ill-posed model updating problems with incomplete modal data. Nevertheless, the ℓ_1-oriented sparse Bayesian learning still suffers from false positives in identification, since the ℓ_1 norm is a convex relaxation of sparsity and only serves as an approximation of ℓ_0 regularization unless a certain strong condition is satisfied Candes. Despite its ideal characteristics for sparse representation, the non-differentiability of the ℓ_0 norm makes its optimization a non-deterministic polynomial-time (NP) hard problem with extreme computational complexity, preventing its wide application. Recently, the Sequential Threshold Least Squares (STLS) regression proposed by Rudy et al. Rudy has shown efficient and superior sparse representation in the context of quasi-ℓ_0 regularization based on selective hard thresholding. Though this approach shows good promise for solving sparse damage identification problems, a critical issue lies in how to choose the thresholding criterion. This criterion is very problem-dependent and an inappropriate selection will essentially lead to biased identification. We will address this fundamental issue by introducing an automatic thresholding mechanism.

In this paper, we propose a two-stage sensitivity analysis-based framework for model updating and sparse damage identification, cohesively combining ℓ_2 and quasi-ℓ_0 regularization. In the model updating stage, an ℓ_2 Bayesian learning method is first developed for model updating and quantifying the associated model parameter uncertainties. The updated model serves as a baseline for damage detection. In the sparse damage identification stage, a quasi-ℓ_0 method based on STLS regression is utilized along with cross validation and Bayesian hyperparameter optimization to enable a data-driven sparse representation of damage.

The remainder of this paper is organized as follows. Section 2 presents the methodology for model updating (ℓ_2 Bayesian learning) and sparse damage identification (STLS regression). Section 3 presents two numerical examples and one experimental example to verify the performance of the proposed approach. Section 4 summarizes the conclusions of this paper.

2 Methodology

This section presents the framework of sensitivity analysis-based model updating and sparse damage identification. The first subsection introduces the concept of sensitivity analysis, which formulates the problem of nonlinear model updating as a recursive process. The second subsection introduces ℓ_2 Bayesian learning for solving the resulting sensitivity equation. The final subsection elaborates the STLS regression for damage detection and its enhancement by cross validation and Bayesian hyperparameter estimation.

2.1 Sensitivity Analysis

Sensitivity analysis quantifies the influence of latent factors on the observable behavior of a system. To begin with, we parameterize the structural model with respect to the stiffness parameters at the local element level, namely,

$\mathbf{K} = \mathbf{K}_0 + \sum_{i=1}^{N_e} \theta_i \mathbf{K}_i$   (1)

where $\mathbf{K}_0 \in \mathbb{R}^{N_d \times N_d}$ is the stiffness matrix of the initial model, $\mathbf{K} \in \mathbb{R}^{N_d \times N_d}$ is the stiffness matrix of the updated model, $N_e$ is the number of structural elements, $N_d$ is the total number of degrees-of-freedom (DOFs), and $\mathbf{K}_i$ is the stiffness matrix of the $i$th substructure (element), assembled to the global DOFs of $\mathbf{K}_0$. Lastly, $\theta_i$ is the variation coefficient of $\mathbf{K}_i$. Combining all $\theta_i$, we have a variable vector for all structural elements, $\boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_{N_e}]^\mathrm{T}$.

The following sensitivity equation, which connects the modal residue (of frequencies and shapes) and the structural parameters, is derived as a basis for model updating:

$\Delta\mathbf{R}_k = \mathbf{S}_k \, \Delta\boldsymbol{\theta}_k$   (2)

Since this equation is actually a linear truncation of a Taylor series, we use iterative increments (indicated by $\Delta$ and the iteration index $k$) to compensate for the truncated nonlinearity. In this equation, $\Delta\mathbf{R}_k$ merges the normalized residue of the modal eigenvalues, e.g., $(\lambda_{\mathrm{Meas}} - \lambda_{\mathrm{FEM}})/\lambda_{\mathrm{FEM}}$, and the residue of the mass-normalized modal shapes, e.g., $\boldsymbol{\phi}_{\mathrm{Meas}} - \boldsymbol{\phi}_{\mathrm{FEM}}$, where $N_m$ is the number of modes, $N_s$ is the number of sensors, and $\eta_\lambda$ and $\eta_\phi$ are normalization coefficients for the two residue blocks. The subscript “Meas” stands for measurements and “FEM” denotes the finite element model. $\mathbf{S}_k$ is the Jacobian matrix of the undamped FEM eigenvalues/shapes with respect to the parameters $\boldsymbol{\theta}$ Fox. Note that the sensitivity analysis of damped modal quantities can add more value to practical applications; however, we aim to present the prototypical framework herein and leave that for future work. The last term $\Delta\boldsymbol{\theta}_k$ is the parameter increment. Hence, the identified stiffness parameters are the sum of the iterative increments, namely, $\boldsymbol{\theta} = \sum_k \Delta\boldsymbol{\theta}_k$.

2.2 Model Updating: ℓ_2 Bayesian Learning

To update the parameters, we formulate the ill-posed sensitivity equation (see Eq. (2)) in the context of hierarchical Bayesian inference. Essentially, solving the sensitivity equation is equivalent to finding the maximum a posteriori (MAP) estimate Sun2015b, where the posterior probability density function (PDF) of the unknown parameters, viz., $p(\Delta\boldsymbol{\theta}, \sigma^2, \rho^2 \mid \Delta\mathbf{R})$, can be described as Chen

$p(\Delta\boldsymbol{\theta}, \sigma^2, \rho^2 \mid \Delta\mathbf{R}) \propto p(\Delta\mathbf{R} \mid \Delta\boldsymbol{\theta}, \sigma^2)\, p(\Delta\boldsymbol{\theta} \mid \rho^2)\, p(\sigma^2)\, p(\rho^2)$   (3)

where $p(\Delta\mathbf{R} \mid \Delta\boldsymbol{\theta}, \sigma^2)$ is the likelihood function, which can be represented by the multivariate normal distribution $\mathcal{N}(\mathbf{S}\Delta\boldsymbol{\theta}, \sigma^2\mathbf{I})$; $\sigma^2$ is the variance of the modeling error; $p(\Delta\boldsymbol{\theta} \mid \rho^2)$ is the prior distribution of $\Delta\boldsymbol{\theta}$, which follows a multivariate normal distribution with variance $\rho^2$; and $p(\sigma^2)$ and $p(\rho^2)$ are the hyper-prior distributions of $\sigma^2$ and $\rho^2$, following inverse Gamma distributions, namely, $\mathrm{IG}(\alpha_1, \beta_1)$ and $\mathrm{IG}(\alpha_2, \beta_2)$, respectively. Here, $\{\alpha_1, \beta_1, \alpha_2, \beta_2\}$ are user-defined hyperparameters that can be simply determined by the magnitudes of other variables.

To maximize the joint posterior PDF shown in Eq. (3), we first derive its closed form. Then, we take partial derivatives of the joint posterior with respect to $\Delta\boldsymbol{\theta}$, $\sigma^2$ and $\rho^2$, respectively, and set these derivatives to zero, resulting in the following set of equations Sun2015b; Yan2017; Yan2019; Chen

(4a)
(4b)
(4c)

where $N_t$ is the number of independent data sets (observations). By sequentially updating the parameter increment and the variances, we can obtain their optima after a few iterations. This essentially forms an automatic Bayesian learning process Sun2015b; Sun2015.
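To make the learning loop concrete, the following is a minimal Python sketch of one possible realization of these iterative updates, assuming a single stacked data set, the Gaussian likelihood/prior and the inverse-Gamma hyper-priors IG(α_1, β_1) and IG(α_2, β_2) introduced for Eq. (3); the exact forms in Eq. (4) may differ in detail, so this should be read as an illustration rather than the paper's implementation (which is in MATLAB).

import numpy as np

def l2_bayesian_learning(S, dR, alpha1=1e-6, beta1=1e-6, alpha2=1e-6, beta2=1e-6,
                         n_iter=100, tol=1e-6):
    """Iterative MAP updates of d_theta, the noise variance sigma2 and the
    prior variance rho2 under a Gaussian likelihood/prior with IG hyper-priors."""
    n_r, n_e = S.shape
    sigma2, rho2 = 1.0, 1.0
    d_theta = np.zeros(n_e)
    for _ in range(n_iter):
        d_theta_old = d_theta.copy()
        # conditional MAP of d_theta (ridge-type solution)
        A = S.T @ S + (sigma2 / rho2) * np.eye(n_e)
        d_theta = np.linalg.solve(A, S.T @ dR)
        # conditional modes of the inverse-Gamma posteriors of the variances
        resid = dR - S @ d_theta
        sigma2 = (resid @ resid + 2 * beta1) / (n_r + 2 * (alpha1 + 1))
        rho2 = (d_theta @ d_theta + 2 * beta2) / (n_e + 2 * (alpha2 + 1))
        if np.linalg.norm(d_theta - d_theta_old) <= tol * (1 + np.linalg.norm(d_theta_old)):
            break
    # Laplace (Gaussian) approximation of the posterior covariance, cf. Eq. (5)
    cov = np.linalg.inv(S.T @ S / sigma2 + np.eye(n_e) / rho2)
    return d_theta, sigma2, rho2, cov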

Note that the MAP estimate only provides a deterministic prediction of $\Delta\boldsymbol{\theta}$ and the variances. A probabilistic estimation quantifying how reliable the MAP estimate is can be achieved by approximating the posterior $p(\Delta\boldsymbol{\theta} \mid \Delta\mathbf{R})$ with a multivariate Gaussian distribution.

We compute the Hessian matrix of the negative logarithmic form of the posterior PDF and derive the covariance matrix Yuen; Sun2015c:

$\boldsymbol{\Sigma} \approx \big[\mathbf{H}(\Delta\boldsymbol{\theta}_{\mathrm{MAP}})\big]^{-1}, \quad \mathbf{H} = \nabla\nabla J(\Delta\boldsymbol{\theta})$   (5)

where $\mathbf{H}$ is the Hessian matrix of $J$; $J$ is the loss function (the negative logarithmic posterior); and $\boldsymbol{\Sigma}$ denotes the covariance matrix of the posterior $p(\Delta\boldsymbol{\theta} \mid \Delta\mathbf{R})$. Strictly speaking, if we use a conjugate prior, the joint posterior will be a normal-inverse-Wishart distribution and its marginal posterior follows a Student's t distribution Bishop. Besides, as the Central Limit Theorem Rosenblatt states, in most cases the aggregated distribution approaches a Gaussian distribution as the number of independent observations grows. Therefore, modeling the posterior distribution by a Gaussian distribution can be justified.

Now that we have a joint posterior distribution of the stiffness parameters, we aim to further obtain the marginal posterior distribution of each parameter, e.g., $p(\theta_i \mid \Delta\mathbf{R})$. We design a Monte Carlo sampling strategy based on the following procedure: firstly, we accumulate all increments to obtain the MAP estimate $\boldsymbol{\theta}_{\mathrm{MAP}} = \sum_k \Delta\boldsymbol{\theta}_k$; secondly, we compute the aggregated covariance over the sensitivity iterations; then, we draw a vast number of samples from the joint normal distribution defined by $\boldsymbol{\theta}_{\mathrm{MAP}}$ and the aggregated covariance; lastly, we fit a Gaussian distribution to the samples of each parameter and obtain its posterior distribution, e.g., $p(\theta_i \mid \Delta\mathbf{R})$. Given the quantified mean and variance of the stiffness parameters, we can evaluate the uncertainty.
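The sampling-and-fitting procedure can be summarized in a few lines; this sketch assumes the per-iteration MAP increments and covariance matrices have been stored, and the variable names are illustrative.

import numpy as np

def marginal_posteriors(increments, covariances, n_samples=100_000, seed=0):
    """Monte Carlo estimate of each stiffness parameter's marginal posterior.

    increments  : list of MAP increments d_theta_k from each sensitivity iteration
    covariances : list of posterior covariance matrices from each iteration
    """
    theta_map = np.sum(increments, axis=0)        # accumulated MAP estimate
    cov_agg = np.sum(covariances, axis=0)         # aggregated covariance
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(theta_map, cov_agg, size=n_samples)
    # fit a Gaussian to the samples of each parameter (its marginal posterior)
    return samples.mean(axis=0), samples.std(axis=0)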

2.3 Sparse Damage Identification: Sequential Threshold Least Squares Regression

2.3.1 Sequential Threshold Least Squares Regression.

Structural damage often occurs locally at a few locations, posing a sparse distribution nature. In the context of stiffness parameter variation identification, sparsity is present in such cases (i.e., $\Delta\boldsymbol{\theta}$ is sparse) where sparse representation should be applied. Therefore, the sensitivity analysis shown in Eq. (2) can be cast as an ideal ℓ_0 optimization problem, expressed as

$\Delta\boldsymbol{\theta}_k = \arg\min_{\Delta\boldsymbol{\theta}} \left\{ \|\mathbf{S}_k \Delta\boldsymbol{\theta} - \Delta\mathbf{R}_k\|_2^2 + \lambda_0 \|\Delta\boldsymbol{\theta}\|_0 \right\}$   (6)

where the ℓ_0 norm $\|\Delta\boldsymbol{\theta}\|_0$ counts the non-zero values in $\Delta\boldsymbol{\theta}$. Despite its ideal characteristics for sparse regression, its non-differentiability makes the optimization an NP-hard problem with extreme computational complexity. The Sequential Threshold Least Squares (STLS) regression Zhang2019; Rudy provides an elegant alternative to sparsely solve the sensitivity equation in Eq. (6) in a quasi-ℓ_0 manner. The core concept of STLS is to turn the sparse representation into a series of least-squares regression processes with hard thresholds. The STLS regression approach has been proven more effective and more accurate than the classical ℓ_1 method in regard to promoting sparsity Zhang2019. Herein, we turn Eq. (6) into the STLS regression, written as:

$\Delta\boldsymbol{\theta}^{(j+1)} = \arg\min_{\mathrm{supp}(\Delta\boldsymbol{\theta}) \subseteq \mathcal{S}^{(j)}} \|\mathbf{S}_k \Delta\boldsymbol{\theta} - \Delta\mathbf{R}_k\|_2^2$   (7)

where

$\mathcal{S}^{(j)} = \{\, i : |\Delta\theta_i^{(j)}| \geq \gamma \,\}$   (8)

Note that the STLS algorithm should be run in each sensitivity iteration $k$. The general workflow of Eq. (7) is that, in the $j$th STLS iteration, $\Delta\boldsymbol{\theta}^{(j+1)}$ is obtained by the least squares solution of the sensitivity equation, while its support is restricted to the support set $\mathcal{S}^{(j)}$ from the last iteration. Note that $\mathcal{S}^{(j)}$ only includes those entries of $\Delta\boldsymbol{\theta}^{(j)}$ whose magnitudes are greater than or equal to an adaptive threshold $\gamma$; the rest of the entries are excluded and set to zero.


Algorithm 1 Sequential Threshold Least Squares (STLS) regression:
1: Input: Sensitivity residue $\Delta\mathbf{R}$, Jacobian matrix $\mathbf{S}$ and sparsity threshold $\gamma$.
2: Initialize $\Delta\boldsymbol{\theta}^{(0)}$: Estimate $\Delta\boldsymbol{\theta}^{(0)}$ from LASSO using cross validation, where the regularization coefficient $\lambda$ is optimally selected from a geometric series.
3: Initialize the loss: Compute the initial best loss $L_{\mathrm{best}} = \|\mathbf{S}\,\Delta\boldsymbol{\theta}^{(0)} - \Delta\mathbf{R}\|_2^2$.
4: while the maximum number of iterations is not reached do
5:    Threshold the entries of $\Delta\boldsymbol{\theta}^{(j)}$ by $\gamma$: split them into the small set $\{i: |\Delta\theta_i^{(j)}| < \gamma\}$ and the support set $\mathcal{S}^{(j)} = \{i: |\Delta\theta_i^{(j)}| \geq \gamma\}$.
6:    Enforce zeros on the smaller entries. Update the remaining non-zero entries (a.k.a. $\mathcal{S}^{(j)}$) by least squares regression.
7:    Combine the zeroed entries and the refitted entries to form $\Delta\boldsymbol{\theta}^{(j+1)}$.
8:    Compute the new loss $L^{(j+1)} = \|\mathbf{S}\,\Delta\boldsymbol{\theta}^{(j+1)} - \Delta\mathbf{R}\|_2^2$.
9:    if $L^{(j+1)} \leq L_{\mathrm{best}}$ and $\mathcal{S}^{(j)}$ is not empty then
10:       $\Delta\boldsymbol{\theta}_{\mathrm{best}} \leftarrow \Delta\boldsymbol{\theta}^{(j+1)}$
11:       $L_{\mathrm{best}} \leftarrow L^{(j+1)}$
12:    else
13:       Break the while loop.
14:    end if
15:    $j \leftarrow j + 1$.
16: end while
17: Output: $\Delta\boldsymbol{\theta}_{\mathrm{best}}$.
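For reference, below is a compact Python sketch of the STLS loop in Algorithm 1; theta0 is the LASSO initialization from Section 2.3.3 and gamma is the sparsity threshold selected by Algorithm 2, both assumed to be given (the paper's own implementation is in MATLAB).

import numpy as np

def stls(S, dR, theta0, gamma, max_iter=20):
    """Sequential Threshold Least Squares: alternate hard thresholding with
    least-squares refits on the surviving support (Algorithm 1)."""
    theta = theta0.copy()
    best_theta = theta.copy()
    best_loss = np.linalg.norm(S @ theta - dR) ** 2
    for _ in range(max_iter):
        support = np.abs(theta) >= gamma           # entries surviving the threshold
        if not support.any():
            break                                   # empty support: stop
        theta_new = np.zeros_like(theta)           # small entries forced to zero
        theta_new[support], *_ = np.linalg.lstsq(S[:, support], dR, rcond=None)
        loss = np.linalg.norm(S @ theta_new - dR) ** 2
        if loss <= best_loss:
            best_theta, best_loss = theta_new, loss
            theta = theta_new
        else:
            break                                   # stop once the loss deteriorates
    return best_theta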


Algorithm 2 Bayesian hyperparameter optimization:
1: Input: Sensitivity residue $\Delta\mathbf{R}$, Jacobian matrix $\mathbf{S}$, and the lower bound $\gamma_{\mathrm{low}}$ and upper bound $\gamma_{\mathrm{high}}$ of the sparsity threshold $\gamma$.
2: Initialize $\gamma^{+}$: Evaluate the loss $L(\gamma)$ for four randomly sampled points from $[\gamma_{\mathrm{low}}, \gamma_{\mathrm{high}}]$. Determine the sample $\gamma^{+}$ that has the smallest $L$.
3: Initialize the Gaussian Process (GP) model: Model the loss function $L(\gamma)$ as a GP with mean $m(\gamma)$, the ARD Matérn 5/2 covariance kernel Rasmussen and Gaussian noise with variance $\sigma_n^2$.
4: while the maximum number of iterations is not reached do
5:    Update the GP model by computing the posterior distribution of $L(\gamma)$ given all evaluated samples.
6:    Find the next sample $\gamma_{\mathrm{new}}$ by maximizing the acquisition function $\mathrm{EI}(\gamma)$.
7:    Evaluate $L(\gamma_{\mathrm{new}})$ for the new sampling point.
8:    if $L(\gamma_{\mathrm{new}}) < L(\gamma^{+})$ then
9:       $\gamma^{+} \leftarrow \gamma_{\mathrm{new}}$
10:    else
11:       Add $\gamma_{\mathrm{new}}$ into the set of evaluated samples
12:    end if
13:    Increment the iteration counter.
14: end while
15: Output: $\gamma^{+}$ and $L(\gamma^{+})$.

2.3.2 Bayesian Hyperparameter Optimization.

Obviously, the threshold $\gamma$ is critical for controlling the sparsity and attaining regression accuracy. The selection of $\gamma$ depends on the linear system in Eq. (6). However, $\mathbf{S}_k$ and $\Delta\mathbf{R}_k$ change in every sensitivity iteration $k$. Thus, manually setting a constant $\gamma$ may not be a wise choice. Therefore, we propose to leverage Bayesian hyperparameter optimization Snoek to heuristically determine $\gamma$ in each sensitivity iteration. First of all, we define a loss function for depicting the balancing role of $\gamma$:

(9)

where $\mathrm{cond}(\cdot)$ computes the condition number, while the coefficient $\varepsilon$ is to make the two summation terms of about the same magnitude. We can see that the loss $L$ and the threshold $\gamma$ have an implicit relationship. To proxy this relationship, we build a Gaussian Process (GP) model that is updated by past evaluations of the loss function in Eq. (9).

We denote the initial GP model of the loss $L(\gamma)$ by a prior with mean $m(\gamma)$, the ARD Matérn 5/2 covariance kernel Rasmussen and Gaussian noise with variance $\sigma_n^2$ (a hyperparameter of the kernel). Then, we randomly choose a set of initial values of $\gamma$ from the hyperparameter's bounds and update the GP model by calculating the posterior distribution of $L(\gamma)$ given the evaluated samples. By applying an acquisition function to the GP model, which in our case is the expected improvement (see Eq. (10)), we can locate a new value for $\gamma$. After sequentially updating the GP model, maximizing the acquisition function and evaluating the loss function with a new $\gamma$ for many iterations, we can approach the optimal value of $\gamma$. Note that the expected improvement is defined as

$\mathrm{EI}(\gamma) = \mathbb{E}\big[\max\big(0,\, L(\gamma^{+}) - L(\gamma)\big)\big]$   (10)

where $\gamma^{+}$ is the current best option. It has been shown that Bayesian optimization can lead to faster convergence than grid or random search Snoek, due to its ability to prioritize the search area.
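A minimal Python sketch of this GP-based search is given below, using scikit-learn's Gaussian process regressor with a Matérn 5/2 kernel plus a noise term and the expected-improvement rule of Eq. (10). The loss function loss_fn (Eq. (9)) and the threshold bounds are supplied by the caller, and the kernel settings are illustrative assumptions rather than the exact configuration used in the paper.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern, WhiteKernel

def bayes_opt_threshold(loss_fn, gamma_low, gamma_high, n_init=4, n_iter=20, seed=0):
    """Minimize loss_fn(gamma) over [gamma_low, gamma_high] with a GP surrogate."""
    rng = np.random.default_rng(seed)
    gammas = list(rng.uniform(gamma_low, gamma_high, n_init))
    losses = [loss_fn(g) for g in gammas]
    kernel = (ConstantKernel(1.0)
              * Matern(length_scale=0.1 * (gamma_high - gamma_low), nu=2.5)
              + WhiteKernel(noise_level=1e-6))
    candidates = np.linspace(gamma_low, gamma_high, 1000).reshape(-1, 1)
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(np.array(gammas).reshape(-1, 1), np.array(losses))
        mu, std = gp.predict(candidates, return_std=True)
        best = min(losses)
        std = np.maximum(std, 1e-12)
        z = (best - mu) / std
        ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)   # expected improvement
        g_next = float(candidates[np.argmax(ei), 0])         # next sample point
        gammas.append(g_next)
        losses.append(loss_fn(g_next))
    i_best = int(np.argmin(losses))
    return gammas[i_best], losses[i_best]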

2.3.3 Least Absolute Shrinkage and Selection Operator.

It is worth mentioning that a desirable performance of STLS requires a good initialization of $\Delta\boldsymbol{\theta}^{(0)}$, which can be estimated by the Least Absolute Shrinkage and Selection Operator (LASSO) Tibshirani as

$\Delta\boldsymbol{\theta}^{(0)} = \arg\min_{\Delta\boldsymbol{\theta}} \left\{ \|\mathbf{S}\Delta\boldsymbol{\theta} - \Delta\mathbf{R}\|_2^2 + \lambda \|\Delta\boldsymbol{\theta}\|_1 \right\}$   (11)

where $\lambda$ is a pivotal regularization coefficient controlling how many entries are close to zero. To choose a suitable $\lambda$, we generate a geometric series of 100 $\lambda$'s, with the last one (the largest one) as large as possible while refraining from shrinking the solution to all zeros, and the first one (the smallest one) having a fixed ratio with respect to the last one. Then, we select the $\lambda$ whose corresponding loss in Eq. (11) ranks the second smallest among all the 100 losses. The reason for not choosing the $\lambda$ with the smallest loss is to alleviate overfitting to noise. Furthermore, we cross-validate the LASSO model by randomly splitting $\Delta\mathbf{R}$ and $\mathbf{S}$ into $K$ portions, using $K-1$ portions as training data while the regression loss is computed on the remaining portion, repeating this process $K$ times, and averaging all the models to get an average estimate of $\Delta\boldsymbol{\theta}^{(0)}$.
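This initialization can be sketched with scikit-learn's LassoCV: a geometric series of 100 regularization coefficients is cross-validated with K folds and the coefficient whose average validation error ranks second smallest is retained, as described above. The series bounds below are illustrative assumptions the user should adapt to the scale of the problem.

import numpy as np
from sklearn.linear_model import Lasso, LassoCV

def lasso_init(S, dR, n_lambdas=100, ratio=1e-4, k_folds=5):
    """LASSO initialization of the STLS solution with K-fold cross validation."""
    # lam_max is the smallest coefficient for which the LASSO solution is all zeros;
    # the smallest candidate is a fixed ratio of the largest one.
    lam_max = np.max(np.abs(S.T @ dR)) / len(dR)
    lambdas = np.geomspace(ratio * lam_max, lam_max, n_lambdas)
    cv = LassoCV(alphas=lambdas, cv=k_folds, fit_intercept=False).fit(S, dR)
    mse = cv.mse_path_.mean(axis=1)                # average CV loss per alpha in cv.alphas_
    lam_2nd = cv.alphas_[np.argsort(mse)[1]]       # second-smallest loss, to limit overfitting
    theta0 = Lasso(alpha=lam_2nd, fit_intercept=False).fit(S, dR).coef_
    return theta0, lam_2nd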

More details of how to implement the proposed STLS algorithm are presented in the pseudo-code (Algorithm 1 and Algorithm 2). An important note is that the entire algorithm is designed to run in each sensitivity iteration. Even though STLS is a point-estimate method by definition, we can still build confidence intervals by performing more tests and computing the statistical variance.

3 Numerical and Experimental Examples

In this section, we present three case studies to validate our proposed framework. The first two are numerical simulations of a 10-story shear-type model and a 31-member truss structure. The third one is a shake-table test of an 8-DOF steel frame. The proposed computational framework was coded in MATLAB Moore on a standard workstation with 10 Intel i9 CPU cores and 64 GB memory.

3.1 Numerical Example: A 10-Story Shear-Type Model

This proof-of-concept example is a 10-story shear-type model (see Figure 1), whose nominal inter-story stiffness and mass are 176.729 MN/m and 100 ton Yuen2006; Chen. The first two damping ratios are 2%. The element stiffness parameters in the actual system are assumed to fluctuate between -20% and 20%. For the purpose of damage identification, we assume that 28% and 33% stiffness reductions are present in the first and the third stories, respectively (counting from bottom to top).

Figure 1: A 10-story shear-type model. The element stiffness is 176.729 MN/m and the node mass is 100 ton. Accelerations from the odd nodes are measured. Five different white-noise excitations are used to excite the structural vibration.
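For completeness, a short sketch of the nominal model is given below, assuming the standard tridiagonal shear-building stiffness assembly; the damage scenario (28% and 33% reductions in stories 1 and 3) is applied by simply scaling the corresponding story stiffnesses. The variable names and the use of Python are illustrative (the study itself uses MATLAB).

import numpy as np
from scipy.linalg import eigh

def shear_building(k_story, m_story):
    """Assemble the stiffness and mass matrices of a shear-type building."""
    n = len(k_story)
    K = np.zeros((n, n))
    for i, k in enumerate(k_story):      # story i connects floor DOF i to the one below
        K[i, i] += k
        if i > 0:
            K[i - 1, i - 1] += k
            K[i - 1, i] -= k
            K[i, i - 1] -= k
    return K, np.diag(m_story)

k = np.full(10, 176.729e6)               # nominal story stiffness [N/m]
m = np.full(10, 100e3)                   # story mass [kg]
k_dam = k.copy()
k_dam[[0, 2]] *= [1 - 0.28, 1 - 0.33]    # 28% / 33% loss in stories 1 and 3

lam_intact, _ = eigh(*shear_building(k, m))
lam_damaged, _ = eigh(*shear_building(k_dam, m))
f_intact = np.sqrt(lam_intact[:3]) / (2 * np.pi)    # first three frequencies [Hz]
f_damaged = np.sqrt(lam_damaged[:3]) / (2 * np.pi)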

Five accelerometers are installed on the odd floors, and 10 min of response under white-noise ground motion excitation is recorded for both the undamaged and damaged cases. Five monitoring tests are conducted and all measurements are polluted by noise whose Root Mean Square (RMS) is 10% of that of the clean signal.

The first three modal frequencies and shapes are extracted by the Observer/Kalman filter IDentification (OKID) Juang followed by the Eigensystem Realization Algorithm (ERA) Juang1985, namely, OKID/ERA, which computes the Markov parameters of an observer (e.g., the Kalman filter) and further identifies the state-space model and the modal parameters used for model updating and damage identification Lus.

Figure 2: Stiffness variation coefficients in model updating and sparse damage identification under 10% noise for the 10-DOF shear-type structure: (a) the prediction is almost in accordance with the ground truth, with error bars showing the 95% confidence interval; (b) even though there are very minor false positives (the highest is less than 10%), the two most important components are very clear.
Figure 3: Comparison of the first three modes after model updating/damage identification for the 10-DOF shear-type structure: (a-c) the upper panel displays the predicted and true mode frequencies and shapes of the intact model, where the frequency has an impressive average error of 0.08%; (d-e) the lower panel illustrates the mode frequencies and shapes of the damaged model, where the frequency's average divergence is still satisfactorily small at 0.43%.
Figure 4: Effects of Bayesian optimization in the 10-DOF shear-type example: (a) damage identification results when using Bayesian optimization to determine γ, or when manually setting γ to 0.1 or 0.2 throughout the sensitivity analysis; (b) γ values determined by Bayesian optimization in each sensitivity iteration.

The updated stiffness variation parameters of the intact model are shown in Figure 2(a). It can be seen that the Bayesian learning approach produces accurate identification of the stiffness variation parameters, with a high correlation with the ground truth. The 95% confidence intervals (computed from the variance of the marginal posterior distributions) can well cover the ground truth. As for damage identification (see Figure 2(b)), the STLS method successfully identifies the stiffness reductions in the first and the third stories. Minor false positives are also observed, which might be due to inaccuracy of the updated intact model and/or noise pollution. Nevertheless, the overall performance of the proposed two-stage model updating and damage identification framework is satisfactory. In addition, Figure 3 depicts the updated modal quantities in comparison with the ground truth, which shows accurate prediction. We further analyze the effectiveness of Bayesian hyperparameter optimization. A comparison between Bayesian optimization and the case of a manually selected constant γ is illustrated in Figure 4. It can be seen that two tentative trials with γ set to 0.1 and 0.2 provide less satisfactory identification with multiple false positives and large uncertainties, especially for Elements 8, 9 and 10. In contrast, Bayesian hyperparameter optimization heuristically determines γ in each iteration of the sensitivity analysis (see Figure 4(b)), leading to more accurate damage identification.

Figure 5: Illustration of the 31-bar plane truss with sensor locations. White-noise excitations are imposed in the vertical direction of Node 5 and the horizontal direction of Node 7, respectively.
Figure 6: Stiffness variation coefficients after model updating/sparse damage identification using one measurement with 10% RMS noise for the truss structure: (a) the Bayesian posterior mean reliably estimates the majority of the stiffness variation in the intact model, with a correlation coefficient of 0.93; (b) the STLS very accurately captures the sparsity pattern in the damaged stiffness, with a correlation coefficient close to 1.
Figure 7: Convergence of the estimated stiffness parameters in model updating and sparse damage identification for the truss structure. In this case, both estimates stabilize within very few iterations.

3.2 Numerical Example: A 31-Member Truss Structure

Here we consider a large advertising stand, modeled as a 31-bar simply-supported truss structure Li; Chen as shown in Figure 5, with its material/geometry properties given as follows: elastic modulus 70 GPa, cross-sectional area 25 cm² and material density 2770 kg/m³. The damping ratios for the first two dominant modes are 1% and 2%. To implement model updating, we assume there is a random variation in the stiffness of each element. To showcase damage detection, the stiffness of Bar 1 is reduced by 20% while the stiffness reductions of Bars 15 and 27 are 15%. The structure is excited by white-noise forces in the vertical direction of Node 5 and the horizontal direction of Node 7 simultaneously, as shown in Figure 5. In this example, we intend to explore our framework's performance using scarce and noisy data. In particular, biaxial accelerometers are deployed at Nodes 2, 3, 5, 8, 9, 12 and 13, which record the structural response for 60 s at 1400 Hz. Only one set of measurements under 10% RMS noise is recorded (for intact model updating and damage identification, respectively) and processed by OKID/ERA Juang; Juang1985 for modal extraction.

Figure 6 shows the results for both intact model updating and damage identification. It can be seen from Figure 6(a) that the updated stiffness variation parameters in general match the ground truth well, with acceptable discrepancies (e.g., the correlation coefficient is 93%). The high level of noise and the limited number of sensors cause less satisfactory identification of the parameters with small values. Nevertheless, most of the parameters have small variance, showing reliable robustness. For those with large uncertainties, more tests and sensors may help reduce the bias. In the damage identification stage, it is encouraging that the STLS successfully localizes the three damaged elements while accurately quantifying the damage extents. Since we only utilize one set of measurements, we cannot provide statistical mean and variance for the STLS result. Figure 7 shows the convergence histories of (a) the proposed Bayesian learning for intact model updating and (b) the STLS regression for damage identification. It appears that both methods converge very quickly within only a few iterations, where the convergence criterion is set as a relative error tolerance.

Figure 8: Comparison among STLS, LASSO and ridge regularizations for sparse damage identification of the truss structure. The STLS result is in close proximity to the true sparse pattern. LASSO manifests similar competence in spite of a handful of false positives. Ridge regression appears to have the smoothest result.

Figure 9: The 8-story steel frame: (a) the experimental structure; (b) lateral views of the structure; (c) the condensed shear-type model. Seismic motions are applied along the weak direction. The triangles indicate the accelerometers.

The performance of different regularization techniques is also compared, with the damage identification results shown in Figure 8. Assuming an exact intact model, we apply STLS, LASSO and ridge regression, which represent quasi-ℓ_0, ℓ_1 and ℓ_2 regularization respectively, to find the best approximation of the sparse damaged model from the 10% RMS noise measurement. Ridge regression Marquardt, also known as Tikhonov regularization Tikhonov, penalizes multicollinearity in data by adopting an ℓ_2 penalty. It turns out that STLS correctly identifies both the damage locations and extents. LASSO comes a very close second, given that there are several false positives, especially for Element 8, which could be distracting. Ridge regression provides an over-smoothed result. It is quite clear that, despite the two principal damages in Elements 1 and 15, we are very likely to conclude from the ridge regression result that Elements 8, 22, 27, 29 and 30 all have notable damage. More importantly, the stiffness variation in Element 27 is incorrectly identified. These observations indicate that ℓ_2 regularization tends to regularize every element and balances weights across all elements. Consequently, it is not as competitive as STLS or LASSO for sparse damage identification. Overall, the relative ℓ_2-norm identification errors for STLS, LASSO and ridge are 3%, 26% and 41%, respectively. This comparison demonstrates the numerical advantage of quasi-ℓ_0 regularization for solving sparse regression problems such as damage detection.

3.3 Experimental Example: An 8-DOF Steel Frame

A shake table test of an 8-story steel frame model (see Figure 9), performed at the National Center for Research on Earthquake Engineering in Taiwan Yu; SunMSSP2016; Chen, is adopted to further verify the proposed model updating and damage identification approach. The total height of the structure is 8,330 mm and the size of the diaphragms is 430 mm by 450 mm. The story mass is about 75 kg after accounting for a 50 kg steel block placed on each floor for stabilization. The story lateral stiffness is estimated to be 180 kN/m. Due to its dominant shear behavior, we approximately condense an ETABS nominal model into a shear-type structure. The condensed stiffness matrix Sun2017; SunSCHM2018 is obtained by applying a unit lateral load at each floor and then inverting the resulting flexibility matrix. Note that the condensed model inevitably has some discrepancy compared with the actual system. Moreover, structural damage was intentionally created by loosening bolts connecting adjacent columns. We consider two damage cases: in Case 1, the connection bolts on the first floor are loosened; in Case 2, the bolts on the first and the second floors are loosened.
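The condensation step can be sketched as follows, assuming a full-order stiffness matrix K_full (e.g., exported from the ETABS nominal model) and the indices of the lateral floor DOFs; both names are illustrative. Unit lateral loads are applied at each floor, the resulting floor displacements form the flexibility matrix, and its inverse gives the condensed shear-type stiffness.

import numpy as np

def condense_stiffness(K_full, floor_dofs):
    """Condense a full-order stiffness matrix to the lateral floor DOFs by
    applying unit lateral loads and inverting the resulting flexibility matrix."""
    n_floor = len(floor_dofs)
    F = np.zeros((n_floor, n_floor))           # flexibility at the floor DOFs
    for j, dof in enumerate(floor_dofs):
        load = np.zeros(K_full.shape[0])
        load[dof] = 1.0                        # unit lateral load at floor j
        disp = np.linalg.solve(K_full, load)   # static displacement field
        F[:, j] = disp[floor_dofs]
    return np.linalg.inv(F)                    # condensed stiffness matrix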

Fixed on a hydraulic uniaxial shake table, the structure was monitored under 9 earthquake records applied in the weak direction (El Centro, Chi-Chi and Kobe earthquake excitations at different scales). Acceleration time histories were recorded on each floor. Although complete data are available, only the accelerations at the first, third, fifth and eighth floors along with the ground motion are used herein. Once again, we use OKID/ERA Juang; Juang1985 to identify the modal frequencies and shapes of the first three modes.

We first update the stiffness variation parameters of the intact model using Bayesian learning. Figure 10(a) shows the distribution of the updated parameters with 95% confidence intervals. It can be observed that although Elements 5 and 7 have relatively larger variance, the remaining estimates are more reliable. Based on the updated mean values of the stiffness parameters, we perform damage identification using the proposed STLS regression approach. Figure 10(b)-(d) shows the identified stiffness reductions of 31.54% and 43.94% in Elements 1 and 2 for Case 1. This aligns with our expectation that only the stiffness of the first and the second columns should have a major reduction due to the bolt loosening. Even though Elements 5 and 7 have large variance in model updating, we still obtain a satisfactory sparse damage identification result. On one hand, this illustrates that our proposed framework gives an agreeable prediction of the mean values in this case; on the other hand, it turns out that our STLS method can identify the essential pattern from the measurements and leave out minor redundancy due to noise and modeling errors. Figure 10(c) and (d) show the quantified uncertainties for the stiffness parameters with reductions.

Figure 10: Model updating and sparse damage identification for Case 1 of the steel frame: (a) stiffness variation from the initial model to the intact model; (b) stiffness variation from the intact model to the damaged model in Case 1; (c) stiffness distribution shift by 31.54% in Element 1; (d) stiffness distribution shift by 43.94% in Element 2.
Figure 11: Sparse damage identification for Case 2 of the steel frame: (a) stiffness variation from the intact model to the damaged model in Case 2; (b) stiffness distribution shift by 27.1% in Element 1; (c) stiffness distribution shift by 64.44% in Element 2; (d) stiffness distribution shift by 19.2% in Element 3.

The damage identification results for Case 2 are summarized in Figure 11, where again the mean values meet our expectation (the loosened bolts would cause stiffness reduction in the first three stories) and the 95% confidence intervals of the first and third elements are relatively small, showing strong identification confidence. It can be seen from Figure 11 that the stiffness reduction rates for the first three elements are 27.1%, 64.44% and 19.2%. The second element suffers the most because both of its ends were loosened. The probabilistic distributions of the identified stiffness parameters are given in Figure 11(b)-(d).

4 Conclusion

This paper develops a novel two-stage sensitivity analysis-based computational framework for both model updating and sparse damage identification. In particular, an ℓ_2 Bayesian learning method is developed for intact model updating and uncertainty quantification. The updated model then serves as a baseline for damage identification. A sparse representation pipeline built on a quasi-ℓ_0 method (STLS regression) is presented for sparse damage localization and quantification. While the smooth nature of ℓ_2 Bayesian learning makes it preferable for widely distributed damage, there are many cases (e.g., deterioration of connection rigidity) where damage occurs only at a few distinct locations, justifying the necessity of sparse identification. Nevertheless, a critical issue of STLS lies in how to choose the thresholding parameter, which is very problem-dependent; an inappropriate selection of this parameter likely leads to biased identification. To address this fundamental issue, Bayesian optimization together with cross validation is developed to intelligently fit STLS to data, which saves the computational cost of hyperparameter tuning and produces more reliable identification results. The proposed framework is verified by three examples (both numerical and experimental), including a 10-story shear-type building, a complex truss structure, and a shake table test of an eight-story steel frame. In all cases, the Bayesian learning method can reliably estimate the probable stiffness variation, while the STLS regression can localize the sparsely distributed damaged members and quantify the damage extents with high accuracy. These encouraging results set forward our future work, which will focus on real-world applications of the proposed methodology.

5 Acknowledgment

We would like to thank the National Center for Research on Earthquake Engineering in Taiwan for sharing the shake-table test data.

6 Declaration of Conflicting Interests

The Authors declare that there is no conflict of interest.

References

  • (1) Mottershead JE, Link M and Friswell MI. The Sensitivity Method in Finite Element Model Updating: A Tutorial. Mech Syst Signal Process 2011; 25: 2275-2296.
  • (2) Xu B, He J, Rovekamp R and Dyke SJ. Structural parameters and dynamic loading identification from incomplete measurements: Approach and validation. Mech Syst Signal Process 2012; 28: 244-257.
  • (3) Sun H, Lus H and Betti R. Identification of structural models using a modified Artificial Bee Colony algorithm. Comput Struct 2013; 116: 59-74.
  • (4) Chatzi EN and Smyth AW. The unscented Kalman filter and particle filter methods for nonlinear structural system identification with non‐collocated heterogeneous sensing. Struct Control Health Monit 2009; 16: 99-123.
  • (5) Marwala T. Finite element model updating using computational intelligence techniques: applications to structural dynamics. Springer Science & Business Media, 2010.
  • (6) Tikhonov AN, Leonov AS and Yagola AG. Nonlinear ill-posed problems. London: Chapman & Hall, 1998.
  • (7) Hansen PC. Rank-deficient and Discrete Ill-posed Problems: Numerical Aspects of Linear Inversion. Siam, 2005.
  • (8) Beck JL and Katafygiotis LS. Updating models and their uncertainties. I: Bayesian statistical framework. J Eng Mech 1998; 124: 455-461.
  • (9) Zhang FL and Au SK. Fundamental two-stage formulation for Bayesian system identification, Part II: Application to ambient vibration data. Mech Syst Signal Process 2016; 66: 43-61.
  • (10) Behmanesh I, Moaveni B, Lombaert G and Papadimitriou C. Hierarchical Bayesian model updating for structural identification. Mech Syst Signal Pr 2015; 64: 360-376.
  • (11) Sun H, Feng D, Liu Y and Feng MQ. Statistical regularization for identification of structural parameters and external loadings using state space models. Comput Aided Civ Inf 2015; 30: 843-858.
  • (12) Sohn H and Law KH. A Bayesian probabilistic approach for structure damage detection. Earthquake Engng Struct Dyn 1997; 26: 1259-1281.
  • (13) Yuen KV and Ortiz GA. Multiresolution Bayesian nonparametric general regression for structural model updating. Struct Control Health Monit 2018; 25: e2077.
  • (14) Mao Z and Todd MD. A Bayesian recursive framework for ball-bearing damage classification in rotating machinery. Struct Health Monit 2016; 15: 668-684.
  • (15) Yin T, Jiang QH and Yuen KV. Vibration-based damage detection for structural connections using incomplete modal data by Bayesian approach and model reduction technique. Eng Struct 2017; 132: 260-277.
  • (16) Mao Z and Todd MD. A Bayesian Damage Prognosis Approach Applied to Bearing Failure. In: the 32nd IMAC, A Conference and Exposition on Structural Dynamics (ed Kerschen G), Brescia, Italy, July 1-5, 2012, Model Validation and Uncertainty Quantification, Volume 3, pp. 237-242. Springer, Cham.
  • (17) Ching JY and Beck JL. Bayesian analysis of the phase II IASC–ASCE structural health monitoring experimental benchmark data. J Eng Mech 2004; 130: 1233-1244.
  • (18) Simoen E, Moaveni B, Conte JP and Lombaert G. Uncertainty quantification in the assessment of progressive damage in a 7-story full-scale building slice. J Eng Mech 2013; 139: 1818-1830.
  • (19) Au SK and Zhang FL. Fundamental two-stage formulation for Bayesian system identification, Part I: General theory. Mech Syst Signal Process 2016; 66: 31-42.
  • (20) Yuen KV. Bayesian Methods for Structural Dynamics and Civil Engineering. John Wiley & Sons, 2010.
  • (21) Simoen E, De Roeck G and Lombaert G. Dealing with uncertainty in model updating for damage assessment: A review. Mech Syst Signal Process 2015; 56: 123-149.
  • (22) Huang Y, Shao C, Wu B, Beck JL and Li H. State-of-the-art review on Bayesian inference in structural system identification and damage assessment. Adv Struct Eng 2019; 22: 1329-1351.
  • (23) Huang Y and Beck JL. Hierarchical sparse Bayesian learning for structural health monitoring with incomplete modal data. Int J Uncertain Quan 2015; 5: 139-169.
  • (24) Huang Y, Beck JL and Li H. Hierarchical sparse Bayesian learning for structural damage detection: Theory, computation and application. Struct Saf 2017; 64: 37-53.
  • (25) Huang Y, Beck JL and Li H. Bayesian system identification based on hierarchical sparse Bayesian learning and Gibbs sampling with application to structural damage assessment. Comput Method Appl M 2017; 318: 382-411.
  • (26) Candes EJ and Tao T. Decoding by linear programming. IEEE T Inform Theory 2005; 51: 4203-4215.
  • (27) Rudy SH, Brunton SL, Proctor JL and Kutz JN. Data-driven discovery of partial differential equations. Science Advances 2017; 3(4): e1602614.
  • (28) Fox RL and Kapoor MP. Rates of change of eigenvalues and eigenvectors. AIAA J 1968; 6: 2426-2429.
  • (29) Sun H and Büyüköztürk O. Identification of traffic-induced nodal excitations of truss bridges through heterogeneous data fusion. Smart Mater Struct 2015; 24:075032.
  • (30) Yan G, Sun H and Büyüköztürk O. Impact load identification for composite structures using Bayesian regularization and unscented Kalman filter. Struct Control Health Monit 2017; 24(5):e1910.
  • (31) Yan G and Sun H. A non-negative Bayesian learning method for impact force reconstruction. J Sound Vib 2017; 457:354-367.
  • (32) Chen Z, Zhang R, Zheng J and Sun H. Sparse Bayesian learning for structural damage identification. Mech Syst Signal Process 2020; 140:106689.
  • (33) Sun H and Betti R. A Hybrid Optimization Algorithm with Bayesian Inference for Probabilistic Model Updating. Comput-Aided Civ Infrastruct Eng 2015; 30:602-619.
  • (34) Bishop CM. Pattern recognition and machine learning. Springer, 2006.
  • (35) Rosenblatt M. A central limit theorem and a strong mixing condition. PNAS 1956; 42: 43.
  • (36) Zhang L and Schaeffer H. On the convergence of the SINDy algorithm. Multiscale Model Sim 2019; 17:948-972.
  • (37) Snoek J, Larochelle H and Adams RP. Practical bayesian optimization of machine learning algorithms. Adv Neural Inf Process Syst 2012; 2951-2959.
  • (38) Rasmussen CE. Gaussian processes in machine learning. In Summer School on Machine Learning 2003; 63-71.
  • (39) Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Series B 1996; 58: 267–288.
  • (40) Moore H. MATLAB for Engineers. Pearson, 2017.
  • (41) Yuen KV, Beck JL and Katafygiotis LS. Efficient model updating and health monitoring methodology using incomplete modal data without mode matching. Struct Control Health Monit 2006; 13: 91-107.
  • (42) Juang JN, Phan M, Horta LG and Longman RW. Identification of observer Kalman filter Markov parameters–Theory and experiments. J Guid Control Dynam 1993; 16: 320-329.
  • (43) Juang JN and Pappa RS. An eigensystem realization algorithm for modal parameter identification and model reduction. J Guid Control Dynam 1985; 8: 620-627.
  • (44) Lus H, Betti R and Longman RW. Identification of linear structural systems using earthquake‐induced vibration data. Earthq Eng Struct Dyn 1999; 28: 1449-1467.
  • (45) Li XY and Law SS. Adaptive Tikhonov regularization for damage detection based on nonlinear model updating. Mech Syst Signal Process 2010; 24: 1646-1664.
  • (46) Marquardt DW and Snee RD. Ridge regression in practice. Am Stat 1975; 29: 3-20.
  • (47) Sun H and Büyüköztürk O. Probabilistic updating of building models using incomplete modal data. Multiscale Model Sim 2019; 17:948-972.
  • (48) Yu LC and Lin TK. Application of bio-informatics based technology on structural health monitoring. Technical Report NCREE-10-005. National Center for Research on Earthquake Engineering, 2010.
  • (49) Sun H, Mordret A, Prieto GA and Toksöz MN and Büyüköztürk O. Bayesian characterization of buildings using seismic interferometry on ambient vibrations. Mech Syst Sig Process 2017; 85:468-486.
  • (50) Sun H and Büyüköztürk O. The MIT Green Building benchmark problem for structural health monitoring of tall buildings. Struct Control Health Monit 2018; 25(3): e2115.