Anomaly-Sensitive Dictionary Learning for Unsupervised Diagnostics of Solid Media

05/11/2014 ∙ by Jeffrey M. Druce, et al. ∙ University of Minnesota

This paper proposes a strategy for the detection and triangulation of structural anomalies in solid media. The method revolves around the construction of sparse representations of the medium's dynamic response, obtained by learning instructive dictionaries which form a suitable basis for the response data. The resulting sparse coding problem is recast as a modified dictionary learning task with additional spatial sparsity constraints enforced on the atoms of the learned dictionaries, which provides them with a prescribed spatial topology that is designed to unveil anomalous regions in the physical domain. The proposed methodology is model agnostic, i.e., it forsakes the need for a physical model and requires virtually no a priori knowledge of the structure's material properties, as all the inferences are exclusively informed by the data through the layers of information that are available in the intrinsic salient structure of the material's dynamic response. This characteristic makes the approach powerful for anomaly identification in systems with unknown or heterogeneous property distribution, for which a model is unsuitable or unreliable. The method is validated using both synthetically generated wavefield data and experimental data acquired with a scanning laser Doppler vibrometer.


I Introduction

The past three decades have witnessed the advent of a variety of structural diagnostics methodologies based on guided waves Staszewski et al. (2004); Rose (2002). The detection and triangulation principle underlying all methods based on guided waves (the pulse-echo or pitch-catch approaches Ihn and Chang (2008)) can be seen as a structural analog of the radar problem: waves are generated and received by transmitter-receiver pairs distributed over the test specimen, a signature of wave scattering is captured along each transmitter-receiver path, and the position of the defect is subsequently triangulated using data from multiple transducers. Numerous efforts have been dedicated to the construction of damage location estimators from measurements acquired by sparse arrays of sensors. Popular approaches include statistical methods Flynn et al. (2011), acoustic imaging techniques Michaels et al. (2005), singular value decomposition Liu et al. (2012); Lu et al. (2008), spatial optimization of the sensor networks Wang and Yuan (2009), and methods based on the time reversal operator Prada and Fink (1994); Foroozan and Asif (2011).

Pitch-catch methods allow anomaly triangulation using parsimonious sensor data, which makes them ideally suited for online or in-situ SHM applications Kessler et al. (2002); Kirikera et al. (2011), where it is crucial that the acquisition system is highly portable and easily deployable. On the other hand, like all methods based on the radar paradigm, they can suffer from major weaknesses when the assumptions on the ideality of the medium are relaxed, a common scenario in the context of aging materials and damage formation. These methods in fact rely on the ability to detect the individual scattered signals, estimate with some precision the associated times of flight, and finally triangulate the scattering sources by applying some direct knowledge of the medium properties (e.g., the wave speed). These tasks are often hard to accomplish in the case of highly heterogeneous media, materials with extreme internal complexity (e.g., random microstructures) or materials experiencing severe property degradation, for which a material model is either unknown or unreliable.

Recently, the field of structural diagnostics has been flooded with methodologies originally developed within the field of machine learning (ML) Farrar and Worden (2013). The picture that emerges is one of a fast-growing field that is quickly incorporating inputs from parallel disciplines. Examples of ML techniques used in support of wave-based diagnostics include neural networks Zhongqing and Ye (2004), matching pursuit decomposition Lu and Michaels (2007); Das et al. (2005, 2009); Mallat and Zhang (1993), support vector machines Das et al. (2007) and compressive sensing Azimipanah and ShahbazPanahi (2013). It is worth pointing out that the majority of the existing ML approaches for diagnostics involve supervised learning techniques for feature classification, whose use may be hindered by the need for large training data sets and databases.

In parallel, another powerful class of diagnostic methodologies has stemmed from the availability of laser-based acquisition systems Sharma et al. (2006); Michaels et al. (2011). By means of a Scanning Laser Doppler Vibrometer (SLDV) it is possible to perform non-contact measurements of the velocity of points belonging to a (potentially very dense) scanning grid defined on an object's surface, which enables full spatial reconstruction of its vibration or wave response. A number of dedicated image processing techniques have been developed in conjunction with laser experiments to meet desired identification and visualization criteria; among them are methods based on space-time DFT Basri and Chiu (2004); Alleyne and Cawley (1991), wavenumber-space filters Ruzzene (2007) and Laplace filters Sohn et al. (2011); An et al. (2013). In sharp contrast with the radar triangulation approach, laser-based diagnostics promote a different paradigm where the inference is performed directly on a data-rich, spatially reconstructed response. While the acquisition of richer data poses additional challenges in terms of sensing system requirements, it opens new avenues for inference strategies with superior accuracy and robustness.

The possibility to turn the attention from analyzing sparse time history data to operating upon spatially reconstructed waveforms, coupled with the availability of algorithms to mine complex data structures, represents a significant shift in perspective for the diagnostics problem. At the core of this new approach is the notion that, from a data standpoint, a wavefield is essentially a data cube, slices of which represent snapshots of the dynamic response at different time instants. Treating an evolving wavefield as a collection of images immediately presents the opportunity to revisit the anomaly detection problem with the mindset and methodology of image processing and computer vision (CV) Itti et al. (1998); Itti and Koch (2001); Yan et al. (2010); Shen and Wu (2012). For example, the problem of detecting anomalies in the physical medium has a data equivalent in the problem of identifying atypical patterns in the data structure Patcha and Park (2007); Chandola et al. (2009). One of the essential concepts behind this approach is the notion that, in every region of the domain that is sufficiently far from a defect, the displacement time histories will exhibit some similar, but unknown, “typical” behavior, while the time histories recorded in spatial regions in the immediate vicinity of a defect will exhibit some (also unknown) signature of the defect that is different from the typical response observed in the bulk of the domain. The regions exhibiting atypical behavior are referred to as salient. When only a few regions exhibit atypical behavior, the notion of saliency can be viewed as a generalization of the concept of sparsity, which has played a central role in signal processing, statistics, and machine learning research in recent years; see, e.g., Chen et al. (2001); Candes et al. (2006); Donoho (2006); Candès and Tao (2006); Haupt and Nowak (2006); Tropp (2004, 2006); Bruckstein et al. (2009). In the context of wave-based structural diagnostics, an approach based on notions of saliency has been recently investigated in Gonella and Haupt (2013).
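From a computational standpoint, this data-cube view amounts to a simple reshaping: each snapshot is vectorized and stacked as a column of a response matrix. A minimal Python sketch of this step, with purely illustrative array names and dimensions, is the following:

import numpy as np

# Illustrative dimensions: an ny-by-nx scan grid sampled at T time instants.
ny, nx, T = 64, 64, 500
wavefield = np.random.randn(ny, nx, T)   # stand-in for measured snapshots

# Each slice wavefield[:, :, t] is one "image" of the evolving response.
# Vectorizing every snapshot yields the N x T data matrix X (N = ny * nx)
# whose columns are the spatial samples at successive time instants.
N = ny * nx
X = wavefield.reshape(N, T)

# Recovering the t-th snapshot as an image is the inverse operation.
snapshot = X[:, 10].reshape(ny, nx)
assert np.array_equal(snapshot, wavefield[:, :, 10])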

The concept of sparsity is also at the core of recent efforts in dictionary learning methods (e.g., Olshausen and Field (1997); Lewicki et al. (1998); Kreutz-Delgado et al. (2003); Aharon et al. (2006); Mairal et al. (2010)), whose objective is to seek a sparse decomposition of a data set in terms of a basis (typically, but not necessarily, overcomplete) that is learned directly from the data. The idea of representing a signal using a small set of atoms of a learned dictionary instead of prescribed basis functions, such as sinusoids or wavelets, has been shown to be a powerful and versatile tool in many signal and image processing applications, including image denoising and restoration, texture synthesis and classification Mairal et al. (2008); Peyré (2009); Ramirez et al. (2010), audio classification Zubair et al. (2013) and source separation Rambhatla and Haupt (2013), and medical imaging Li (2012).

In this paper, we introduce a sparse coding approach to the structural diagnostics task based on the reinterpretation of the dictionary learning problem as a generalized, anomaly-sensitive form of modal analysis. We show that we can steer the outcome of the sparse coding problem toward the identification of anomalous features in the medium through the introduction of a data model whose parameters embody features associated with certain morphological, structural and/or behavioral characteristics that we presume should be exhibited in the response data. The resulting inference problem is fully model-agnostic and baseline-free, in that it does not require any a priori knowledge of the material model of the medium (e.g., governing equations, material properties); therefore our proposed approach is well suited to study media whose material model is unknown, due for example to heterogeneity in the property distribution, or unreliable, as in the presence of material degradation extended over large portions of the domain. The approach is also database-free, as the construction of the data model is unsupervised, i.e., does not involve any training sets.

The remainder of the paper is organized as follows. In Section II we present the dictionary learning paradigm as a natural (and versatile) generalization of conventional modal analysis. In Section III we show how the sparse coding problem can be modified to capture localized features in the response data. The effectiveness of the method is demonstrated in Section IV using synthetic data as well as data acquired using a scanning laser Doppler vibrometer. In Section V we provide some final remarks and we discuss a few possible directions for future work.

II Dictionary Learning as Generalized Modal Analysis

In the context of structural dynamics, it is well known that the arbitrary motion of an undamped continuous system can be described as a linear combination of its natural modes, i.e., the dynamic displacement field can be expressed as

$u(\mathbf{x},t) = \sum_{m=1}^{\infty} a_m \, \phi_m(\mathbf{x}) \cos(\omega_m t + \theta_m)$     (1)

where $\phi_m$ are the mode shapes, which are functions of the position vector $\mathbf{x}$, $\omega_m$ are the natural frequencies of the system, and $a_m$, $\theta_m$ are modal amplitudes and phases that are determined from the initial conditions of the problem. If the summation in Eq. 1 is truncated to $M$ terms, it yields an approximation of the displacement field. If we consider a discrete system with $N$ degrees of freedom (e.g., a lumped-parameter system or a continuous system upon spatial discretization), whose response is sampled at $T$ discrete time instants, we can update Eq. 1 and write the motion at a time instant $t_j$ as

$\mathbf{u}(t_j) = \sum_{m=1}^{N} a_m \, \boldsymbol{\phi}_m \cos(\omega_m t_j + \theta_m)$     (2)

where $\mathbf{u}(t_j)$ is an array of $N$ degrees of freedom and the $\boldsymbol{\phi}_m$'s are $N \times 1$ arrays representing the vectorized discrete mode shapes. We note that the exact response aggregates $N$ modal contributions, i.e., the number of modes equals the number of degrees of freedom. The modal decomposition is often recast in the form of a matrix multiplication as $\mathbf{X} = \boldsymbol{\Phi}\mathbf{A}$, where $\mathbf{X} \in \mathbb{R}^{N \times T}$ is the matrix of the response data, containing a length-$T$ discrete time history for each of the $N$ degrees of freedom, $\boldsymbol{\Phi} \in \mathbb{R}^{N \times N}$ is the (square) modal matrix whose $m$-th column corresponds to the vectorized mode shape $\boldsymbol{\phi}_m$, and $\mathbf{A} \in \mathbb{R}^{N \times T}$ is a matrix whose $m$-th row corresponds to the time-harmonic evolution of the $m$-th modal coordinate; the modal amplitudes are assumed absorbed into the rows of $\mathbf{A}$. In short, the modal decomposition can be expressed as a representation in terms of a purely spatial component (a matrix of discrete mode shapes) and a purely temporal one (a matrix of discrete harmonic functions). This decomposition is schematically illustrated in Fig. 1.

Figure 1: Matrix representation of conventional modal decomposition. Shaded columns of $\boldsymbol{\Phi}$ represent spatial functions associated with the digitized time functions appearing as shaded rows of $\mathbf{A}$.
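To make the decomposition of Fig. 1 concrete, the following sketch assembles the response matrix of a small, purely illustrative lumped-parameter chain from its modes: the generalized eigenproblem $\mathbf{K}\boldsymbol{\phi} = \omega^2 \mathbf{M}\boldsymbol{\phi}$ yields the modal matrix, and the response is the product of the spatial and temporal factors.

import numpy as np
from scipy.linalg import eigh

# Toy lumped-parameter system: a fixed-fixed chain of N unit masses
# (all values illustrative).
N, T = 8, 200
M = np.eye(N)                                          # mass matrix
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # stiffness matrix

# Generalized eigenproblem K phi = omega^2 M phi: mode shapes and frequencies.
omega2, Phi = eigh(K, M)   # Phi is the N x N modal matrix (columns: mode shapes)
omega = np.sqrt(omega2)

# Harmonic modal coordinates; amplitudes (set by initial conditions) are
# absorbed into the rows of A, as in the text.
t = np.linspace(0.0, 10.0, T)
a = np.random.randn(N)                                 # modal amplitudes
A = a[:, None] * np.cos(omega[:, None] * t[None, :])   # N x T temporal factor

# Purely spatial factor times purely temporal factor: the response data matrix.
X = Phi @ A                                            # N x T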

As a generalization of this approach, we may wish to construct an alternative approximate representation of the response which is exclusively learned from the response data (without the need for direct knowledge of the system’s mechanical properties) but, unlike modal decomposition, relaxes a priori assumptions on the mathematical form of the basis functions. In analogy with the description in terms of modes, we seek a representation that still involves a pair of spatial and temporal matrices, but in which the orthogonality constraint on the columns of the spatial matrix has been relaxed. In compact form, we write

$\mathbf{X} \approx \mathbf{D}\mathbf{C}$     (3)

where the matrix $\mathbf{D} \in \mathbb{R}^{N \times K}$ is a dictionary matrix, whose columns $\mathbf{d}_j$ (called atoms) form a set of spatial basis functions, and $\mathbf{C} \in \mathbb{R}^{K \times T}$ is a corresponding coefficient matrix, whose rows may still be viewed as digitized functions of time. In this representation, the columns of $\mathbf{D}$ play a role that is functionally analogous to the mode shapes $\boldsymbol{\phi}_m$, therefore we refer to the atoms of $\mathbf{D}$ as “pseudo-mode” shapes of the system.

Factorizations of the form of Eq. (3) are, of course, not unique without further qualifications; indeed, even the discretized form of standard modal decomposition may be written using this formalism. In what follows, we adopt this type of general factorization model in order to facilitate “tuning” of our learned data representations, so that they become inherently sensitive to wave propagation characteristics that are indicative of material defects. We can encapsulate such characteristics in terms of the structure(s) that we prescribe to or enforce upon the learned factors $\mathbf{D}$ and $\mathbf{C}$. Suppose that we specify classes of candidate dictionaries and candidate coefficient matrices, denoted by $\mathcal{D}$ and $\mathcal{C}$ respectively, so that each $\mathbf{D} \in \mathcal{D}$ and each $\mathbf{C} \in \mathcal{C}$ exhibits structural characteristics that we wish to impose on the dictionary atoms and their coefficients. Then, given $\mathbf{X}$, the aim of the representation task becomes to find specific factors $\mathbf{D} \in \mathcal{D}$ and $\mathbf{C} \in \mathcal{C}$, such that $\mathbf{X} \approx \mathbf{D}\mathbf{C}$. Formally, our approach will be to identify these factors by solving a (constrained) version of a least-squares problem of the form

$\min_{\mathbf{D} \in \mathcal{D},\, \mathbf{C} \in \mathcal{C}} \sum_{t=1}^{T} \|\mathbf{x}_t - \mathbf{D}\mathbf{c}_t\|_2^2$     (4)

where $\mathbf{x}_t$ and $\mathbf{c}_t$ denote the $t$-th columns of $\mathbf{X}$ and $\mathbf{C}$, respectively, and the notation $\|\cdot\|_2^2$ denotes the squared $\ell_2$, or Euclidean, norm.

This line of thinking is motivated by recent efforts in dictionary learning Olshausen and Field (1997); Lewicki et al. (1998); Kreutz-Delgado et al. (2003); Aharon et al. (2006); Mairal et al. (2010) whose objectives are factorizations characterized by dictionaries that may be overcomplete (having more columns than rows) with corresponding coefficient matrices that are sparse (having relatively few nonzero entries). In terms of (4) above, dictionary learning tasks may be described as optimizations over a set $\mathcal{D}$ of matrices having $N$ rows and some user-specified number of columns (say $K$), and a corresponding set $\mathcal{C}$ of coefficient matrices having no more than $k$ nonzeros per column. Enforcing sparsity on the columns of $\mathbf{C}$ may be accomplished by imposing a set of constraints of the form $\|\mathbf{c}_t\|_0 \leq k$ for all $t$, where the notation $\|\cdot\|_0$ denotes the $\ell_0$ or counting norm, which essentially measures how many of the entries of its argument are nonzero, and $k$ is a specified sparsity level. (Strictly speaking, the $\ell_0$ function does not satisfy all of the required characteristics for it to be a proper norm. In particular, it fails the homogeneity property, in that for a vector $\mathbf{v}$ we do not have $\|\alpha\mathbf{v}\|_0 = |\alpha|\,\|\mathbf{v}\|_0$ for all constants $\alpha$. Nevertheless, it has become common in the sparse inference literature to use the norm descriptor for this function; we adopt the same convention here.)

Optimizations of this form (having $\ell_0$ constraints) are well-known to be combinatorial in nature. Thus, modern dictionary learning efforts either resort to greedy methods Aharon et al. (2006), or attempt to relax the $\ell_0$ constraint on each column to yield computationally tractable constraints. Often, this entails replacing the $\ell_0$ norm arising in the constraints on the columns of $\mathbf{C}$ by its closest convex surrogate, the $\ell_1$ norm Mairal et al. (2009). In terms of the terminology introduced for our problem, this latter approach would prescribe casting the dictionary learning task in terms of an optimization of the form

$\min_{\mathbf{D} \in \mathcal{D},\, \mathbf{C} \in \mathcal{C}} \sum_{t=1}^{T} \left( \|\mathbf{x}_t - \mathbf{D}\mathbf{c}_t\|_2^2 + \lambda \|\mathbf{c}_t\|_1 \right)$     (5)

where $\mathcal{C}$ is the set of all $K \times T$ matrices, $\mathcal{D}$ is the set of $N \times K$ matrices satisfying $\|\mathbf{d}_j\|_2 \leq 1$ for each column $\mathbf{d}_j$, and $\|\mathbf{c}_t\|_1$ denotes the $\ell_1$ norm of $\mathbf{c}_t$. Here, $\lambda$ is a (user-specified) regularization parameter that trades off a global “goodness of fit” of the approximation, as quantified by the $\|\mathbf{x}_t - \mathbf{D}\mathbf{c}_t\|_2^2$ term in the objective function, with the $\ell_1$ term on the columns of $\mathbf{C}$; larger values of $\lambda$ tend to result in sparser $\mathbf{C}$ having fewer nonzeros per column. We now propose an extension of the (sparse) dictionary learning paradigm, in which we incorporate additional structural characteristics into the dictionary atoms to be identified, so that they be especially receptive to local deviations in the measured wavefields (and in turn, highly receptive to the wave propagation characteristics in the neighborhoods of local material anomalies). We motivate and describe this approach in the following section.
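For reference, optimizations of the form of Eq. (5) are implemented in off-the-shelf sparse coding tools. The sketch below uses scikit-learn's DictionaryLearning, which solves the same $\ell_1$-regularized objective with unit-norm atoms; it is offered purely as an illustration (the computations reported later in this paper rely on SPAMS), and all dimensions are made up for the example.

import numpy as np
from sklearn.decomposition import DictionaryLearning

# X: N x T response matrix (spatial samples by time instants), illustrative.
N, T, K = 400, 300, 50
X = np.random.randn(N, T)

# scikit-learn treats rows as samples, so the snapshots x_t (columns of X)
# are passed as the rows of X.T; alpha plays the role of lambda in Eq. (5).
dl = DictionaryLearning(n_components=K, alpha=1.0,
                        fit_algorithm='cd', transform_algorithm='lasso_cd')
C = dl.fit_transform(X.T).T   # K x T sparse coefficient matrix
D = dl.components_.T          # N x K dictionary (columns are the atoms)

fit_error = np.linalg.norm(X - D @ C)   # global goodness of fit
sparsity = np.mean(C != 0)              # fraction of nonzero coefficients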

III Dictionary Learning for Anomaly Detection

III.1 Dictionary learning for local feature extraction

The objective of anomaly detection is to discover regions that behave abnormally, i.e., whose behavior significantly deviates from that of their surroundings. In the realm of structural diagnostics, the term anomaly encompasses a wide variety of geometrical and material abnormalities, including damage zones, manufacturing defects and inclusions. In this discussion, we pursue the inference and triangulation of structural anomalies through the analysis of the structure’s dynamic response, which consists of a collection of displacement time histories acquired at a set of discrete points on the structure’s surface. Our contention is that it is possible to detect the signature of these anomalies by learning appropriate dictionaries of the response data and by decoding the spatial information contained in their data structure.

We assume that the regions containing physical anomalies are characterized by local perturbations of their acoustic properties (e.g. elastic moduli, density); these, in turn, induce localized features in the response, which are reflected, although possibly difficult to detect, in the kinematic time histories of the material points which lie inside the anomalous regions. If we invoke the interpretation of the atoms of a dictionary as pseudo modes (i.e., spatial descriptors of deformation), we can assume that the effect of localization would manifest as spiky regions in one or more of the atoms, which, from a data standpoint, would correspond to sparse structures in the columns of the dictionary. We conclude that, in order to equip a dictionary with the ability to detect anomalies, we need to formally enforce some kind of sparsity constraint on its atoms. On the other hand, the localized features associated with the anomalies coexist in the response with smoother fields describing the global (and dominant) behavior of the structure; therefore, a dictionary that properly captures an anomaly is unlikely to provide a sufficiently accurate representation of the response field as a whole, and vice versa. In order to reconcile this dichotomy, we propose a two-dictionary representation of the form

$\mathbf{X} \approx \mathbf{D}_s\mathbf{C}_s + \mathbf{D}_d\mathbf{C}_d$     (6)

as illustrated in Fig. 2.

Figure 2: Schematic illustration of the two-dictionary approximate representation $\mathbf{X} \approx \mathbf{D}_s\mathbf{C}_s + \mathbf{D}_d\mathbf{C}_d$, with $\mathbf{D}_s \in \mathbb{R}^{N \times K_s}$, $\mathbf{D}_d \in \mathbb{R}^{N \times K_d}$.

This can be alternatively interpreted as a decomposition of $\mathbf{D}$ into two separate dictionaries, so that $\mathbf{D} = [\mathbf{D}_s \; \mathbf{D}_d]$, where the dictionary $\mathbf{D}_s$ is dedicated to capturing the localized features of the response and the dictionary $\mathbf{D}_d$ guarantees that the bulk response is sufficiently well approximated. Each dictionary is obtained through a dedicated optimization problem with appropriate constraints: specifically, for a user-specified constant $\gamma$, we impose on the atoms of $\mathbf{D}_s$ the sparsity-promoting constraint $\|\mathbf{d}_j\|_2^2 + \gamma\|\mathbf{d}_j\|_1 \leq 1$ for each column $\mathbf{d}_j$, while we retain the original column-wise constraints from the original dictionary learning formulation on the columns of $\mathbf{D}_d$ (that $\|\mathbf{d}_j\|_2 \leq 1$ for all $j$). As we will see, the constraint on $\mathbf{D}_d$ tends to yield diffuse atoms, which de facto display spatial smoothness, while the additional $\ell_1$-based constraint promotes sparsity on the columns of $\mathbf{D}_s$. Together, $\mathbf{D}_s$ and $\mathbf{D}_d$ form a representation that encompasses the dominant, smooth dynamic behavior as well as the spatially sparse signature of potential anomalies. The search for $\mathbf{D}_s$ and $\mathbf{D}_d$ is done iteratively as detailed in Algorithm 1.

Input: Data cube $\mathbf{X}$
           Number of atoms in dictionaries: $K_d$, $K_s$
           Regularization parameters: $\lambda_d$, $\lambda_s$
           Sparsity parameter $k_{\max}$
           Increment parameter $\Delta\gamma$
Output: Dictionaries $\mathbf{D}_s$ (sparse) and $\mathbf{D}_d$ (diffuse)
Initialize: $\gamma \leftarrow \gamma_0$; $\mathbf{D}_s\mathbf{C}_s \leftarrow \mathbf{0}$;
$(\mathbf{D}_d, \mathbf{C}_d) \leftarrow \arg\min_{\mathbf{D},\mathbf{C}} \sum_{t=1}^{T} \|\mathbf{x}_t - \mathbf{D}\mathbf{c}_t\|_2^2 + \lambda_d\|\mathbf{c}_t\|_1$     (7)
subject to $\|\mathbf{d}_j\|_2 \leq 1$ for all $j$
while the atoms of $\mathbf{D}_s$ contain more than $k_{\max}$ nonzero entries (first iteration always executed) do
      $\gamma \leftarrow \gamma + \Delta\gamma$
      $(\mathbf{D}_d, \mathbf{C}_d) \leftarrow \arg\min_{\mathbf{D},\mathbf{C}} \sum_{t=1}^{T} \|(\mathbf{x}_t - \mathbf{D}_s\mathbf{c}_{s,t}) - \mathbf{D}\mathbf{c}_t\|_2^2 + \lambda_d\|\mathbf{c}_t\|_1$     (8)
      subject to $\|\mathbf{d}_j\|_2 \leq 1$ for all $j$
      $(\mathbf{D}_s, \mathbf{C}_s) \leftarrow \arg\min_{\mathbf{D},\mathbf{C}} \sum_{t=1}^{T} \|(\mathbf{x}_t - \mathbf{D}_d\mathbf{c}_{d,t}) - \mathbf{D}\mathbf{c}_t\|_2^2 + \lambda_s\|\mathbf{c}_t\|_1$     (9)
      subject to $\|\mathbf{d}_j\|_2^2 + \gamma\|\mathbf{d}_j\|_1 \leq 1$ for all $j$
end while
return $\mathbf{D}_s$, $\mathbf{D}_d$
Algorithm 1: Iterative procedure to learn two dictionaries (sparse and diffuse) from the response data.

In the minimization problem of Eq. 9, the constraints on the individual columns of the dictionary $\mathbf{D}_s$ are of the form $\|\mathbf{d}_j\|_2^2 + \gamma\|\mathbf{d}_j\|_1 \leq 1$. Note that the parameter $\gamma$ effectively governs the number of non-zero terms in each atom. The larger $\gamma$, the more stringent the sparsity constraint and the more zero elements in each atom, which in turn makes the spatially sparse features more prevalent in the atoms of $\mathbf{D}_s$. However, if $\gamma$ is too large, the constraint becomes excessively stringent and the cost function is trivially minimized with $\mathbf{D}_s = \mathbf{0}$. On the other hand, if $\gamma$ is too small, the constraint is not sufficiently invoked and the atoms of $\mathbf{D}_s$ effectively lose the ability to highlight any localized features. The appropriate value of $\gamma$ is determined iteratively, starting with a low initial guess and increasing it by an amount $\Delta\gamma$ until the columns of $\mathbf{D}_s$ have a sufficiently low number of non-zero entries. In our implementation, the solution of the individual dictionary learning problems involved at each stage of iteration is carried out using the open-source sparse modeling software SPAMS (available at http://spams-devel.gforge.inria.fr).
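A minimal sketch of this iterative tuning of $\gamma$ is given below, assuming the SPAMS Python bindings, in which (to the best of our reading of its documentation) trainDL with modeD=1 and parameter gamma1 enforces a column constraint of the form $\|\mathbf{d}\|_2^2 + \gamma\|\mathbf{d}\|_1 \leq 1$; exact parameter names and defaults may differ across SPAMS versions, so the snippet should be read as an illustration rather than as the exact implementation used here.

import numpy as np
import spams   # open-source sparse modeling toolbox (see link above)

def learn_sparse_dictionary(R, K_s, lam_s, k_max, d_gamma, gamma0=0.0):
    # Learn the sparse dictionary D_s from the residual data R (N x T),
    # increasing gamma by d_gamma until every atom has at most k_max
    # nonzero entries.  Assumption: modeD=1 with gamma1=gamma imposes
    # ||d||_2^2 + gamma*||d||_1 <= 1 on each atom.
    R = np.asfortranarray(R)          # SPAMS expects Fortran-ordered arrays
    gamma = gamma0
    while True:
        gamma += d_gamma
        D_s = spams.trainDL(R, K=K_s, lambda1=lam_s, iter=200,
                            modeD=1, gamma1=gamma)
        nnz = np.count_nonzero(np.abs(D_s) > 1e-8, axis=0)  # nonzeros per atom
        if np.allclose(D_s, 0.0):     # gamma overshot: atoms collapsed to zero
            raise RuntimeError('gamma too large; reduce d_gamma')
        if nnz.max() <= k_max:
            return D_s, gamma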

Let us recall at this point that the role of the regularization parameter $\lambda$ in the cost function of each dictionary learning problem is to control the trade-off between the accuracy of the approximation and the achievement of a parsimonious dictionary representation involving few atoms. In the present implementation, the selection of $\lambda$ has been conducted following a heuristic approach, such that the error in the approximation remained below acceptable bounds (expressed as a percentage of the norm of $\mathbf{X}$).

III.2 A super-atom approach to capture persistence and enhance anomaly detection

Important parameters of the algorithm are the user-defined parameters $K_s$ and $K_d$, which represent the assumed number of atoms in $\mathbf{D}_s$ and $\mathbf{D}_d$, respectively. If we let $K_s$ be large, we have the opportunity to examine a large number of sparse atoms. This richness of sparse descriptors can be exploited to better determine whether the system contains an anomaly. It is, in fact, possible that some sparse atoms may not display the “true” physical anomalies, but rather other spurious short-wavelength features, e.g. cusps associated with boundary effects, which may manifest as localized features on the perimeter of the structure, or sharp artifacts due to noise. Nevertheless, we note that the true anomalies are a persistent feature in the sparse dictionary, i.e., they are consistently observed across the set of sparse atoms and at consistent locations within each atom. In order to capture this persistence attribute, we propose a post-processing step designed to intelligently aggregate layers of data from multiple atoms in a way that emphasizes the most persistent features. The result of this step is a kind of super-atom that highlights spatial locations where persistent activity is present across a significant number of the sparse dictionary atoms.

The construction of the super-atom proceeds as follows. We consider a partition of the domain into rectangular regions (identical in size and shape). Since the atoms spatially span the entire domain, they are all partitioned in the same fashion, such that $\mathbf{d}_j^{(i)}$ denotes the $i$-th partition of the $j$-th atom. For each partition, we sweep the atoms of the sparse dictionary and check whether a feature is consistently observed in that partition across the set, by counting how many atoms contain at least one nonzero entry inside the selected partition. If this number is sufficiently large, we aggregate local contributions from all the atoms in the dictionary to form the corresponding partition of the super-atom. Note that this criterion weighs (possibly relatively mild) contributions that are observed over a large number of atoms more heavily than others that may be prominent (amplitude-wise), but are observed only in a few atoms. This reflects the notion that the signature of physical anomalies is often elusive but persistent across the dictionary, while spurious sharp features, which can dominate the response amplitude-wise, are inconsistently detected across the dictionary. The construction of the super-atom (summarized in Algorithm 2) features two parts: the first implements the atom aggregation procedure; the second performs a search over the identified partitions, with the objective of automatically identifying, through amplitude thresholding, the partitions containing the anomalies. This last step is meant to forsake the need for visual inspection of the super-atom and de facto makes the anomaly identification fully automatic.

Input: Sparse dictionary $\mathbf{D}_s = [\mathbf{d}_1, \dots, \mathbf{d}_{K_s}]$
           Domain partition sizes: number of partitions $P$, partition length $L$
           Sparsity thresholds: $T_1$ (persistence), $T_2$ (amplitude)
Output: “Super-atom” $\mathbf{s}$ (with vectorized partitions $\mathbf{s}^{(i)}$)
Initialize: $\mathbf{s}^{(i)} \leftarrow \mathbf{0}$ for $i = 1, \dots, P$
Partition: Divide each $\mathbf{d}_j$, $j = 1, \dots, K_s$, into length-$L$ sub-vectors $\mathbf{d}_j^{(i)}$
for $i = 1$ to $P$ do
      if $\sum_{j=1}^{K_s} \mathbb{1}_{\{\|\mathbf{d}_j^{(i)}\|_0 > 0\}} \geq T_1$ then
            for $j = 1$ to $K_s$ do
                  if $\|\mathbf{d}_j^{(i)}\|_0 > 0$ then
                        $\mathbf{s}^{(i)} \leftarrow \mathbf{s}^{(i)} + |\mathbf{d}_j^{(i)}|$
                  end if
            end for
      end if
end for
Flag as anomalous the partitions whose aggregate amplitude exceeds $T_2$
return $\mathbf{s}$
Algorithm 2: Construction of the super-atom aggregating contributions from different sets of sparse atoms according to criteria of persistence. The function $\mathbb{1}_{\{\cdot\}}$ denotes the indicator function, which takes the value 1 when the event specified in the subscript is true, and 0 otherwise.
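In essence, the aggregation in Algorithm 2 is a masked sum over partitions; the numpy sketch below reflects our reading of the procedure (partition sizes and thresholds are illustrative, and the final amplitude thresholding is reduced to a simple argmax for brevity).

import numpy as np

def build_super_atom(D_s, part_len, count_thresh, eps=1e-8):
    # D_s: N x K_s sparse dictionary (columns are sparse atoms).
    # Keep only partitions in which a nonzero feature persists across at
    # least count_thresh atoms, then aggregate |contributions| there.
    N, K_s = D_s.shape
    P = N // part_len                  # number of equal-length partitions
    s = np.zeros(N)                    # the super-atom
    for i in range(P):
        block = D_s[i * part_len:(i + 1) * part_len, :]
        # indicator of "at least one nonzero entry in partition i", per atom
        active = (np.abs(block) > eps).any(axis=0)
        if active.sum() >= count_thresh:          # persistence check
            s[i * part_len:(i + 1) * part_len] = np.abs(block).sum(axis=1)
    return s

# Automatic identification of the most anomalous partition by amplitude, e.g.:
# i_star = np.argmax([np.linalg.norm(s[i*part_len:(i+1)*part_len])
#                     for i in range(len(s) // part_len)])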

Note that there may be instances where the algorithm does not find any partition that contains sufficiently persistent localized features; this scenario corresponds to the case of a pristine structure. The ability to avoid false positives in anomaly-free scenarios is a major strength of this methodology, in contrast with methods that do not explicitly control for false positives, e.g. our own previous work Gonella and Haupt (2013), which utilized a simpler anomaly detection and localization method based on principal component analysis.

The benefits of a super-atom representation are fully realized when we consider problems in which the identification of an anomaly is impossible, or ambiguous, even through the prism of the sparse atom dictionary. This scenario is encountered when the signature of the anomaly is very small, due, for example, to a critically small size of the defect or to competition (in terms of sparsity) from spurious response features, or in dealing with noisy data. In short, the signature of an anomaly may be minute, and hence not visually detectable, in the individual atoms, but can become very clear through its aggregation over many atoms.

It is worthwhile to note that our super-atom post-processing method departs somewhat from existing dictionary learning approaches, which typically utilize the atoms identified by the learning procedure(s) directly, without further refinement. Our motivation for adopting this additional step is twofold. First, as described above, the approach seeks to identify spatially persistent features in the response data by aggregating (in a nonlinear manner) features identified in the atoms of the sparse dictionary. In addition, we note that while dictionary learning problems are easy to motivate and specify, their highly non-convex nature makes their exact numerical solution computationally challenging. In practice, existing algorithmic approaches rely either on convex relaxation and alternating optimization or on greedy methods, and can only be guaranteed to converge to local minima of the corresponding objective function. In this sense, our post-processing step may be viewed as an augmentation designed to glean additional information from the computational solution of the dictionary learning problem.

IV Local Feature Extraction in a Transient Wavefield

In this section we provide several numerical demonstrations of the proposed anomaly identification method, on both simulated data and experimental data obtained using a scanning laser Doppler vibrometer.

IV.1 Results from numerical simulations

First we test the approach against the problem of circular-crested transverse waves excited in a thin plate by an out-of-plane point force applied at one node. We consider flexural waves modeled according to Mindlin's plate theory. The choice of flexural waves is here primarily motivated by the simplicity of the corresponding finite element simulation and the inherent simplifications resulting from having a single-mode wave solution. Nevertheless, since the method is based on the elaboration of the spatial patterns of time-evolving wavefields, without invoking any specific physical characteristics of the waves, the analysis would hold for other types of out-of-plane waves, such as Lamb waves and Rayleigh waves, and even for longitudinal and shear waves in thin structures exhibiting some in-plane deformation on the structure's surface. We use the finite element method (FEM) to construct the stiffness and mass matrices of the system and employ a time marching scheme to simulate the propagating wavefield. Our virtual specimen is a rectangular, thin aluminum sheet with Young's modulus E = 71 GPa and Poisson ratio ν = 0.33. The domain is discretized with a structured mesh comprising 400 × 200 square elements, which is verified to be sufficiently fine to avoid spurious numerics-induced dispersive effects or artificial noise in the data. The excitation signal is a 5-cycle tone burst, whose carrier frequency induces a wavefield with a wavelength that is well resolved by the mesh. At the end of the simulation, our data cube consists of a series of snapshots of the propagating wavefield at the selected time instants.

The anomalies are introduced in the model by relaxing the Young's modulus of the material by two orders of magnitude within small regions of the domain. This can model the effects of partial holes, soft inclusions or localized regions with degraded material properties. The anomalies behave as scatterers, which act as localized sources within the domain that are triggered after some delay with respect to the applied excitation. As the point source of excitation is itself a localized feature, we expect it to pose some ambiguity for the sparse coding algorithm. In order to (partially) filter out this effect and discriminate between the “true” anomalies and the excitation, we truncate the first time instants of the response (corresponding to the early stages of propagation), in which we expect the response to be dominated by the excitation, as the wavefield is localized in the neighborhood of the excitation point. For the same reason, a thin layer immediately adjacent to the boundary where the excitation is applied is a priori excluded from the analysis.
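This preprocessing, dropping the early snapshots dominated by the source and masking a layer of grid points adjacent to the excited boundary, amounts to simple slicing of the data cube before vectorization; a hypothetical sketch (all sizes illustrative):

import numpy as np

ny, nx, T = 200, 400, 600                # illustrative grid and record length
wavefield = np.random.randn(ny, nx, T)   # stand-in for simulated snapshots

t0 = 100       # early snapshots dominated by the excitation (discarded)
margin = 10    # grid columns excluded adjacent to the excited boundary

# Truncate early times and exclude the boundary layer, then vectorize.
trimmed = wavefield[:, margin:, t0:]
N = trimmed.shape[0] * trimmed.shape[1]
X = trimmed.reshape(N, T - t0)           # data matrix fed to the dictionary learner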

(a) Diffuse atoms (9 random samples of the diffuse dictionary)
(b) Sparse atoms (9 random samples of the sparse dictionary)
Figure 3: Samples of atoms from the diffuse and sparse dictionaries for a numerically generated wavefield with a point anomaly. Arrows assist the visualization of the spikes corresponding to the anomaly.
(a) Schematic of plate with point defect
(b) Super-atom highlighting anomalous region
(c) Schematic of plate with two point defects
(d) Super-atom highlighting anomalous regions
(e) Schematic of plate with point and line defect
(f) Super-atom highlighting anomalous regions
Figure 4: Anomaly detection and triangulation through the prism of super-atom representation. Point and line anomalies are successfully extracted from the wavefields and triangulated.

In Fig. 3 we show a sample of 9 atoms of a low-dimensional diffuse dictionary and a sample of 9 randomly selected atoms of a higher-dimensional sparse dictionary for a case with a single point anomaly. The diffuse atoms (Fig. 3(a)) essentially capture characteristic snapshots of the wavefield at different instants of the simulation. In contrast, the sparse dictionary (Fig. 3(b)) captures a spatially rarefied representation of the wavefield, in which several atoms display a distinguishable and isolated signature of the anomaly in the form of a localized spike in their topology; this signature is, however, quite elusive, as the anomalous feature is not ubiquitously observed across the entire set and the task of discriminating it from other speckles in each atom is prohibitive. As expected, the inference benefits vastly from a super-atom representation (Fig. 4(b)), in which the sparse features are weighted according to their persistence across the entire sparse dictionary. The sporadically occurring features that contaminate the sparse atoms are now filtered out and we are able to clearly pinpoint the anomaly, as visible from the comparison with the schematic of Fig. 4(a). We note that the anomaly can even be triangulated without visual inspection, as its host partition is automatically identified by the algorithm (and highlighted in the figure by a thicker border). In the remaining portion of Fig. 4 we further explore the performance of the method against cases with more challenging anomaly landscapes. In Figs. 4(c) and 4(d) we show the super-atom performance for a plate with two defects, where both anomalies are correctly detected and triangulated. The final case, in Figs. 4(e) and 4(f), tests whether a crack, which in this case can be thought of as a contiguous collection of anomalies, can be identified even in the presence of another scatterer. The crack is simulated by reducing the Young's modulus and density of the material by several orders of magnitude inside a two-element-thick horizontal layer of the domain. The signature of the crack is well highlighted in the super-atom; if we relax the search criterion and let the algorithm find the 4 most anomalous partitions, we are able to automatically identify the entire length of the defect.

IV.2 Results from experimental data

We now test the efficacy of the dictionary learning algorithm against experimental data through the benchmark problem of a thin plate with a localized defect, the physical analogue of the numerical case presented in the previous subsection. Two cases, using a similar specimen but with considerably different types of anomalies, are presented. Here we consider a square aluminum sheet and we apply a tone burst excitation. The induced wavefield is reconstructed from surface velocity data via a Scanning Laser Doppler Vibrometer (SLDV), the Polytec PSV-400-3D. Three scanning heads are used to shoot three independent laser beams onto each node of a predefined scanning grid, which allows capturing the out-of-plane and in-plane velocity components of the points on the plate's surface; in this work, we limit our analysis to the out-of-plane velocity component. The plate surface is discretized with an approximately square scanning grid consisting of 120 × 120 scan points. The grid is deliberately chosen to be very fine to enable resolution of minute features in the wavefield; experiments with coarser grids can be easily performed a posteriori by subsampling the data in space. To make the wavefield data more amenable to region partitioning, we apply linear interpolation to fit the original grid data to a Cartesian grid. Since surface roughness can cause signal dropouts, or temporary severe signal attenuation, built-in signal enhancement and speckle tracking features are enabled during the scan. Moreover, we average 75 realizations at each scan point to filter optical and/or mechanical noise (signal stacking). Despite the many precautions taken during the scan, the acquired signals are tainted by some mechanical and optical noise; the data is therefore cleaned by post-processing the time history signals with a band-pass filter with a 200 kHz center frequency and a bandwidth of 100 kHz. Time spacing between acquisitions at different points is enforced by a low repetition frequency, which allows for over 1000 boundary reflections between successive acquisitions and effectively allows for full attenuation of the wave before a new acquisition is made.
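The band-pass cleaning step can be reproduced with a standard zero-phase filter; a sketch using scipy is given below, where the sampling rate and filter order are our own assumptions rather than reported values.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 2.56e6                          # assumed sampling rate [Hz]
f_center, bandwidth = 200e3, 100e3   # filter specs quoted in the text
low, high = f_center - bandwidth / 2, f_center + bandwidth / 2

# Fourth-order zero-phase Butterworth band-pass (applied forward-backward).
sos = butter(4, [low, high], btype='bandpass', fs=fs, output='sos')

# X: N x T matrix of measured time histories (one row per scan point).
N, T = 14400, 1024
X = np.random.randn(N, T)            # stand-in for SLDV data
X_filtered = sosfiltfilt(sos, X, axis=1)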

(a) Diffuse atoms (random samples of the diffuse dictionary)
(b) Sparse atoms (random samples of the sparse dictionary)
Figure 5: Samples of atoms from the diffuse and sparse dictionaries for a laser-acquired wavefield with a strong scatterer. Arrows assist the visualization of the spikes corresponding to the anomaly.
(a) Schematic of plate with highlighted anomaly (picture detail) and scanned area
(b) Super-atom of the scanned area successfully identifying the anomaly
Figure 6: Anomaly detection and triangulation through the prism of super-atom representation. The defect corresponding to the hole is successfully triangulated.

In the first experimental case we consider a localized anomaly introduced in the form of a circular, flat-bottomed, non-through cylindrical hole, shown in detail in the schematic of Fig. 6(a). The hole is drilled on the back surface of the plate to a depth of approximately 0.0295 m, corresponding to a 75% reduction in local thickness, and is expected to behave as a strong scatterer. Nevertheless, it is worth pointing out that the scan is performed over a portion of the pristine front face of the plate (also highlighted in Fig. 6(a)), which shows no visual evidence of the defect; the experiment mimics diagnostics conditions encountered in the inspection of certain thin-walled structures, such as mounted aircraft panels, pipes, etc., for which it may be prohibitive to have direct optical access to the interior surfaces. In Fig. 5 we show the diffuse atoms and a randomly selected sub-sample of the 100 computed sparse atoms. Similar to the numerical case, the diffuse atoms (Fig. 5(a)) effectively display snapshots of the wavefield at different time instants of the experiment, while the sparse atoms (Fig. 5(b)) distill the sparse, or localized, features of the response. While the presence of a distinguishable localized feature is observed in several of the atoms at the location of the anomaly, the inference is again contaminated by other sparse features, which introduces an element of ambiguity. Through the lens of the super-atom, as seen in Fig. 6(b), the “true” anomaly is clearly displayed and highlighted.

(a) Diffuse atoms (random samples of the diffuse dictionary)
(b) Sparse atoms (random samples of the sparse dictionary)
Figure 7: Samples of atoms from the diffuse and sparse dictionaries for a laser-acquired wavefield with a weak scatterer. Arrows assist the visualization of the spikes corresponding to the anomaly.
(a) Schematic of plate with highlighted anomaly and scanned area
(b) Super-atom of the scanned area identifying source and anomaly
Figure 8: Anomaly detection and triangulation through the prism of super-atom representation. The defect corresponding to the glue residual is successfully triangulated.

Our second experimental case is an intriguing testimony to the method's ability to detect subtle changes induced in a wavefield by minute or superficial defects. The almost serendipitous way in which this detection was obtained speaks volumes about the agnostic capabilities of the method. A plate with specifications identical to the above (without the drilled hole) possessed, unbeknownst to us, a very thin, approximately 0.1 mm-thick deposit of glue, a remnant of an actuator that had previously been glued to its back surface (see picture detail in the schematic of Fig. 8(a)). A preliminary scan was performed for the purpose of calibrating the equipment and obtaining a preliminary characterization of the wavefield. Since we were ostensibly dealing with a pristine plate, we expected a radially symmetric, unperturbed wavefield. Even though some marginal distortion potentially ascribable to a weak scatterer was detected upon visual inspection, the observed wavefield initially confirmed the prediction of an essentially pristine structure. However, when the data was fed to the sparse coding algorithm, a number of localized features were displayed in the sparse atoms (Fig. 7(b)). One of them corresponds to the excitation point, which here lies in the middle of the domain, a region that is not a priori excluded from the analysis; this is not surprising, as excitation sources are inherently spatially localized features. However, we also noted that many of the sparse atoms contained a second feature above the point of excitation. This was emphasized in the super-atom representation (Fig. 8(b)), which indeed suggests the presence of a (weak) scatterer. Further inspection of the plate's back surface revealed the presence of the aforementioned defect at the location indicated by the super-atom.

V Concluding Remarks and Future Work

In this work we have introduced a methodology to detect and triangulate anomalies in the response of solid media based on criteria of spatial sparsity and enabled by sparse coding algorithms. The method represents, to the best of our knowledge, the first attempt to use structurally-tuned dictionary learning algorithms in the context of structural diagnostics and, in general, one of the few existing efforts aimed at performing anomaly detection from spatially reconstructed wavefields using techniques of computer vision and image processing. We have shown that the method can identify spatially localized anomalies in the data fields through a decomposition of the response into two sets of pseudo modes: diffuse ones, which capture the smooth part of the response, and sparse ones, which contain the signature of the anomalies. The sparse coding algorithms have been complemented with the assembly of super-atoms that intelligently aggregate the information from the sparse dictionaries to separate the localized features that are due to physical anomalies from those due to noise or other competing boundary-induced or numerics-induced mechanisms.

The method is crowned by a post-processing feature that allows a convenient virtual decomposition of the domain into rectangular partitions for automatic identification of the regions containing the anomalies. The benefits of an automatic interpretation that bypasses the need for direct visual observation of the wavefield (or of the atoms of its dictionaries) are felt in the context of possible multi-step sampling and detection procedures, where the inference would be made in several stages conducted over nested sub-domains, and with increased accuracy, to iteratively identify smaller and smaller subsets of the material domain that may contain anomalies. This sort of adaptive “coarse-to-fine” sampling strategy would enable agile and fast sensing and detection procedures, which could in turn enhance the applicability and competitiveness of image-processing-based diagnostics methods. This is the objective of current investigations; an account of this is left for future work.

Another interesting avenue for future work consists of testing the approach in the context of heterogeneous materials, in which the material properties could potentially feature large deviations from an ideal case even far from the to-be-detected anomalies. This type of scenario would ultimately testify to the agnostic properties of the approach, as in those cases the ability to triangulate anomalies without prior knowledge of the mechanical properties of the medium would be fully manifested.

References

  • Staszewski et al. [2004] W.J. Staszewski, C. Boller, and G. Tomlinson. Health monitoring of aerospace structures: Smart sensors and signal processing. Wiley & Sons, 2004.
  • Rose [2002] J.L. Rose. A baseline and vision of ultrasonic guided wave inspection potential. Journal of Pressure Vessel Technology, 124(3):273–282, 2002.
  • Ihn and Chang [2008] J-.B. Ihn and F-.K. Chang. Pitch-catch active sensing methods in structural health monitoring for aircraft structures. Structural Health Monitoring, 7(1):5–19, 2008.
  • Flynn et al. [2011] E. B. Flynn, M. D. Todd, P. D. Wilcox, B. W. Drinkwater, and A. J. Croxford. Maximum-likelihood estimation of damage location in guided-wave structural health monitoring. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 467(2133):2575–2596, 2011.
  • Michaels et al. [2005] T. E. Michaels, J. E. Michaels, B. Mi, and M. Ruzzene. Damage detection in plate structures using sparse ultrasonic transducer arrays and acoustic wavefield imaging. In AIP Conference Proceedings, volume 760, page 938, 2005.
  • Liu et al. [2012] L. Liu, S. Liu, and F.-G. Yuan. Damage localization using a power-efficient distributed on-board signal processing algorithm in a wireless sensor network. Smart Materials and Structures, 21(2):025005, 2012.
  • Lu et al. [2008] Y. Lu, X. Wang, J. Tang, and Y. Ding. Damage detection using piezoelectric transducers and the Lamb wave approach: II. Robust and quantitative decision making. Smart Materials and Structures, 17(2):025034, 2008.
  • Wang and Yuan [2009] Q. Wang and S. Yuan. Baseline-free imaging method based on new PZT sensor arrangements. Journal of Intelligent Material Systems and Structures, 20(14):1663–1673, 2009.
  • Prada and Fink [1994] C. Prada and M. Fink. Eigenmodes of the time reversal operator: A solution to selective focusing in multiple-target media. Wave Motion, 20(2):151–163, 1994.
  • Foroozan and Asif [2011] F. Foroozan and A. Asif. Time reversal based active array source localization. IEEE Transactions on Signal Processing, 59(6):2655–2668, 2011.
  • Kessler et al. [2002] S.S. Kessler, S.M. Spearing, and C. Soutis. Damage detection in composite materials using Lamb wave methods. Smart Materials and Structures, 11(2):269, 2002.
  • Kirikera et al. [2011] G.R. Kirikera, O. Balogun, and S. Krishnaswamy. Adaptive Fiber Bragg Grating Sensor Network for Structural Health Monitoring: Applications to Impact Monitoring. Structural Health Monitoring, 10(1):5–16, 2011.
  • Farrar and Worden [2013] C. Farrar and K. Worden. Structural health monitoring: A machine learning perspective. Wiley & Sons, 2013.
  • Zhongqing and Ye [2004] S. Zhongqing and L. Ye. Lamb wave-based quantitative identification of delamination in CF/EP composite structures using artificial neural algorithm. Composite Structures, 66(1):627–637, 2004.
  • Lu and Michaels [2007] Y. Lu and J. E. Michaels. Ultrasonic Signal Decomposition via Matching Pursuit with an Adaptive and Interpolated Dictionary. In D. O. Thompson and D. E. Chimenti, editors, Review of Progress in Quantitative Nondestructive Evaluation, volume 894 of American Institute of Physics Conference Series, pages 579–586, 2007.
  • Das et al. [2005] S. Das, A. Papandreou-Suppappola, X. Zhou, and A. Chattopadhyay. On the use of the matching pursuit decomposition signal processing technique for structural health monitoring. Smart Structures and Materials 2005: Smart Structures and Integrated Systems, 5764(1):583–594, 2005.
  • Das et al. [2009] S. Das, I. Kyriakides, A. Chattopadhyay, and A. Papandreou-Suppappola. Monte Carlo matching pursuit decomposition method for damage quantification in composite structures. Journal of Intelligent Material Systems and Structures, 20(6):647–658, 2009.
  • Mallat and Zhang [1993] S.G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Trans Signal Processing, 41(12):3397–3415, 1993.
  • Das et al. [2007] S. Das, A. N. Srivastava, and A. Chattopadhyay. Classification of damage signatures in composite plates using one-class SVMs. In Proc. IEEE Aerospace Conference, pages 1–19, 2007.
  • Azimipanah and ShahbazPanahi [2013] A. Azimipanah and S. ShahbazPanahi. Experimental results of compressive sensing based imaging in ultrasonic non-destructive testing. In 2013 IEEE 5th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pages 336–339, 2013.
  • Sharma et al. [2006] V. Sharma, S. Hanagud, and M. Ruzzene. Damage index estimation in beams and plates using laser vibrometry. AIAA Journal, 44(4):919–923, 2006.
  • Michaels et al. [2011] T.E. Michaels, J.E. Michaels, and M. Ruzzene. Frequency-wavenumber domain analysis of guided wavefields. Ultrasonics, 51(4):452–466, 2011.
  • Basri and Chiu [2004] R. Basri and W. K. Chiu. Numerical analysis on the interaction of guided Lamb waves with a local elastic stiffness reduction in quasi-isotropic composite plate structures. Composite Structures, 66:87–99, 2004.
  • Alleyne and Cawley [1991] D. Alleyne and P. Cawley. A two-dimensional Fourier transform method for the measurement of propagating multimode signals. Journal of the Acoustical Society of America, 89:1159–1168, 1991.
  • Ruzzene [2007] M. Ruzzene. Frequency-wavenumber domain filtering for improved damage visualization. Smart Materials and Structures, 16(6):2116, 2007.
  • Sohn et al. [2011] H. Sohn, D. Dutta, J.Y. Yang, M. DeSimio, S. Olson, and E. Swenson. Automated detection of delamination and disbond from wavefield images obtained using a scanning laser vibrometer. Smart Materials and Structures, 20(4):045017, 2011.
  • An et al. [2013] Y.-K. An, B. Park, and H. Sohn. Complete noncontact laser ultrasonic imaging for automated crack visualization in a plate. Smart Materials and Structures, 22(2):025022, 2013.
  • Itti et al. [1998] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1998.
  • Itti and Koch [2001] L. Itti and C. Koch. Computational modelling of visual attention. Nature, 2, March 2001.
  • Yan et al. [2010] J. Yan, M. Zhu, H. Liu, and Y. Liu. Visual saliency detection via sparsity pursuit. IEEE Signal Proc. Letters, 17(8), 2010.
  • Shen and Wu [2012] X. Shen and Y. Wu. A unified approach to salient object detection via low rank matrix recovery. In Proc. Computer Vision and Pattern Recognition, 2012.
  • Patcha and Park [2007] A. Patcha and J. Park. An overview of anomaly detection techniques: Existing solutions and latest technological trends. Computer Networks, 51(12), 2007.
  • Chandola et al. [2009] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Computing Surveys, 41(3), 2009.
  • Chen et al. [2001] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Rev., 43(1):129–159, 2001.
  • Candes et al. [2006] E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, February 2006.
  • Donoho [2006] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, April 2006.
  • Candès and Tao [2006] E. J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406–5425, December 2006.
  • Haupt and Nowak [2006] J. Haupt and R. Nowak. Signal reconstruction from noisy random projections. IEEE Transactions on Information Theory, 52(9):4036–4048, September 2006.
  • Tropp [2004] J. A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, 2004.
  • Tropp [2006] J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Transactions on Information Theory, 52(3):1030–1051, 2006.
  • Bruckstein et al. [2009] A.M. Bruckstein, D.L. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev., 51(1):34–81, 2009.
  • Gonella and Haupt [2013] S. Gonella and J.D. Haupt. Automated defect localization via low rank plus outlier modeling of propagating wavefield data. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 60(12):2553–2565, 2013.
  • Olshausen and Field [1997] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997.
  • Lewicki et al. [1998] M. S. Lewicki, T. J. Sejnowski, and H. Hughes. Learning overcomplete representations. Neural Computation, 12:337–365, 1998.
  • Kreutz-Delgado et al. [2003] K. Kreutz-Delgado, J. F. Murray, B. D. Rao, K. Engan, T.-W. Lee, and T. J. Sejnowski. Dictionary learning algorithms for sparse representation. Neural computation, 15(2):349–396, 2003.
  • Aharon et al. [2006] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006.
  • Mairal et al. [2010] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res., 11:19–60, 2010.
  • Mairal et al. [2008] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Transactions on Image Processing, 17(1):53–69, 2008.
  • Peyré [2009] G. Peyré. Sparse modeling of textures. Journal of Mathematical Imaging and Vision, 34(1):17–31, 2009.
  • Ramirez et al. [2010] I. Ramirez, P. Sprechmann, and G. Sapiro. Classification and clustering via dictionary learning with structured incoherence and shared features. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 3501–3508, 2010.
  • Zubair et al. [2013] S. Zubair, F. Yan, and W. Wang. Dictionary learning based sparse coefficients for audio classification with max and average pooling. Digital Signal Processing, 23(3):960–970, 2013.
  • Rambhatla and Haupt [2013] S. Rambhatla and J. Haupt. Semi-blind source separation via sparse representations and online dictionary learning. In Proc. Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, November 2013.
  • Li [2012] Y. Li. Dictionary learning based multitask image restoration. In Proc. Intl. Conf. on Image and Signal Processing, pages 364–368, 2012.
  • Mairal et al. [2009] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In Proc. ICML, 2009.