Wavelet methods for shape perception in electro-sensing

10/10/2013 ∙ by Habib Ammari, et al. ∙ École Normale Supérieure

This paper aims at presenting a new approach to the electro-sensing problem using wavelets. It provides an efficient algorithm for recognizing the shape of a target from micro-electrical impedance measurements. Stability and resolution capabilities of the proposed algorithm are quantified in numerical simulations.


Code repository: SIES (Shape identification in electro-sensing).

1 Introduction

The aim of electro-sensing is to learn geometric parameters and the material composition of a target via electrical measurements. In this paper, we suppose that the target is composed of a homogeneous material with a known electrical property and focus solely on the geometric problem. Geometric identification of a target may mean recognizing it from a collection of known shapes (up to rigid transformations and scaling), or reconstructing its boundary.

In the recent work [2], an approach based on a polynomial basis was proposed for the far-field measurement system. Using the Taylor expansion of the Green functions, the geometric information of the target is, on the one hand, encoded in features given by the action of a boundary integral operator on homogeneous polynomials of different orders; on the other hand, the measurement system is separated out as a linear operator relating these features to the data. The features are then extracted by solving a linear inverse problem and can be used to identify the target in a database. Unlike methods in electrical impedance tomography [9] that attempt to reconstruct the target directly, this approach is more effective and computationally more efficient for shape recognition.

From a more general point of view, the problem is, given the physical configuration of the measurement system, to choose the basis in which the features are represented and to extract them from the data for identification. The ill-posedness in electro-sensing is inherent to the diffusive character of the currents and cannot be removed by a change of basis. Nonetheless, the choice of basis can totally change the way in which information is organized in the features and the manner in which they should be reconstructed.

In this paper we present a new approach to electro-sensing for the near-field measurement system using a wavelet basis. Unlike the far-field measurement configuration, which is known to be exponentially unstable, the near-field measurement system is much more stable and the data reside in a higher dimensional subspace; hence one can expect to reconstruct more information about the target from the data. With the near-field measurement system, the new wavelet-based approach has advantages over the polynomial-based one, because the wavelet representation of the features is local and sparse, and directly reflects the geometric character of the target. Furthermore, the features can be effectively reconstructed by seeking a sparse solution via $\ell^1$ minimization, and the boundary of the target can be read off from the features, giving a new high-resolution imaging algorithm which is robust to noise.

This paper is organized as follows. In section 2 we give a mathematical formulation and present an abstract framework for electro-sensing. We introduce the basis of representation and deduce a linear system by separating the features from the measurement system. The question of the stability of the measurement system is discussed. In section 3 we summarize the essential results based on the polynomial basis developed in [2]. The wavelet basis and the new imaging algorithms, which are the main contributions of this paper, are presented in section 4, where we discuss some important properties of the wavelet representation and formulate the $\ell^1$ minimization problem for the reconstruction of the features. Numerical results are given in section 5, followed by a discussion in section 6. The paper ends with some concluding remarks.

2 Modelling of the electro-sensing problem

Let $D$ be an open bounded domain with $\mathcal{C}^2$-boundary that we want to characterize via electro-sensing. We suppose that $D$ is centered around the origin and has size $\delta$; furthermore, there exists an a priori known open bounded domain $\Omega$ such that $D$ is compactly contained in the convex envelope of $\Omega$ (in practice, both the center and the size of $D$ can be estimated using some location search algorithm [5, 17]). We also assume that the positive conductivity constant $\kappa \neq 1$ of $D$ is known and that the background conductivity is $1$. We denote by $\lambda := (\kappa+1)/(2(\kappa-1))$ the contrast.

A measurement system consists of sources $x_s$, $s = 1, \dots, N_s$, and receivers $x_r$, $r = 1, \dots, N_r$. The potential field $u_s$ generated by the point source at $x_s$ is the solution to the equation

$\nabla \cdot \big( (1 + (\kappa - 1)\chi_D)\, \nabla u_s \big) = \delta_{x_s} \ \text{ in } \mathbb{R}^2, \qquad u_s(x) - U_s(x) \to 0 \ \text{ as } |x| \to \infty, \tag{1}$

where $\chi_D$ is the indicator function of $D$ and $U_s := G(\cdot, x_s)$ is the background potential field, with $G(x, y) = \frac{1}{2\pi}\ln|x - y|$ the Green function of the Laplacian in $\mathbb{R}^2$. Similarly, we denote $U_r := G(\cdot, x_r)$.

The difference $u_s - U_s$ is the perturbation of the potential field due to the presence of $D$ in the background; evaluated at the receiver $x_r$, it gives the measurement

$V_{sr} = u_s(x_r) - U_s(x_r), \tag{2}$

which builds the multistatic response matrix $\mathbf{V} = (V_{sr})_{s,r}$ by varying the source and receiver pair. In this section, we show that, with the help of a bilinear form, the problem can be formulated as a linear system relating the data to the features of $D$.

2.1 Layer potentials and representation of the solution

Recall the single layer potential $\mathcal{S}_D$:

$\mathcal{S}_D[\phi](x) = \int_{\partial D} G(x, y)\,\phi(y)\, d\sigma(y), \qquad x \in \mathbb{R}^2, \tag{3}$

and the Neumann-Poincaré operator $\mathcal{K}_D^*$:

$\mathcal{K}_D^*[\phi](x) = \int_{\partial D} \frac{\partial G(x, y)}{\partial \nu(x)}\,\phi(y)\, d\sigma(y), \qquad x \in \partial D, \tag{4}$

where $\nu(x)$ is the outward normal vector at $x \in \partial D$. The operator $\mathcal{K}_D^*$ is compact on $L^2(\partial D)$ for a $\mathcal{C}^2$ domain and has a discrete spectrum contained in the interval $(-1/2, 1/2]$. Therefore, the operator $\lambda I - \mathcal{K}_D^*$ is invertible on $L^2(\partial D)$ for the constant $\lambda$ satisfying

$|\lambda| = \left|\frac{\kappa + 1}{2(\kappa - 1)}\right| > \frac{1}{2}. \tag{5}$

Moreover, its inverse is also bounded. An important relation is the jump formula:

$\frac{\partial}{\partial \nu}\,\mathcal{S}_D[\phi]\Big|_{\pm}(x) = \left(\pm\frac{1}{2} I + \mathcal{K}_D^*\right)[\phi](x), \qquad x \in \partial D, \tag{6}$

where $\partial/\partial\nu$ denotes the normal derivative on the boundary and the subscripts $+$ and $-$ indicate the limits of a function from outside and inside the boundary, respectively. Details on these operators can be found in [7].

With the help of these operators, the solution of (1) can be represented as

$u_s = U_s + \mathcal{S}_D[\phi_s] \quad \text{in } \mathbb{R}^2, \tag{7}$

with $\phi_s$ satisfying $(\lambda I - \mathcal{K}_D^*)[\phi_s] = \partial U_s/\partial\nu$ on $\partial D$. Therefore, the perturbed field can be expressed as

$u_s - U_s = \mathcal{S}_D\big[(\lambda I - \mathcal{K}_D^*)^{-1}[\partial U_s/\partial\nu]\big]. \tag{8}$

We assume in the sequel that $x_s \notin \partial D$ for all $s$, which is necessary for $\partial U_s/\partial\nu$ to be well defined on $\partial D$.

2.2 Bilinear form

We denote by $H^s(\partial D)$, for $s \in \mathbb{R}$, the standard Sobolev spaces on $\partial D$ and introduce the bilinear form $b_D$ defined as follows:

$b_D(u, v) := \int_{\partial D} (\lambda I - \mathcal{K}_D^*)^{-1}\Big[\frac{\partial u}{\partial \nu}\Big](y)\, v(y)\, d\sigma(y). \tag{9}$

By the boundedness of $(\lambda I - \mathcal{K}_D^*)^{-1}$ and of the trace operator, it follows that $b_D$ is bounded. Now, we observe from (2) and (8) that $V_{sr}$ can be rewritten as

$V_{sr} = b_D(U_s, U_r). \tag{10}$

2.2.1 Characterization of $D$ by $b_D$

One of the interests of $b_D$ is that it uniquely determines the domain $D$, as stated in the following result. Let $D_1, D_2$ be open bounded domains with $\mathcal{C}^2$-boundaries and with the same conductivity $\kappa$; then $D_1 = D_2$ if and only if their associated bilinear forms are equal:

$b_{D_1}(u, v) = b_{D_2}(u, v) \quad \text{for all admissible } u, v. \tag{11}$
Proof.

Clearly, $D_1 = D_2$ implies that $b_{D_1} = b_{D_2}$. Now suppose $D_1 \neq D_2$. Then there exists a point $z$ lying on the boundary of one of the domains, say $z \in \partial D_1$, but outside the closure of the other. Let $\omega$ be a small open neighborhood of $z$ such that $\omega \cap \overline{D_2} = \emptyset$.

Let $u$ be a function whose support is included in $\omega$ and such that $\partial u/\partial\nu \neq 0$ on $\partial D_1 \cap \omega$. Such a $u$ can be constructed, for example, in the following way. Let $\chi$ be a compactly supported smooth function whose support is included in $\omega$ and which is equal to $1$ in a small neighborhood of $z$. Then the function $u(x) = \chi(x)\,\langle x - z, \nu(z)\rangle$

satisfies the required conditions: its support is included in $\omega$ and, in a neighborhood of $z$, $u$ coincides with the function $x \mapsto \langle x - z, \nu(z)\rangle$, whose gradient is $\nu(z)$.

We set $\phi := (\lambda I - \mathcal{K}_{D_1}^*)^{-1}[\partial u/\partial\nu]$, which is not identically zero because $\partial u/\partial\nu$ does not vanish identically on $\partial D_1$. Consequently, by the density of the image of the trace operator, there exists $v$ such that $b_{D_1}(u, v) \neq 0$.

On the other hand, $b_{D_2}(u, v) = 0$ because $u = 0$ over a neighborhood of $\overline{D_2}$. So $b_{D_1}(u, v) \neq b_{D_2}(u, v)$, which implies $b_{D_1} \neq b_{D_2}$. ∎

2.3 Representation of $b_D$

As suggested by Proposition 2.2.1, all information about $D$ is contained in $b_D$. This motivates us to represent $b_D$ in a discrete form (the features of $D$) that will be estimated from the data.

2.3.1 Basis of representation

Let $\{e_n\}_{n \ge 1}$ be a Schauder basis of $L^2(\Omega)$ consisting of smooth functions. We denote by $E_N$ the finite-dimensional subspace spanned by $\{e_1, \dots, e_N\}$ and by $P_N$ the orthogonal projector onto $E_N$:

(12)

We require the following conditions on the basis :

  • For any

    (13)
  • There exists a function such that for and some , we have . Furthermore, it holds for any

    (14)

    with the constant being independent of and .

2.3.2 Polynomial basis

The first example is the homogeneous polynomial basis. The property (13) is a direct consequence of the following classical result (see Appendix A for its proof): the family of polynomials is complete in $L^2(\Omega)$. Estimate (14) can be obtained using an analogous result on Legendre polynomials established in [11].

2.3.3 Wavelet basis

Another example is the wavelet basis. Let $\varphi$ be a one-dimensional orthonormal scaling function generating a multi-resolution analysis [19], and let $\psi$ be a wavelet which is orthogonal to $\varphi$ and has $p$ vanishing moments. We construct the two-dimensional scaling function $\Phi$ by tensor product as $\Phi(x) = \varphi(x_1)\varphi(x_2)$, and similarly we construct the wavelets $\Psi^{(k)}$, $k = 1, 2, 3$, by tensor products of $\varphi$ and $\psi$ [19]: $\Psi^{(1)} = \varphi \otimes \psi$, $\Psi^{(2)} = \psi \otimes \varphi$, $\Psi^{(3)} = \psi \otimes \psi$. We denote by $\Psi^{(k)}_{j,n}(x) = 2^{-j}\,\Psi^{(k)}(2^{-j}x - n)$ the dilated and translated wavelets at scale $j$ and position $n \in \mathbb{Z}^2$ (and similarly $\Phi_{j,n}$).

Then $\{\Psi^{(k)}_{j,n}\}$ for $j \in \mathbb{Z}$, $n \in \mathbb{Z}^2$, $k = 1, 2, 3$ constitute an orthonormal basis of $L^2(\mathbb{R}^2)$. In particular, the Daubechies wavelet of order 6 (with 6 vanishing moments) fulfills the conditions above [15].
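As a concrete illustration of this tensor-product construction (not part of the original paper), the following Python sketch uses the PyWavelets package to sample the one-dimensional Daubechies-6 scaling function and wavelet and to form the two-dimensional scaling function and wavelets as outer products; the sampling level is an arbitrary choice.

```python
import numpy as np
import pywt

# Sample the 1D Daubechies-6 scaling function phi and wavelet psi on a fine grid,
# then build the 2D scaling function and the three 2D wavelets as tensor products.
phi, psi, x = pywt.Wavelet("db6").wavefun(level=6)

Phi   = np.outer(phi, phi)   # 2D scaling function  phi(x1) phi(x2)
Psi_1 = np.outer(phi, psi)   # wavelet  phi(x1) psi(x2)
Psi_2 = np.outer(psi, phi)   # wavelet  psi(x1) phi(x2)
Psi_3 = np.outer(psi, psi)   # wavelet  psi(x1) psi(x2)

print(Phi.shape, Psi_1.shape, Psi_2.shape, Psi_3.shape)
```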

Let $V_j$ be the approximation space spanned by $\{\Phi_{j,n}\}_n$, and $P_{V_j}$ be the orthogonal projector onto $V_j$:

$P_{V_j} f = \sum_{n \in \mathbb{Z}^2} \langle f, \Phi_{j,n}\rangle\, \Phi_{j,n}. \tag{15}$

The property (13) follows from the fact that $P_{V_j} f$ converges to $f$ in $L^2$ as $j \to -\infty$ for any $f \in L^2$ (see [21, Theorem 6, Chapter 2]).

The wavelet basis introduced above verifies the polynomial exactness of order $p$ [19] (i.e., the polynomials of degree less than $p$ belong locally to $V_j$). Therefore, we have the following result (see [14, Corollary 3.4.1]): for any $f \in H^s$ with $0 \le s \le p$,

$\|f - P_{V_j} f\|_{L^2} \le C\, 2^{js}\, \|f\|_{H^s}. \tag{16}$

Then the estimate (14) is fulfilled with this rate. By abuse of notation, throughout this paper, we still use $P_N$ (with $N$ standing for the scale $j$) to denote the projection for the wavelet basis.

2.3.4 Truncation of $b_D$

Thanks to the boundedness of $b_D$ and property (13), one can verify easily that, for any $u$ and $v$,

$b_D(u, v) = b_D(P_N u, P_N v) + \varepsilon_N(u, v), \tag{17}$

with the truncation error $\varepsilon_N(u, v)$ decaying to zero as $N \to \infty$. Using the approximation property (14), a bound on the truncation error can be established. Suppose that the basis fulfills the conditions (13) and (14), with the constants being those of estimate (14). Then, for any $u$ and $v$,

(18)

where the constant depends only on the quantities appearing in (14).

Proof.

By the triangle inequality

(19)

Using the boundedness of on the first term of the right-hand side, we get

On the one hand, one can apply (14) on with the constant verifying . On the other hand, for any and we have in by (13). Therefore, we obtain that

Similarly, for the second term of the right-hand side in (19) we get

which holds for any for the constant of (14). Combining these two terms yields the desired result. ∎

2.3.5 Coefficient matrix

We define the coefficient matrix $\mathbf{X} = (X_{mn})$ as follows:

$X_{mn} = b_D(e_m, e_n), \qquad 1 \le m, n \le N, \tag{20}$

which represents $b_D$ under the basis up to the order $N$. We denote by $\mathbf{u}$ the coefficient vector of $P_N u$, i.e.,

$P_N u = \sum_{n=1}^N u_n e_n, \qquad \mathbf{u} = (u_1, \dots, u_N)^\top, \tag{21}$

and similarly $\mathbf{v}$ for $P_N v$. Then $b_D$ restricted on $E_N$ can be put into the following matrix form:

$b_D(P_N u, P_N v) = \mathbf{u}^\top \mathbf{X}\, \mathbf{v}. \tag{22}$

Thanks to property (13) of the basis, there is a one-to-one mapping between $b_D$ and its coefficient matrix $\mathbf{X}$ as $N \to \infty$. Hence the domain $D$ is also uniquely determined from $\mathbf{X}$ when $N \to \infty$, as a consequence of Proposition 2.2.1.

2.4 Linear system

In (10), the Green functions $U_s$ and $U_r$ play the role of the measurement system, while the information about $D$ is contained in the operator $(\lambda I - \mathcal{K}_D^*)^{-1}$. This motivates us to separate these two parts and extract the information about $D$ from the data $\mathbf{V}$.

By removing a small neighborhood of each source and receiver from $\Omega$ if necessary, we can always assume that $\Omega$ contains no source or receiver, in such a way that $U_s$ and $U_r$ restricted on $\Omega$ become smooth functions, and hence can be represented using the basis (note that since the sources and receivers lie away from $\partial D$, removing the singularities does not affect $b_D(U_s, U_r)$, which depends only on the values of the Green functions on $\partial D$).

We denote in the sequel by $\mathbf{g}_s$ and $\mathbf{g}_r$ the (column) coefficient vectors of $U_s$ and $U_r$, respectively. From (10), (17) and (22) one can write

$V_{sr} = \mathbf{g}_s^\top \mathbf{X}\, \mathbf{g}_r + E_{sr}, \tag{23}$

with $E_{sr}$ being the truncation error of order $N$, which can be controlled using Proposition 2.3.4. We introduce the matrices of the measurement system

$\mathbf{G}_s \in \mathbb{R}^{N_s \times N} \text{ with rows } \mathbf{g}_s^\top, \qquad \mathbf{G}_r \in \mathbb{R}^{N_r \times N} \text{ with rows } \mathbf{g}_r^\top, \tag{24}$

as well as the linear operator $L$:

$L(\mathbf{X}) := \mathbf{G}_s\, \mathbf{X}\, \mathbf{G}_r^\top. \tag{25}$

Then (23) can be put into a matrix product form:

$\mathbf{V} = L(\mathbf{X}) + \mathbf{E}, \tag{26}$

with $\mathbf{E} = (E_{sr})$ being the matrix of truncation error. Further, suppose that $\mathbf{V}$ is contaminated by some measurement noise $\mathbf{W}$, whose $(s, r)$-th coefficient follows the normal distribution $\mathcal{N}(0, \sigma^2)$, with $\sigma$ being the noise level. Using the bound (18) on the truncation error, we can assume that, for $N$ large enough,

$|E_{sr}| \ll \sigma \tag{27}$

uniformly in all $s$ and $r$, so that $\mathbf{E}$ can be neglected compared to the noise. Finally, we obtain a linear system relating the coefficient matrix $\mathbf{X}$ to the data:

$\mathbf{V} = L(\mathbf{X}) + \mathbf{W}, \tag{28}$

and the objective is then to estimate $\mathbf{X}$ from $\mathbf{V}$ by solving (28).

2.4.1 Measurement systems and stability

The stability of the operator $L$ depends on the spatial distribution of the sources and receivers, which we suppose to be coincident in what follows. The far-field measurement system (Figure 1 (a)) is the situation where the characteristic distance between the transmitters and the boundary of the target is much larger than the size of the target. On the contrary, in the near-field internal measurement system (Figure 1 (b)), which is used in micro-EIT [18, 20], we have the opposite situation and the transmitters can be placed "inside" the target. Other types of far-field measurements exist; see [1, 4].

Figure 1: Far-field and near-field measurement systems with a flower-shaped target. (a) Far-field measurement: 360 coincident sources and receivers (crosses) uniformly distributed on a circle enclosing the target. (b) Near-field measurement: 15 × 15 coincident sources and receivers (dots) uniformly distributed inside $\Omega$. The red cross in both figures marks the center of the target.

Note that a singular value of $L$ is the product of a pair of singular values of $\mathbf{G}_s$ and $\mathbf{G}_r$. In Figure 2 we compare the distributions of the singular values of the operator $L$ (computed with the Daubechies wavelet of order 6) corresponding to the two systems of Figure 1. One can notice the substantial difference between these two systems: the singular values of the far-field system decay very fast, revealing the exponential ill-posedness of the associated inverse problem [2, 3, 6]. On the contrary, for the near-field system, the stability of $L$ is considerably improved. This can be explained by the decay of the wavelet coefficients of the Green function. In fact, $G(\cdot, x)$ is smooth away from $x$, therefore in the far-field configuration most rows of $\mathbf{G}_s$ and $\mathbf{G}_r$ (populated by detail wavelet coefficients) have tiny numerical values, which makes the matrices ill-conditioned. More precisely, the following result holds; we refer to Appendix B for its proof. Let $D$ be a compact domain. Suppose that $x \notin D$ and denote by $d$ the distance between $x$ and $D$. If the wavelet $\Psi$ has $p$ vanishing moments, then, as $j \to -\infty$:

$\big|\langle G(\cdot, x), \Psi^{(k)}_{j,n}\rangle\big| = O\big(2^{j(p+1)}\, d^{-p}\big), \tag{29}$

where the constant in the $O$-term depends only on the wavelet.

Figure 2: First singular values of $L$ computed with a wavelet basis. The dashed and solid curves correspond respectively to the far-field and the near-field measurement systems of Figure 1.

As a consequence of the stability, the data of the far-field measurement reside in a low dimensional subspace while those of the near-field measurement reside in a high dimensional subspace. Therefore, the type of estimation of $\mathbf{X}$ and, a fortiori, the type of basis used to implement the estimator, should be adapted to the physical configuration of the measurement system. In the next sections, we show that in the case of the far-field measurement the polynomial basis with linear estimation is well suited, while for the near-field measurement it is possible to use a wavelet basis, which creates a high dimensional but sparse matrix $\mathbf{X}$, and to reconstruct more information with a nonlinear estimator.
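The product structure of the singular values mentioned above follows from the fact that the matrix of the operator $L$ is a Kronecker product. The following sketch, with random matrices standing in for $\mathbf{G}_s$ and $\mathbf{G}_r$ (which in the paper are built from wavelet coefficients of the Green functions), checks this numerically.

```python
import numpy as np

# For L(X) = Gs @ X @ Gr.T the vectorized operator is a Kronecker product, so the
# singular values of L are exactly the pairwise products of the singular values of
# Gs and Gr. Random matrices are used here only as stand-ins.
rng = np.random.default_rng(0)
Ns, Nr, N = 20, 20, 40
Gs = rng.standard_normal((Ns, N))
Gr = rng.standard_normal((Nr, N))

products = np.sort(np.outer(np.linalg.svd(Gs, compute_uv=False),
                            np.linalg.svd(Gr, compute_uv=False)).ravel())[::-1]
sv_kron = np.linalg.svd(np.kron(Gs, Gr), compute_uv=False)

print(np.allclose(products[:sv_kron.size], sv_kron))   # True
```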

3 Polynomial basis and linear estimation

Under the homogeneous polynomial basis, the coefficients in $\mathbf{X}$ are given as follows:

$X_{\alpha\beta} = \int_{\partial D} (\lambda I - \mathcal{K}_D^*)^{-1}\Big[\frac{\partial y^\alpha}{\partial \nu}\Big](y)\, y^\beta\, d\sigma(y), \tag{30}$

with $\alpha, \beta$ being multi-indices. These are also referred to as Generalized Polarization Tensors (GPTs) in the perturbation theory of small inclusions [5, 7]. The coefficient vectors $\mathbf{g}_s$ and $\mathbf{g}_r$ in (23) are now obtained from the Taylor expansion of the Green functions:

$G(y, x_s) = \sum_{\alpha} \frac{1}{\alpha!}\, \partial_y^\alpha G(y, x_s)\big|_{y=0}\; y^\alpha. \tag{31}$

Moreover, the truncation error can be expressed explicitly, and in the case of the far-field measurement it decays geometrically in the ratio $\delta/R$ [2] (with $\delta$ being the size of $D$ and $R$ the radius of the measurement circle), which is far better than the previous bound (18).

Expression (31) can be simplified to the matrix product form (26) by recombining all GPTs of the same order and using the coefficients of harmonic polynomials. The resulting linear operator is injective for the far-field system (Figure 1 (a)) with sufficiently many transmitters, and its singular values decay rapidly with the order; see [2, 3].

3.1 Linear estimator of $\mathbf{X}$

Due to the global character of the polynomials, in general the coefficient matrix $\mathbf{X}$ is full. On the other hand, the fast decay of the truncation error under the polynomial basis suggests that the energy of $\mathbf{X}$ is concentrated in the low order coefficients. Therefore, simple truncation of the reconstruction order provides an effective regularization, and the coefficients of the first orders can be estimated by solving the least-squares problem

$\widehat{\mathbf{X}} = \arg\min_{\mathbf{X}} \big\| L(\mathbf{X}) - \mathbf{V} \big\|_F, \tag{32}$

where $\|\cdot\|_F$ denotes the Frobenius norm. The following bound on the estimation error can be established:

(33)

As a consequence, the maximum resolving order is bounded by [2]

(34)

Hence, the far-field measurement has a very limited resolution. However, the first few orders of coefficients contain important geometric information about the shape (the first-order coefficients tell how a target resembles an equivalent ellipse), and can be used to construct shape descriptors for the identification of shapes. We refer the reader to [2] for detailed numerical results.
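A minimal sketch of such a truncated least-squares estimator is given below; it uses a generic vectorization of the operator $L$ rather than the harmonic recombination of [2], and synthetic matrices stand in for the measurement system.

```python
import numpy as np

def estimate_coefficients(Gs, Gr, V):
    """Least-squares solution of min_X || Gs @ X @ Gr.T - V ||_F, cf. (32)."""
    # Row-major vectorization: vec(Gs X Gr^T) = kron(Gs, Gr) vec(X).
    A = np.kron(Gs, Gr)
    x, *_ = np.linalg.lstsq(A, V.ravel(), rcond=None)
    return x.reshape(Gs.shape[1], Gr.shape[1])

# Toy usage with synthetic data in place of the true measurement system:
rng = np.random.default_rng(1)
Gs, Gr = rng.standard_normal((60, 8)), rng.standard_normal((60, 8))
X_true = rng.standard_normal((8, 8))
V = Gs @ X_true @ Gr.T + 0.01 * rng.standard_normal((60, 60))
X_hat = estimate_coefficients(Gs, Gr, V)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```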

4 Wavelet basis and nonlinear estimation

In this section we use the wavelet basis (more exactly, the scaling functions $\Phi_{j,n}$) introduced in section 2.3.3 for the representation and the reconstruction of $b_D$. This yields a sparse and local representation, which makes the wavelets an appropriate choice of basis for the near-field measurement and allows the reconstruction via $\ell^1$ minimization and the visual perception of $D$.

4.1 Wavelet coefficient matrix

For the wavelet basis, we use the scale number $j$ (in place of the order $N$ as in section 2.3) to denote the truncation order. The coefficient matrix $\mathbf{X}_j$ under the wavelet basis contains the approximation coefficients

$(\mathbf{X}_j)_{mn} = b_D(\Phi_{j,m}, \Phi_{j,n}), \tag{35}$

so that $b_D(P_{V_j} u, P_{V_j} v) = \mathbf{u}^\top \mathbf{X}_j \mathbf{v}$, with $P_{V_j}$ being the orthogonal projector onto the approximation space $V_j$ as introduced in section 2.3.3, and $\mathbf{u}, \mathbf{v}$ the coefficient vectors

$\mathbf{u} = \big(\langle u, \Phi_{j,n}\rangle\big)_n, \qquad \mathbf{v} = \big(\langle v, \Phi_{j,n}\rangle\big)_n. \tag{36}$

The matrix $\mathbf{X}_j$ of a flower-shaped target is shown in Figure 3 (a).

4.2 Properties of the wavelet coefficient matrix

In the following we discuss some important properties of the wavelet coefficient matrix.

Bounds on the matrix norm

Proposition 4.2 establishes bounds on the spectral norm of $\mathbf{X}_j$, showing that it diverges as $j \to -\infty$. The proof is based on the inverse estimate and the polynomial exactness of the wavelet basis, and is given in Appendix C. When $j \to -\infty$, the spectral norm of the matrix $\mathbf{X}_j$ is bounded by

(37)

with the constants being independent of $j$.

Sparsity

From the definition of $\mathbf{X}_j$ we observe that $(\mathbf{X}_j)_{mn}$ is non-zero only when the supports of both scaling functions intersect $\partial D$. Therefore, the non-zero coefficients of $\mathbf{X}_j$ carry geometric information on $\partial D$. Moreover, $\mathbf{X}_j$ is a sparse matrix. In fact, at the scale $j$ there are $O(2^{-2j})$ scaling functions contributing to $\mathbf{X}_j$, so the dimension of $\mathbf{X}_j$ is $O(2^{-2j}) \times O(2^{-2j})$. On the other hand, the number of scaling functions whose support intersects $\partial D$ is $O(2^{-j})$. Hence, the number of non-zero coefficients is about $O(2^{-2j})$ and the proportion of non-zero entries in $\mathbf{X}_j$ is asymptotically $O(2^{2j})$.

Band diagonal structure

Numerical computations show that the pattern of non-zeros in $\mathbf{X}_j$ has a band diagonal structure. The largest coefficients appear around several principal band diagonals in a regular manner that reflects the different situations of interaction between scaling functions via the bilinear form $b_D$, as shown in Figure 3 (a). We notice that the major band diagonals describe the interaction between a $\Phi_{j,m}$ and its immediate neighbors (the width of the band is proportional to the size of the support of $\Phi$). In particular, the main diagonal corresponds to the case $m = n$, while the other band diagonals describe the interactions between non-overlapping $\Phi_{j,m}$ and $\Phi_{j,n}$.

Figure 3: (a) Wavelet coefficient matrix $\mathbf{X}_j$ of a flower-shaped target, computed using the Daubechies wavelet of order 6; the image shows the amplitude of the coefficients on a logarithmic scale, with only the largest coefficients (in magnitude) displayed. (b) Diagonal mask used for the estimation of $\mathbf{X}_j$ (11 band diagonals).
Localization of $\mathbf{X}_j$

Numerical evidence further suggests that the strongest coefficients appear around the main diagonal of $\mathbf{X}_j$, i.e., when the supports of $\Phi_{j,m}$ and $\Phi_{j,n}$ are close. Therefore, a large value of the diagonal coefficient $(\mathbf{X}_j)_{nn}$ indicates the presence of $\partial D$ in the support of $\Phi_{j,n}$. We plot one column of $\mathbf{X}_j$ in Figure 4, which corresponds to the interaction between one scaling function and all the others. We call this the localization property of $\mathbf{X}_j$. It indicates that the operator $(\lambda I - \mathcal{K}_D^*)^{-1}$ can loosely preserve the essential support of a localized function. The next proposition gives a qualitative explanation when $D$ is the unit disk.

Let $D$ be the unit disk. As $j \to -\infty$, we have

(38)
Proof.

For $D$ the unit disk, one has [7]:

By simple manipulations one can deduce that

(39)

Therefore, the bilinear form is reduced to

(40)

Taking $\Phi_{j,m}$ and $\Phi_{j,n}$ with overlapping supports, as $j \to -\infty$ the intersection between $\partial D$ and the support of $\Phi_{j,n}$ is well approximated by a line segment. By a change of variables, it follows that

(41)

Similarly, one has

(42)

Substituting (41) and (42) into (40) yields (38). ∎

Hence, as $j \to -\infty$, the coefficients on the main diagonal decay more slowly than, and hence dominate, those on the other band diagonals.

Figure 4: Amplitude of one column of $\mathbf{X}_j$ (equivalently, the 815-th row in Figure 3 (a)); the support of the corresponding scaling function intersects $\partial D$. (a) View as an image, with each pixel corresponding to one coefficient. (b) Amplitude as a function of the position index; the highest peak appears around the diagonal entry.

4.3 Wavelet based imaging algorithms

The localization property of $\mathbf{X}_j$ can be used to visualize the target $D$. A simple algorithm, called imaging by diagonal, consists in taking the diagonal of $\mathbf{X}_j$ (i.e., the coefficients $(\mathbf{X}_j)_{nn}$) and reshaping it into a 2D image. Then the boundary of $D$ can be read off from the image.

A drawback of this method is that the generated boundary has low resolution. In fact, any scaling function touching the boundary is susceptible to yield a numerically non-negligible value of $(\mathbf{X}_j)_{nn}$. Hence, the larger the support of the wavelet, the more scaling functions intersect $\partial D$ and the lower the resolution. An improved method consists in searching, for each index $m$, the index $n^*(m)$ maximizing the interaction between $\Phi_{j,m}$ and $\Phi_{j,n}$:

$n^*(m) = \arg\max_n \big|(\mathbf{X}_j)_{mn}\big|, \tag{43}$

and then accumulating $|(\mathbf{X}_j)_{m n^*(m)}|$ at the index $n^*(m)$. We name this method imaging by maximum. It is higher in resolution, since the effect of the scaling functions merely touching $\partial D$ is absorbed by their closest neighbors lying on $\partial D$. The procedure is summarized in Algorithm 1. Figures 5 (a) and (b) show a comparison between these two algorithms.

  Input: the matrix $\mathbf{X}_j$ of an unknown shape $D$, a zero-valued image $\mathbf{I}$.
  for each row index $m$ do
     1. find $n^*(m) = \arg\max_n |(\mathbf{X}_j)_{mn}|$;
     2. add $|(\mathbf{X}_j)_{m n^*(m)}|$ to the pixel of $\mathbf{I}$ indexed by $n^*(m)$;
  end for
  Output: the 2D image $\mathbf{I}$.
Algorithm 1 Imaging by maximum
Figure 5: Images obtained by the imaging algorithms of section 4.3 applied to the true coefficients $\mathbf{X}_j$. (a) Imaging by maximum. (b) Imaging by diagonal.
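A short Python transcription of the two imaging algorithms is sketched below, assuming that the coefficient matrix has already been computed (or reconstructed) and that the scaling functions at the scale $j$ are indexed on an $n \times n$ Cartesian grid.

```python
import numpy as np

def imaging_by_diagonal(X, n):
    """Reshape the diagonal coefficients |X[m, m]| into an n x n image."""
    return np.abs(np.diag(X)).reshape(n, n)

def imaging_by_maximum(X, n):
    """Algorithm 1: for each row m, accumulate |X[m, n*]| at the maximizing index n*."""
    img = np.zeros(n * n)
    for m in range(X.shape[0]):
        n_star = np.argmax(np.abs(X[m, :]))
        img[n_star] += np.abs(X[m, n_star])
    return img.reshape(n, n)
```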

4.4 Reconstruction of $\mathbf{X}_j$ by $\ell^1$ minimization

As the scale $j$ decreases, the dimension of $\mathbf{X}_j$ increases rapidly. On the other hand, the band diagonal structure of $\mathbf{X}_j$ shows that the largest coefficients are distributed on the major band diagonals, which is an important piece of a priori information allowing us to reduce considerably the dimension of the unknown to be reconstructed.

For this, we fix a priori a bandwidth and assume that the coefficient $(\mathbf{X}_j)_{mn}$ can be neglected when the supports of $\Phi_{j,m}$ and $\Phi_{j,n}$ are far apart. We construct accordingly a band diagonal mask $\mathbf{M}$ taking values $0$ or $1$, by choosing the bandwidth proportional to the support size of the wavelet. Remark that the mask constructed in this way is not adaptive and does not contain any information about the boundary of the target. Figure 3 (b) shows such a mask.

Given the high dimension of $\mathbf{X}_j$ and its sparsity, we seek a sparse solution by solving the following $\ell^1$ minimization problem [19, 13, 10]:

$\widehat{\mathbf{X}}_j = \arg\min_{\mathbf{X}} \; \frac{1}{2}\,\big\| L(\mathbf{M} \odot \mathbf{X}) - \mathbf{V} \big\|_F^2 + \mu\, \|\mathbf{M} \odot \mathbf{X}\|_{\ell^1_w}, \tag{44}$

where $\mathbf{M}$ is the band diagonal mask (Figure 3 (b)), $\odot$ denotes the termwise multiplication, $\mu > 0$ is the regularization parameter, and $\|\cdot\|_{\ell^1_w}$ is the reweighted $\ell^1$ norm. We set the weights in such a way that the operator $L$ is normalized columnwise. The constant $\mu$ is determined by the universal threshold [19] (tuned manually to achieve the best result if necessary):

$\mu = \sigma\,\sqrt{2 \log N_{\mathbf{M}}}, \tag{45}$

with $N_{\mathbf{M}}$ being the number of non-zero values in $\mathbf{M}$. Problem (44) admits a unique sparse solution [19] under appropriate conditions, and can be solved numerically via efficient algorithms; see for example [8].
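For illustration only, the following sketch solves a problem of this masked, columnwise-normalized $\ell^1$ form with a plain proximal-gradient (ISTA) iteration; it is not the authors' solver (see [8] for efficient algorithms), and the dense vectorized operator is practical only for small examples.

```python
import numpy as np

def ista(A, v, mu, n_iter=500):
    """Minimize 0.5*||A x - v||^2 + mu*||x||_1 by proximal gradient descent."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - v))
        x = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0.0)   # soft-thresholding
    return x

def reconstruct_masked(Gs, Gr, V, mask, sigma):
    """Sparse reconstruction of X restricted to the band-diagonal mask, cf. (44)-(45)."""
    A_full = np.kron(Gs, Gr)                        # vec(Gs X Gr^T) = kron(Gs, Gr) vec(X)
    support = np.flatnonzero(mask.ravel())
    A = A_full[:, support]
    scale = np.linalg.norm(A, axis=0)               # columnwise normalization (the weights)
    A = A / scale
    mu = sigma * np.sqrt(2.0 * np.log(support.size))    # universal threshold, cf. (45)
    x = ista(A, V.ravel(), mu) / scale
    X = np.zeros(mask.size)
    X[support] = x
    return X.reshape(mask.shape)
```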

5 Numerical experiments

In this section we present some numerical results to illustrate the efficiency of the wavelet imaging algorithm proposed in section 4.3. The wavelet used here is the Daubechies wavelet of order 6. We consider a near-field measurement system (Figure 1 (b)) with uniformly distributed sources and receivers.

5.1 Parameter settings

We set the conductivity constant $\kappa$ and use a flower-shaped target as $D$. The whole procedure of the experiment is as follows. First, the data $\mathbf{V}$ are simulated by evaluating (8) for all sources and receivers of the measurement system. A white noise of level

$\sigma = \varepsilon \times \frac{\|\mathbf{V}\|_F}{\sqrt{N_s N_r}} \tag{46}$

is added to obtain the noisy data, with $\varepsilon$ being the percentage of noise in the data. Thereafter the $\ell^1$ minimization problem (44) is solved with the parameters and methods described in section 4.4. Finally, from the reconstructed coefficients $\widehat{\mathbf{X}}_j$, we apply Algorithm 1 to obtain a pixelized image of $D$.

The vectors $\mathbf{g}_s$ and $\mathbf{g}_r$ in (23) contain the wavelet coefficients of the Green functions $U_s$ and $U_r$, respectively. They are computed by first sampling $U_s$ and $U_r$ on a fine Cartesian lattice covering $\Omega$ (the singularity point is numerically smoothed) and then applying the discrete fast wavelet transform on these samples to obtain the coefficients at the desired scale $j$.
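A minimal sketch of this computation with the PyWavelets package is given below; the domain, grid size, decomposition level and smoothing parameter are illustrative assumptions.

```python
import numpy as np
import pywt

def green_wavelet_coeffs(z, n_fine=512, levels=4, eps=1e-3):
    """Approximation coefficients of the 2D Green function G(., z) on a square domain."""
    t = np.linspace(-1.0, 1.0, n_fine)
    xx, yy = np.meshgrid(t, t, indexing="ij")
    r = np.sqrt((xx - z[0]) ** 2 + (yy - z[1]) ** 2)
    U = np.log(np.maximum(r, eps)) / (2.0 * np.pi)        # smoothed Green function samples
    coeffs = pywt.wavedec2(U, "db6", mode="periodization", level=levels)
    return coeffs[0].ravel()                               # coarsest-level approximation coefficients

g = green_wavelet_coeffs(z=np.array([1.5, 0.2]))           # a source located outside the sampled square
print(g.shape)
```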

5.2 Results of the imaging algorithm

Figures 6 (a, b) show the imaging results obtained at a fixed scale $j$ with different noise levels. It can be seen that even in a highly noisy environment the boundary of $D$ can still be correctly located.

Remark that for the near-field internal measurement system of Figure 1 (b), one can obtain an image of $D$ directly from the data (in fact, $V_{sr}$ as defined by (8) has a large amplitude if $x_s$ and/or $x_r$ is close to $\partial D$). Nonetheless, such a direct imaging method is far less robust to noise than the wavelet-based algorithm, and its resolution is limited by the density of the transmitters, as shown in Figures 6 (c, d).

Figure 6: (a, b) Images obtained by Algorithm 1 applied to the coefficients reconstructed from noisy data. (c, d) Images obtained directly from the data. (a, c) and (b, d) correspond to two different noise levels.

In Figure 7 the same imaging experiments with noisy data were conducted at a finer scale. We notice that the images returned by Algorithm 1 have a dimension much higher than that of the grid of transmitters ($15 \times 15$). Furthermore, the results remain robust up to a high noise level. These observations confirm the super-resolution character of the wavelet-based imaging algorithm.

Figure 7: Same experiments as in Figures 6 (a, b) at a finer scale. (a), (b): Imaging by maximum with two different levels of noise.

6 Discussion

6.1 Effect of the conductivity constant

The constant $\kappa$ defined in section 2 is actually the ratio between the conductivity of the target and that of the background (set to 1 in this paper). Further numerical experiments suggest that the performance of Algorithm 1 depends on $\kappa$: the results may deteriorate when $\kappa$ becomes large. This can be explained easily in the case of the unit disk. In fact, it can be seen from (40) that the ratio between the overlapping and non-overlapping (in terms of the supports of the scaling functions) coefficients of $\mathbf{X}_j$ varies with $\kappa$. Hence, the localization property (section 4.2) becomes more or less pronounced depending on $\kappa$, and the imaging algorithm is affected accordingly. Nonetheless, we note also that when $\kappa \to 1$ the target becomes indistinguishable from the background and the measured data decrease to zero (in the absence of noise). These observations suggest that in practice there may exist numerical ranges of $\kappa$ and of the noise level over which the imaging algorithm is more or less effective.

6.2 Representation with the wavelet

In section 4 we used only the scaling function $\Phi$ for the approximation and the representation of $b_D$, while it is also possible to use the wavelet functions $\Psi^{(k)}$ together with $\Phi$ to represent $b_D$ and obtain another form of the coefficient matrix. More precisely, let $W_l$ be the detail space spanned by $\{\Psi^{(k)}_{l,n}\}_{k,n}$ and $P_{W_l}$ be the orthogonal projector onto $W_l$. For any two scales $j \le j'$, it holds

(47)

where we used the fact that $V_j = V_{j'} \oplus \bigoplus_{j < l \le j'} W_l$, and the vectors appearing in (47) are the coefficient vectors of $u$ and $v$ under the corresponding basis. The coefficient matrix now takes the form

(48)

where the blocks denote the matrices corresponding to the terms marked by braces in (47). In particular, some blocks contain the detail coefficients of type $b_D(\Psi^{(k)}_{l,m}, \Psi^{(k')}_{l',n})$, while another contains the approximation coefficients $b_D(\Phi_{j',m}, \Phi_{j',n})$. Remark that in the case $j = j'$, the representation is reduced to the approximation block, which is identical to the coefficient matrix defined in (35).

Moreover, one can easily prove (using the conjugate mirror filters) that, for any two scales, the two representations, regarded as vectors, are equivalent up to a unitary transform. Therefore the choice of the coarse scale $j'$ is not important, since the representation at one scale is equivalent to that at any other, and their dimensions are asymptotically equal as $j \to -\infty$ for a fixed domain $\Omega$.
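This unitary equivalence can be checked numerically. The following sketch does so in one dimension for simplicity: with an orthogonal wavelet and periodized boundary handling, the fast wavelet transform mapping fine-scale approximation coefficients to coarser approximation and detail coefficients preserves the Euclidean norm.

```python
import numpy as np
import pywt

# The orthogonal periodized DWT is a unitary change of representation, so the norm of
# the fine-scale approximation coefficients equals the norm of the transformed vector
# (coarser approximation + all detail coefficients).
rng = np.random.default_rng(0)
a_fine = rng.standard_normal(256)
coeffs = pywt.wavedec(a_fine, "db6", mode="periodization", level=3)
stacked = np.concatenate(coeffs)

print(np.allclose(np.linalg.norm(a_fine), np.linalg.norm(stacked)))   # True
```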

A natural question is whether the equivalent representation using the wavelets is sparser than the one using only the scaling functions. In Figure 8 we plot the decay of the coefficients of the four block matrices of (48). It can be seen that, for the numerical range considered here, the detail coefficients have a decay similar to that of the approximation coefficients. In fact, as for the approximation coefficients, the main reason for the sparsity of the detail coefficients is the intersection between the supports of the wavelets and the boundary $\partial D$, and for the same reason the localization property (cf. Proposition 4.2) remains valid for the wavelets $\Psi^{(k)}$. Hence the representation (48) has a sparsity similar to that of (35) and does not present substantial advantages for the applications considered in this paper.

Figure 8: Coefficients (absolute value) of the block matrices in (48), sorted in decreasing order and shown on a logarithmic scale. (a) All coefficients. (b) Zoom-in on the first coefficients. The different curves represent the four block matrices of (48).