# Wavelet methods for shape perception in electro-sensing

This paper presents a new approach to the electro-sensing problem based on wavelets. It provides an efficient algorithm for recognizing the shape of a target from micro-electrical impedance measurements. The stability and resolution capabilities of the proposed algorithm are quantified in numerical simulations.


## Code Repositories

### SIES

Shape identification in electro-sensing


## 1 Introduction

The aim of electro-sensing is to learn the geometric parameters and material composition of a target via electrical measurements. In this paper, we suppose that the target is composed of a homogeneous material with a known electrical property and focus solely on the geometric problem. Geometric identification of a target may mean recognizing it from a collection of known shapes (up to rigid transformations and scaling), or reconstructing its boundary.

In the recent work [2], an approach based on a polynomial basis was proposed for the far-field measurement system. Using the Taylor expansion of the Green functions, the geometric information of the target is, on one hand, encoded in features, which are the actions of a boundary integral operator on homogeneous polynomials of different orders; on the other hand, the measurement system is separated into a linear operator relating the features to the data. The features are then extracted by solving a linear inverse problem and can be used to identify the target in a database. Unlike other methods (in electrical impedance tomography [9]) which attempt to reconstruct the target directly, this approach is more effective and computationally efficient in shape recognition applications.

From a more general point of view, the problem is to know, given the physical configuration of the measurement system, how to choose the basis for representing the features and how to extract them from the data for identification. The ill-posedness in electro-sensing is inherent to the diffusive character of the currents and cannot be removed by a change of basis. Nonetheless, the particular choice of basis can totally change the way in which information is organized in the features and the manner in which they should be reconstructed.

In this paper we present a new approach to electro-sensing with the near-field measurement system using a wavelet basis. Unlike the far-field measurement configuration, which is known to be exponentially unstable, the near-field measurement system is much more stable and the data reside in a higher dimensional subspace; hence one can expect to reconstruct more information about the target from the data. With the near-field measurement system, the wavelet-based approach presents more advantages than the polynomial-based one, because the wavelet representation of the features is local and sparse, and reflects directly the geometric character of the target. Furthermore, the features can be effectively reconstructed by seeking a sparse solution via ℓ1 minimization, and the boundary of the target can be read off from the features, giving a new high resolution imaging algorithm which is robust to noise.

This paper is organized as follows. In section 2 we give a mathematical formulation and present an abstract framework for electro-sensing. We introduce the basis of representation and deduce a linear system by separating the features from the measurement system; the question of the stability of the measurement system is also discussed. In section 3 we summarize the essential results based on the polynomial basis developed in [2]. The wavelet basis and the new imaging algorithms, which are the main contributions of this paper, are presented in section 4, where we discuss some important properties of the wavelet representation and formulate the minimization problem for the reconstruction of the features. Numerical results are given in section 5, followed by some discussions in section 6. The paper ends with some concluding remarks.

## 2 Modelling of the electro-sensing problem

Let D be an open bounded domain with C²-boundary that we want to characterize via electro-sensing. We suppose that D is centered around the origin and has size δ; furthermore, there exists an a priori known open bounded domain Ω such that D is compactly contained in the convex envelope of Ω (in practice, both the center of D and δ can be estimated using some location search algorithm [5, 17]). We also assume that the positive conductivity constant κ ≠ 1 of D is known, and that the background conductivity is 1. We denote by ∂D the boundary of D.

A measurement system consists of sources x_s, s = 1,…,N_s, and receivers y_r, r = 1,…,N_r, placed on ∂Ω. The potential field u_s generated by the point source x_s is the solution to the equation

  ∇·((1 + (κ−1)χ_D)∇u_s) = δ_{x_s}  in ℝ²,
  u_s − Γ_s = O(|x|⁻¹)  as |x| → ∞, (1)

where χ_D is the indicator function of D, and Γ_s := Γ(· − x_s) is the background potential field, with Γ the fundamental solution of the Laplacian. Similarly, we denote Γ_r := Γ(· − y_r).

The difference u_s − Γ_s is the perturbation of the potential field due to the presence of D in the background; evaluated at the receiver y_r, it gives the measurement

  V_sr = u_s(y_r) − Γ_s(y_r), (2)

which builds the multistatic response matrix V = (V_sr) by varying the source and receiver pair. In this section, we show that, with the help of a bilinear form, the problem can be formulated as a linear system relating the data to the features of D.

### 2.1 Layer potentials and representation of the solution

Recall the single layer potential S_D:

  S_D[φ](x) = ∫_{∂D} Γ(x−y) φ(y) ds(y), (3)

and the Neumann–Poincaré operator K*_D:

  K*_D[φ](x) = (1/2π) ∫_{∂D} (⟨x−y, ν_x⟩ / |x−y|²) φ(y) ds(y), (4)

where ν_x is the outward normal vector at x ∈ ∂D. K*_D is a compact operator on L²(∂D) for a C² domain and has a discrete spectrum contained in (−1/2, 1/2]. Therefore, the operator λI − K*_D is invertible on L²(∂D) for the constant

  λ = (κ+1) / (2(κ−1)). (5)

Moreover, its inverse (λI − K*_D)⁻¹ is also bounded. An important relation is the jump formula:

  ∂/∂ν S_D[φ] |_± = (±(1/2)I + K*_D)[φ], (6)

where ∂/∂ν denotes the normal derivative across the boundary and ± indicate the limits of a function from outside and inside of the boundary, respectively. Details on these operators can be found in [7].

With the help of these operators, the solution of (1) can be represented as

  u_s(x) = Γ_s(x) + S_D[φ_s](x) (7)

with φ_s = (λI − K*_D)⁻¹[∂Γ_s/∂ν] on ∂D. Therefore, the perturbed field can be expressed as

  (u_s − Γ_s)(x) = ∫_{∂D} Γ(x−y) (λI − K*_D)⁻¹[∂Γ_s/∂ν](y) ds(y). (8)

We assume in the sequel that x_s ∉ ∂D for all s, which is necessary for ∂Γ_s/∂ν to be well defined.
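The representation formula (8) lends itself to a simple numerical sketch. The following is our own illustrative discretization, not the paper's code (SIES): a Nyström quadrature of the Neumann–Poincaré operator (4) on the unit circle, used to evaluate the perturbed field (8); the diagonal correction (smooth-kernel limit, curvature/4π) and all function names are our assumptions.

```python
import numpy as np

def disk_boundary(m):
    """m quadrature nodes, outward normals and weights on the unit circle."""
    t = 2 * np.pi * np.arange(m) / m
    x = np.stack([np.cos(t), np.sin(t)], axis=1)   # boundary nodes
    nu = x.copy()                                   # outward normal = position
    w = np.full(m, 2 * np.pi / m)                   # trapezoidal weights
    return x, nu, w

def neumann_poincare(x, nu, w):
    """Nystrom matrix of K*_D, eq. (4); the diagonal entry is the
    smooth-kernel limit, equal to curvature/(4*pi) on a smooth curve."""
    m = len(x)
    K = np.empty((m, m))
    for i in range(m):
        d = x[i] - x                                # x_i - y_j
        r2 = np.sum(d * d, axis=1)
        r2[i] = 1.0                                 # dummy, fixed below
        K[i] = np.sum(d * nu[i], axis=1) / (2 * np.pi * r2) * w
        K[i, i] = w[i] / (4 * np.pi)                # curvature limit on the circle
    return K

def perturbed_field(xs, yr, kappa, m=256):
    """u_s(y_r) - Gamma_s(y_r) via eq. (8), with lambda from eq. (5)."""
    x, nu, w = disk_boundary(m)
    lam = (kappa + 1) / (2 * (kappa - 1))
    K = neumann_poincare(x, nu, w)
    d = x - xs                                      # normal derivative of Gamma_s
    dnG = np.sum(d * nu, axis=1) / (2 * np.pi * np.sum(d * d, axis=1))
    phi = np.linalg.solve(lam * np.eye(m) - K, dnG)
    G = np.log(np.linalg.norm(yr - x, axis=1)) / (2 * np.pi)
    return np.sum(G * phi * w)
```

On the unit circle the kernel of (4) is constant, so K*_D acts on a constant function as multiplication by 1/2; this gives a quick sanity check of the quadrature.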

### 2.2 Bilinear form T_D

We denote by H^t(Ω), t ∈ ℝ, the standard Sobolev spaces and introduce the bilinear form T_D defined as follows:

  T_D(f,g) := ∫_{∂D} g(x) (λI − K*_D)⁻¹[∂f/∂ν](x) ds(x)  for f ∈ H²(Ω), g ∈ H¹(Ω). (9)

By the boundedness of (λI − K*_D)⁻¹ and of the trace operator, it follows that

  |T_D(f,g)| ≤ C ‖f‖_{H²} ‖g‖_{H¹},

and hence T_D is bounded. Now, we observe from (2) and (8) that V_sr can be rewritten as

  V_sr = T_D(Γ_s, Γ_r). (10)

#### 2.2.1 Characterization of D by T_D

One of the interests of T_D is that it determines the domain D uniquely, as stated in the following result. Let D, D′ be open bounded domains with C²-boundaries and the same conductivity κ. Then D = D′ if and only if their associated bilinear forms are equal:

  T_D(f,g) = T_{D′}(f,g)  ∀ f ∈ H²(Ω), g ∈ H¹(Ω). (11)
###### Proof.

Clearly D = D′ implies T_D = T_{D′}. Now suppose D ≠ D′. There exists a point x ∈ ∂D ∖ ∂D′ (up to exchanging the roles of D and D′). Let ω be a small open neighborhood of x such that ω ∩ ∂D′ = ∅.

Let f ∈ H²(Ω) be such that the support of f is included in ω and ∂f/∂ν ≠ 0 over ω ∩ ∂D. Such an f can be constructed, for example, in the following way. Let φ be a compactly supported smooth function whose support is included in ω and which equals 1 in a small neighborhood of x. Then the function

  f: y ↦ φ(y) ⟨y, ν_x⟩

satisfies the required conditions: its support is included in ω and ∂f/∂ν ≠ 0 (in a neighborhood of x, f coincides with the function y ↦ ⟨y, ν_x⟩, whose gradient is ν_x).

We set h := (λI − K*_D)⁻¹[∂f/∂ν], which is not identically zero because ∂f/∂ν ≠ 0. Consequently, by the density of the image of the trace operator in L²(∂D), there exists g ∈ H¹(Ω) such that

  T_D(f,g) = ∫_{∂D} g(x) h(x) ds(x) ≠ 0.

On the other hand, ∂f/∂ν = 0 over ∂D′ because supp f ∩ ∂D′ = ∅. So T_{D′}(f,g) = 0, which implies T_D ≠ T_{D′}. ∎

### 2.3 Representation of T_D

As suggested by Proposition 2.2.1, all the information about D is contained in T_D. This motivates us to represent T_D in a discrete form (the features of D) that will be estimated from the data.

#### 2.3.1 Basis of representation

Let (e_n)_{n≥1} be a Schauder basis of L²(Ω). We denote by V_K the finite dimensional subspace spanned by e_1,…,e_K, and by P_K the orthogonal projector onto V_K:

  P_K f = argmin_{g ∈ V_K} ‖f − g‖_{L²(Ω)}. (12)

We require the following conditions on the basis:

• For any f ∈ H^s(Ω),

  ‖f − P_K f‖_{H^s(Ω)} → 0  as K → +∞. (13)

• There exists a positive function u(s,t), defined for s ≤ t in an admissible range of Sobolev exponents, such that for any f ∈ H^t(Ω),

  ‖f − P_K f‖_{H^s(Ω)} ≤ C K^{−u(s,t)} ‖f‖_{H^t(Ω)}  as K → +∞, (14)

with the constant C independent of f and K.

#### 2.3.2 Polynomial basis

The first example of such a basis is the homogeneous polynomial basis. Property (13) is a direct consequence of the following classical result (see Appendix A for its proof): the family of homogeneous polynomials {x^α} is complete in H^s(Ω). Estimate (14) can be obtained using an equivalent result on Legendre polynomials established in [11].
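As a one-dimensional illustration of the approximation property (14) for the polynomial basis, the sketch below (ours; the test function and the orders are arbitrary choices) fits least-squares Legendre expansions of increasing degree to a smooth function and observes the L² error fall.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 400)
f = np.exp(np.sin(2 * t))          # a smooth test function (our choice)

def proj_error(K):
    """RMS error of the degree-K least-squares Legendre fit of f."""
    c = np.polynomial.legendre.legfit(t, f, K)
    return np.sqrt(np.mean((np.polynomial.legendre.legval(t, c) - f) ** 2))

# error decreases quickly with the truncation order, cf. (14)
errors = [proj_error(K) for K in (2, 4, 8, 12)]
```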

#### 2.3.3 Wavelet basis

Another example is the wavelet basis. Let φ be a one-dimensional orthonormal scaling function generating a multi-resolution analysis [19], and let ψ be a wavelet which is orthogonal to φ and has p zero moments. We construct the two-dimensional scaling function ψ⁰ by tensor product as ψ⁰(x) = φ(x₁)φ(x₂), and similarly we construct the wavelets ψ^k for k = 1, 2, 3 by tensor products of φ and ψ [19]. We denote by

  ψ^k_{j,n}(x) := 2^{−j} ψ^k(2^{−j}x − n),  j ∈ ℤ, n ∈ ℤ².

Then the {ψ^k_{j,n}} for k = 1, 2, 3, j ∈ ℤ, n ∈ ℤ² constitute an orthonormal basis of L²(ℝ²). In particular, the Daubechies wavelet of order 6 (with 6 zero moments) fulfills the conditions above [15].

Let V_j be the approximation space spanned by {φ_{j,n} := ψ⁰_{j,n}, n ∈ ℤ²}, and P_j be the orthogonal projector onto V_j:

  P_j f = Σ_{n∈ℤ²} ⟨f, φ_{j,n}⟩ φ_{j,n}. (15)

Property (13) follows from the fact that P_j f converges to f in H^s as j → −∞ for any f ∈ H^s (see [21, Theorem 6, Chapter 2]).

The wavelet basis introduced above satisfies the polynomial exactness of order p [19] (i.e., the polynomials of degree less than p belong to V_j). Therefore, we have the following result (see [14, Corollary 3.4.1]): for any f ∈ H^t(Ω),

  ‖P_j f − f‖_{H^s(Ω)} ≲ 2^{j(t−s)} ‖f‖_{H^t(Ω)}  as j → −∞. (16)

Since the dimension of the restriction of V_j to Ω grows like K ≍ 2^{−2j}, the estimate (14) is then fulfilled with u(s,t) = (t−s)/2. By abuse of notation, throughout this paper, we still use P_K to denote the projection P_j for the wavelet basis.
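To make the projector (15) concrete, here is a minimal sketch of ours using the Haar multi-resolution analysis instead of the paper's Daubechies-6 (for Haar, P_j is simply a block average over dyadic squares, which keeps the code short):

```python
import numpy as np

def haar_projection(img, levels):
    """Project a square image onto the Haar approximation space obtained by
    coarsening `levels` times: average over blocks, paint the average back."""
    n = img.shape[0]
    b = 2 ** levels                       # block side
    assert img.shape == (n, n) and n % b == 0
    blocks = img.reshape(n // b, b, n // b, b)
    means = blocks.mean(axis=(1, 3))
    return np.repeat(np.repeat(means, b, axis=0), b, axis=1)

# the approximation error decreases as the scale gets finer (j -> -infinity),
# in accordance with (16)
x = np.linspace(0, 1, 64)
img = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * x)[None, :]
coarse = np.linalg.norm(img - haar_projection(img, 4))
fine = np.linalg.norm(img - haar_projection(img, 1))
```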

#### 2.3.4 Truncation of T_D

Thanks to the boundedness of T_D and property (13), one can verify easily that for any f ∈ H²(Ω), g ∈ H¹(Ω):

  T_D(f,g) = T_D(P_K f, P_K g) + o(1), (17)

with the truncation error decaying to zero as K → +∞. Using the approximation property (14), a bound on the truncation error can be established. Suppose that the basis fulfills conditions (13) and (14). Then for any f ∈ H^t(Ω) with t ≥ 2 and g ∈ H^{t′}(Ω) with t′ ≥ 1,

  |T_D(P_K f, P_K g) − T_D(f,g)| ≤ C K^{−ũ(t,t′)}  as K → +∞, (18)

with ũ(t,t′) := min(u(2,t), u(1,t′)), where the constant C depends only on f and g.

###### Proof.

By the triangle inequality,

  |T_D(P_K f, P_K g) − T_D(f,g)| ≤ |T_D(P_K f − f, P_K g)| + |T_D(f, P_K g − g)|. (19)

Using the boundedness of T_D on the first term of the right-hand side, we get

  |T_D(P_K f − f, P_K g)| ≤ C ‖P_K f − f‖_{H²} ‖P_K g‖_{H¹}.

On one hand, one can apply (14) to ‖P_K f − f‖_{H²} with the exponent u(2,t). On the other hand, P_K g → g in H¹ by (13), so ‖P_K g‖_{H¹} remains bounded. Therefore, we obtain

  |T_D(P_K f − f, P_K g)| ≲ K^{−u(2,t)} ‖f‖_{H^t} ‖g‖_{H¹}.

Similarly, for the second term of the right-hand side in (19) we get

  |T_D(f, P_K g − g)| ≤ C ‖f‖_{H²} ‖P_K g − g‖_{H¹} ≲ K^{−u(1,t′)} ‖f‖_{H²} ‖g‖_{H^{t′}}.

Combining these two terms yields the desired result. ∎

#### 2.3.5 Coefficient matrix

We define the coefficient matrix X as follows:

  X = X[D,K] = (T_D(e_m, e_n))_{mn}  for m,n = 1…K, (20)

which represents T_D under the basis up to the order K. We denote by f = (f_n)_n the coefficient vector of P_K f, i.e.,

  P_K f = Σ_{n≤K} f_n e_n, (21)

and similarly g for P_K g. Then T_D restricted to V_K can be put into the following matrix form:

  T_D(P_K f, P_K g) = Σ_{m,n=1}^{K} f_m T_D(e_m, e_n) g_n = f^⊤ X g. (22)

Thanks to property (13) of the basis, there is a one-to-one mapping between T_D and its coefficient matrix as K → +∞. Hence the domain D is also uniquely determined from X when K → +∞, as a consequence of Proposition 2.2.1.
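Identity (22) is purely algebraic and can be checked mechanically. In the toy sketch below (ours; a generic matrix B plays the role of a bounded bilinear form and an orthonormal frame plays the role of the basis), the truncated form coincides exactly with the quadratic form of the coefficient matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 50, 8                      # discretization size, truncation order

B = rng.standard_normal((N, N))   # generic bounded bilinear form T(f,g) = f'Bg
E = np.linalg.qr(rng.standard_normal((N, K)))[0]   # orthonormal basis e_1..e_K

def T(f, g):
    return f @ B @ g

def P(f):
    return E @ (E.T @ f)          # orthogonal projector P_K, cf. (12)

X = E.T @ B @ E                   # coefficient matrix, cf. (20)
f, g = rng.standard_normal(N), rng.standard_normal(N)
fc, gc = E.T @ f, E.T @ g         # coefficient vectors, cf. (21)
# (22): T(P_K f, P_K g) = f^T X g
assert np.isclose(T(P(f), P(g)), fc @ X @ gc)
```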

### 2.4 Linear system

In (10), the Green functions Γ_s and Γ_r play the role of the measurement system, while the information about D is contained in the operator (λI − K*_D)⁻¹. This motivates us to separate these two parts and to extract the information about D from the data V_sr.

By removing from Ω a small neighborhood of each x_s and y_r if necessary, we can always assume that Ω does not contain any source or receiver, in such a way that Γ_s and Γ_r restricted to Ω become smooth, and hence can be represented using the basis (note that since D is compactly contained in Ω, removing the singularities does not affect T_D(Γ_s, Γ_r), which depends only on the values of the Green functions on ∂D).

We denote in the sequel by γ_s and γ_r the (column) coefficient vectors of Γ_s and Γ_r, respectively. From (10), (17) and (22) one can write

  V_sr = T_D(Γ_s, Γ_r) = γ_s^⊤ X γ_r + E_sr (23)

with E_sr the truncation error of order K, which can be controlled using Proposition 2.3.4. We introduce the matrices of the measurement system

  G_x = [γ_{x_1} … γ_{x_{N_s}}],  G_y = [γ_{y_1} … γ_{y_{N_r}}], (24)

as well as the linear operator L:

  L(X) = G_x^⊤ X G_y. (25)

Then (23) can be put into a matrix product form:

  V = G_x^⊤ X G_y + E = L(X) + E (26)

with E the matrix of truncation errors. Suppose further that V is contaminated by some measurement noise W whose (s,r)-th coefficient follows the normal distribution

  (W)_{sr} ~ iid N(0, σ²)

with σ the noise level. Using the bound (18) on the truncation error, we can assume that for a large K,

  |E_sr| ≪ σ (27)

uniformly in all s and r, so that E can be neglected compared to the noise. Finally, we obtain a linear system relating the coefficient matrix X to the data:

  V = L(X) + W, (28)

and the objective is then to estimate X from V by solving (28).
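In practice the forward operator (25) can be applied either in matrix form or in vectorized form via the identity vec(A X B) = (B^⊤ ⊗ A) vec(X), which turns (28) into an ordinary linear system. A small self-contained check (ours; all matrices are random stand-ins for G_x, G_y, X):

```python
import numpy as np

rng = np.random.default_rng(2)
K, Ns, Nr = 6, 10, 12
Gx = rng.standard_normal((K, Ns))    # stand-in coefficient vectors of Gamma_s
Gy = rng.standard_normal((K, Nr))    # stand-in coefficient vectors of Gamma_r
X = rng.standard_normal((K, K))      # stand-in coefficient matrix

V = Gx.T @ X @ Gy                    # matrix form (26), noiseless
A = np.kron(Gy.T, Gx.T)              # vec(Gx^T X Gy) = (Gy^T kron Gx^T) vec(X)
v = A @ X.reshape(-1, order='F')     # column-major vectorization of X
assert np.allclose(v, V.reshape(-1, order='F'))
```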

#### 2.4.1 Measurement systems and stability

The stability of the operator L is inherent to the spatial distribution of the sources and receivers, which we suppose to be coincident in what follows. The far-field measurement system (Figure 1 (a)) is the situation where the characteristic distance ρ between the transmitters and the boundary of the target is much larger than the size δ of the target. On the contrary, in the near-field internal measurement system (Figure 1 (b)), which is used in micro-EIT [18, 20], the transmitters can be placed “inside” the target. Other types of far-field measurements exist; see [1, 4].

Note that a singular value of L is the product of a pair of singular values of G_x and G_y. In Figure 2 we compare the distribution of the singular values of the operator L (computed with the Daubechies wavelet of order 6) corresponding to the two systems of Figure 1. One can notice the substantial difference between them: the singular values of the far-field system decay very fast, revealing the exponential ill-posedness of the associated inverse problem [2, 3, 6]. On the contrary, for the near-field system, the stability of L is considerably improved. This can be explained by the decay of the wavelet coefficients of the Green function. In fact, Γ_s is smooth away from x_s; therefore, in the far-field configuration, most rows of G_x (populated by detail wavelet coefficients) have tiny numerical values, which makes the matrix ill-conditioned. More precisely, the following result holds; we refer to Appendix B for its proof. Suppose that the source x_s lies outside a compact domain and denote by ρ the distance between x_s and that domain. If the wavelet ψ has p zero moments, then as ρ → ∞:

  |⟨Γ_s, ψ^k_{j,n}⟩| ≍ 2^{j(p+1)} ρ^{−p}  for n ∈ Λ^k_j, (29)

where Λ^k_j denotes the set of indices n of the wavelets at scale j whose supports intersect the domain.

As a consequence of the stability, the data of the far-field measurement reside in a low dimensional subspace, while those of the near-field measurement reside in a high dimensional subspace. Therefore, the type of estimation of X, and a fortiori the type of basis for the implementation of the estimator, should be adapted to the physical configuration of the measurement system. In the next sections, we show that in the case of the far-field measurement the polynomial basis with linear estimation is well suited, while for the near-field measurement it is possible to use a wavelet basis, which creates a high dimensional but sparse matrix X, and to reconstruct more information with a nonlinear estimator.
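The conditioning gap between the two configurations can be reproduced with a crude experiment (ours; the grid, the source radii and the use of raw Green-function samples instead of wavelet coefficients are simplifying stand-ins): columns of Green-function samples from far-away sources are nearly identical, so the measurement matrix collapses toward rank one.

```python
import numpy as np

# sample points playing the role of the representation domain Omega
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 8),
                            np.linspace(-1, 1, 8)), axis=-1).reshape(-1, 2)

def green_matrix(radius, n_src=16):
    """Columns are samples of Gamma_s on the grid; sources on a circle."""
    t = 2 * np.pi * np.arange(n_src) / n_src
    src = radius * np.stack([np.cos(t), np.sin(t)], axis=1)
    d = np.linalg.norm(grid[:, None, :] - src[None, :, :], axis=2)
    return np.log(d) / (2 * np.pi)

s_far = np.linalg.svd(green_matrix(20.0), compute_uv=False)   # far-field-like
s_near = np.linalg.svd(green_matrix(1.8), compute_uv=False)   # near-field-like
# far-field singular values collapse much faster (cf. Figure 2 and (29))
assert s_far[0] / s_far[-1] > s_near[0] / s_near[-1]
```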

## 3 Polynomial basis and linear estimation

Under the homogeneous polynomial basis, the coefficients in (20) are given as follows:

  X_{αβ} := T_D(x^α, x^β) (30)

with α, β multi-indices. These are also referred to as Generalized Polarization Tensors (GPTs) in the perturbation theory of small inclusions [5, 7]. The coefficient vectors γ_s and γ_r in (23) are now obtained by the Taylor expansion of the Green functions:

  V_sr = Σ_{|α|,|β|=1}^{K} ((−1)^{|α|+|β|} / (α! β!)) ∂^α Γ(x_s) X_{αβ}[D] ∂^β Γ(y_r) + E_sr. (31)

Moreover, the truncation error can be expressed explicitly, and in the case of the far-field measurement it decays like a power of δ/ρ [2] (with δ the size of D and ρ the radius of the measurement circle), which is far better than the previous bound (18).

Expression (31) can be simplified to the matrix product form (26) by recombining all X_{αβ} of the same order and using the coefficients of harmonic polynomials. The resulting linear operator L is injective for the far-field system (Figure 1 (a)) with sufficiently many transmitters, and its singular values decay exponentially in the order; see [2, 3].

### 3.1 Linear estimator of X

Due to the global character of the polynomials, the coefficient matrix X is in general full. On the other hand, the fast decay of the truncation error under the polynomial basis suggests that the energy of X is concentrated in the low order coefficients. Therefore, simple truncation of the reconstruction order provides an effective regularization, and the coefficients up to the order K can be estimated by solving the least-squares problem

  X̂ := argmin_X ‖L(X) − V‖²_F, (32)

where ‖·‖_F denotes the Frobenius norm. The following bound on the error of the estimation can be established: for m, n = 1…K,

  (E[((X̂)_{mn} − (X)_{mn})²])^{1/2} ≤ C σ N⁻¹ S_{mn} (δ/ρ)^{−(m+n)/2}, (33)

with N = N_s = N_r the number of transmitters and S_{mn} a constant. As a consequence, the maximum resolving order is bounded by [2]

  K ≲ log_{δ/ρ} σ. (34)

Hence, the far-field measurement has a very limited resolution. However, the first few orders of coefficients contain important geometric information about the shape (the first order tells how a target resembles an equivalent ellipse), and can be used to construct shape descriptors for the identification of shapes. We refer the reader to [2] for detailed numerical results.
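The least-squares estimator (32) reduces, after vectorization, to an ordinary least-squares solve. A minimal sketch of ours (random, well-conditioned stand-ins for G_x, G_y, so this mimics the benign near-field regime; with noiseless data, X is recovered exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
K, Ns, Nr = 5, 9, 9
Gx = rng.standard_normal((K, Ns))
Gy = rng.standard_normal((K, Nr))
X_true = rng.standard_normal((K, K))
V = Gx.T @ X_true @ Gy                          # noiseless data, cf. (26)

A = np.kron(Gy.T, Gx.T)                         # vec(Gx^T X Gy) = A vec(X)
x_hat, *_ = np.linalg.lstsq(A, V.reshape(-1, order='F'), rcond=None)
X_hat = x_hat.reshape(K, K, order='F')          # the estimator (32)
assert np.allclose(X_hat, X_true, atol=1e-6)
```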

## 4 Wavelet basis and nonlinear estimation

In this section we use the wavelet basis (more exactly, the scaling functions φ_{j,n}) introduced in section 2.3.3 for the representation and the reconstruction of T_D. This yields a sparse and local representation, and makes wavelets an appropriate choice of basis for the near-field measurement, as it allows the reconstruction of X via ℓ1 minimization and the visual perception of D.

### 4.1 Wavelet coefficient matrix

For the wavelet basis, we use the scale number L (in place of the order K as in section 2.3) to denote the truncation order. The coefficient matrix X under the wavelet basis contains the approximation coefficients

  X_{n,n′} = T_D(φ_{L,n}, φ_{L,n′}), (35)

so that T_D(P_L f, P_L g) = f^⊤ X g, with P_L the orthogonal projector onto the approximation space V_L introduced in section 2.3.3, and f, g the coefficient vectors

  f = [⟨f, φ_{L,n}⟩]_n,  g = [⟨g, φ_{L,n}⟩]_n. (36)

The matrix X of a flower-shaped target is shown in Figure 3 (a).

### 4.2 Properties of the wavelet coefficient matrix

In the following we discuss some important properties of the wavelet coefficient matrix.

##### Bounds of matrix norm

Proposition 4.2 establishes bounds on the spectral norm of X, showing that ‖X‖₂ diverges as L → −∞. The proof is based on the inverse estimate and the polynomial exactness of the wavelet basis, and is given in Appendix C. When L → −∞, the spectral norm of the matrix X is bounded by

  C′ 2^{−L} ≤ ‖X‖₂ ≤ C 2^{−3L} (37)

with C, C′ constants independent of L.

##### Sparsity

From the definition (35) of X we observe that X_{n,n′} is non-zero only when the supports of both scaling functions intersect ∂D. Therefore, the non-zero coefficients of X carry geometric information on ∂D. Moreover, X is a sparse matrix. In fact, when the scale L → −∞ there are O(2^{−2L}) scaling functions contributing to the representation, so the dimension of X is O(2^{−2L}) × O(2^{−2L}). On the other hand, the number of scaling functions intersecting ∂D is O(2^{−L}). Hence, the number of non-zero coefficients is about O(2^{−2L}), and the sparsity of X is asymptotically O(2^{2L}).

##### Band diagonal structure

Numerical computations show that the pattern of non-zeros in X has a band diagonal structure. The largest coefficients appear around several principal band diagonals in a regular manner that reflects the different situations of interaction between scaling functions via the bilinear form T_D, as shown in Figure 3 (a). The major band diagonals describe the interaction between a φ_{L,n} and its immediate neighbors (the width of a band is proportional to the size of the support of φ). In particular, the main diagonal corresponds to the case n = n′, while the other band diagonals describe the interactions between non-overlapping φ_{L,n} and φ_{L,n′}.

##### Localization of ∂D

Numerical evidence further suggests that the strongest coefficients appear around the major band diagonals of X, i.e., when φ_{L,n} and φ_{L,n′} are close. Therefore, a large value of the diagonal coefficient X_{n,n} indicates the presence of ∂D in the support of φ_{L,n}. We plot a column of X in Figure 4, which corresponds to the interaction between one φ_{L,n} and all the others φ_{L,n′}. We call this the localization property of X. It indicates that the operator (λI − K*_D)⁻¹ can loosely preserve the essential support of a localized function. The next proposition gives a qualitative explanation when D is the unit disk.

Let D be the unit disk. As L → −∞ we have

  |T_D(φ_{L,n}, φ_{L,n′})| = O(2^{−2L}) for overlapping φ_{L,n}, φ_{L,n′}, and O(2^{−L}) otherwise. (38)
###### Proof.

For D the unit disk, one has [7]:

  K*_D[f](x) = (1/4π) ∫_{∂D} f(y) ds(y).

By simple manipulations one can deduce that

  (λI − K*_D)⁻¹[f](x) = λ⁻¹(I + (κ−1)K*_D)[f](x). (39)

Therefore, the bilinear form reduces to

  ∫_{∂D} φ_{L,n′} (λI − K*_D)⁻¹[∂φ_{L,n}/∂ν] ds = (1/λ) ∫_{∂D} φ_{L,n′} (∂φ_{L,n}/∂ν) ds + ((κ−1)/(4πλ)) ∫_{∂D} φ_{L,n′} ds ∫_{∂D} (∂φ_{L,n}/∂ν) ds. (40)

As L → −∞, the intersection between ∂D and the support of φ_{L,n} is well approximated by a line segment. By a change of variables, it follows that

  ∫_{∂D} φ_{L,n′} ds ∫_{∂D} (∂φ_{L,n}/∂ν) ds = O(2^{−L}). (41)

Similarly, one has

  ∫_{∂D} φ_{L,n′} (∂φ_{L,n}/∂ν) ds = O(2^{−2L}) for overlapping φ_{L,n}, φ_{L,n′}, and 0 otherwise. (42)

Substituting (41) and (42) into (40) yields (38). ∎

Hence, as L → −∞, the coefficients of the main diagonals behave like 2^{−2L} and dominate the other band diagonals, which behave like 2^{−L}.

### 4.3 Wavelet based imaging algorithms

The localization property of X can be used to visualize the target D. A simple algorithm, called imaging by diagonal, consists in taking the diagonal of X (i.e., the coefficients X_{n,n}) and reshaping it into a 2D image. Then the boundary of D can be read off from the image.

A drawback of this method is that the generated boundary has low resolution. In fact, any φ_{L,n} touching the boundary is susceptible to yield a numerically non-negligible value of X_{n,n}. Hence, the larger the support of the wavelet, the more wavelets intersect ∂D and the lower the resolution. An improved method consists in searching, for each index n, the index n′ maximizing the interaction between φ_{L,n} and φ_{L,n′}:

  n′ = argmax_{n′∈ℤ²} |T_D(φ_{L,n}, φ_{L,n′})|, (43)

and then accumulating |X_{n,n′}| at the index n′. We name this method imaging by maximum. It has higher resolution, since the effect of the wavelets merely touching ∂D is absorbed by their closest neighbors lying on ∂D. The procedure is summarized in Algorithm 1. Figures 5 (a) and (b) show a comparison between these two algorithms.
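Algorithm 1 itself is not reproduced here; the following is our own minimal reading of the two imaging rules above, applied to a toy coefficient matrix whose strong entries mimic a small boundary "ring" of pixels (the ring indices and all magnitudes are arbitrary stand-ins).

```python
import numpy as np

def image_by_diagonal(X, m):
    """Reshape the diagonal coefficients |X_{n,n}| into an m x m image."""
    return np.abs(np.diag(X)).reshape(m, m)

def image_by_maximum(X, m):
    """For each n, accumulate |X_{n,n'}| at the maximizing index n', cf. (43)."""
    img = np.zeros(m * m)
    for n in range(m * m):
        q = int(np.argmax(np.abs(X[n])))
        img[q] += abs(X[n, q])
    return img.reshape(m, m)

m = 8
boundary = [9, 10, 11, 17, 19, 25, 26, 27]     # toy "ring" of boundary pixels
X = 0.001 * np.ones((m * m, m * m))            # weak background interactions
for n in boundary:
    for q in boundary:
        X[n, q] = 1.0 if n == q else 0.5       # strong, localized interactions
img = image_by_maximum(X, m)                   # peaks exactly on the ring
```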

### 4.4 Reconstruction of X by ℓ1 minimization

As the scale L decreases, the dimension of X increases rapidly as O(2^{−2L}) × O(2^{−2L}). On the other hand, the band diagonal structure of X shows that the largest coefficients are distributed on the major band diagonals, which is an important piece of a priori information allowing to reduce considerably the dimension of the unknown to be reconstructed.

For this, we fix a bandwidth a priori and assume that the coefficient X_{n,n′} can be neglected when n and n′ are farther apart than this bandwidth. We construct accordingly a band diagonal mask M taking values 0 or 1, by choosing the bandwidth proportional to the support size of the wavelet. Remark that the mask constructed in this way is not adaptive and does not contain any information about the boundary of the target. Figure 3 (b) shows an example of such a mask.

Given the high dimension of X and its sparsity, we seek a sparse solution by solving the ℓ1 minimization problem [19, 13, 10]:

  X̂ := argmin_X ‖L(M ⊙ X) − V‖²_F + μ ‖M ⊙ X‖_{1,w}, (44)

where M is the band diagonal mask (Figure 3 (b)), ⊙ denotes the termwise multiplication, μ > 0 is the regularization parameter, and ‖·‖_{1,w} is the reweighted ℓ1 norm. We set the weights w in such a way that the operator L is normalized columnwise. The constant μ is determined by the universal threshold [19] (tuned manually to achieve the best result if necessary):

  μ ∝ σ √(N_s N_r) √(2 log ‖M‖₁) (45)

with ‖M‖₁ the number of non-zero values in M. Problem (44) admits a unique sparse solution under appropriate conditions [19], and can be solved numerically via efficient algorithms; see for example [8].
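Problems of the form (44) can be attacked by generic proximal-gradient schemes. The sketch below (ours, not the paper's actual solver) runs plain ISTA on a small vectorized instance: the mask restricts the unknowns, the gradient step uses 1/‖A‖², and each iteration ends with soft-thresholding; all dimensions and the ground truth are arbitrary stand-ins.

```python
import numpy as np

n_data, n_unk = 60, 40
rng = np.random.default_rng(4)
A = rng.standard_normal((n_data, n_unk)) / np.sqrt(n_data)  # vectorized L (stand-in)
mask = (np.arange(n_unk) % 3 != 2).astype(float)            # band mask, vectorized (stand-in)
x_true = np.zeros(n_unk)
x_true[[0, 6, 12]] = [1.5, -2.0, 1.0]                       # sparse truth on the mask
v = A @ (mask * x_true)                                     # noiseless data
mu = 0.01                                                   # cf. the threshold (45)

def objective(x):
    r = A @ (mask * x) - v
    return 0.5 * r @ r + mu * np.abs(mask * x).sum()

x = np.zeros(n_unk)
step = 1.0 / np.linalg.norm(A, 2) ** 2                      # gradient step 1/||A||^2
for _ in range(500):
    grad = mask * (A.T @ (A @ (mask * x) - v))
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0.0)  # soft-thresholding
```

Entries outside the mask stay exactly zero, so the effective number of unknowns is reduced, as discussed above.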

## 5 Numerical experiments

In this section we present some numerical results to illustrate the efficiency of the wavelet imaging algorithms proposed in section 4.3. The wavelet used here is the Daubechies wavelet of order 6. We consider a near-field measurement system (Figure 1 (b)) with uniformly distributed sources and receivers.

### 5.1 Parameter settings

We set the conductivity constant κ and use a flower-shaped target as D. The whole procedure of the experiment is as follows. First, the data V_sr are simulated by evaluating (8) for all sources and receivers of the measurement system. A white noise of level

  σ = σ₀ ‖V‖_F / √(N_s N_r) (46)

is added to obtain the noisy data, with σ₀ the percentage of noise in the data. Thereafter the ℓ1 minimization problem (44) is solved with the parameters and methods described in section 4.4. Finally, from the reconstructed coefficients X̂, we apply Algorithm 1 to obtain a pixelized image of ∂D.

The vectors γ_s, γ_r in (23) contain the wavelet coefficients of the Green functions Γ_s and Γ_r, respectively. They are computed by first sampling Γ_s and Γ_r on a fine Cartesian lattice on Ω (the singularity point is numerically smoothed) and then applying the discrete fast wavelet transform on these samples to obtain the coefficients at the desired scale.
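The noise calibration (46) can be sketched directly (our stand-in data; with this scaling, the Frobenius norm of the added noise is about σ₀ times that of the data):

```python
import numpy as np

rng = np.random.default_rng(5)
Ns, Nr, sigma0 = 32, 32, 0.2
V = rng.standard_normal((Ns, Nr))            # stand-in for the simulated data

sigma = sigma0 * np.linalg.norm(V, 'fro') / np.sqrt(Ns * Nr)   # eq. (46)
V_noisy = V + sigma * rng.standard_normal((Ns, Nr))

# by construction, E ||W||_F is approximately sigma0 * ||V||_F
ratio = np.linalg.norm(V_noisy - V, 'fro') / np.linalg.norm(V, 'fro')
```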

### 5.2 Results of the imaging algorithm

Figures 6 (a, b) show the imaging results obtained at a fixed scale with different noise levels. It can be seen that even in a highly noisy environment the boundary of D can still be correctly located.

Remark that for the near-field internal measurement system of Figure 1 (b), one can obtain an image of D directly from the data (in fact, V_sr defined by (8) has large amplitude if x_s and/or y_r is close to ∂D). Nonetheless, such a direct imaging method is far less robust to noise than the wavelet based algorithm, and its resolution is limited by the density of the transmitters, as shown in Figures 6 (c, d).

In Figure 7 the same imaging experiments with noisy data were conducted at a finer scale. We notice that the images returned by Algorithm 1 have a dimension much higher than that of the grid of transmitters. Furthermore, the results remain robust up to a high noise level. This confirms the super-resolution character of the wavelet based imaging algorithm.

## 6 Discussion

### 6.1 Effect of the conductivity constant κ

The constant κ, as defined in section 2, is actually the ratio between the conductivity of the target and that of the background (set to 1 in this paper). Further numerical experiments suggest that the performance of Algorithm 1 depends on κ: the results may deteriorate when κ becomes large. This can be explained easily for the case of the unit disk. In fact, it can be seen from (40) that the ratio between the overlapping and the non-overlapping (in terms of the scaling functions φ_{L,n}) coefficients of X varies with κ as O(1/(κ−1)). Hence, the localization property (section 4.2) becomes more (resp. less) pronounced when κ → 1 (resp. κ → ∞), and the imaging algorithm is impacted accordingly. Nonetheless, we note also that when κ → 1, the target becomes indistinguishable from the background and the measured data decrease to zero (without considering the noise). These observations suggest that in practice there may exist some numerical ranges for κ and for the noise level on which the imaging algorithm is more or less effective.

### 6.2 Representation with the wavelet ψ

In section 4 we used only the scaling function φ for the approximation and the representation of T_D, while it is also possible to use the wavelet functions ψ^k together with φ to represent T_D and obtain another form of X. More precisely, let W_j be the detail space spanned by {ψ^k_{j,n}, k = 1, 2, 3, n ∈ ℤ²} and Q_j the orthogonal projector onto W_j. For any two scales L < J, it holds

  T_D(P_L f, P_L g) = Σ_{j,j′=L+1}^{J} T_D(Q_j f, Q_{j′} g)  [terms D_{j,j′}]
                    + Σ_{j′=L+1}^{J} T_D(P_J f, Q_{j′} g)  [terms C_{J,j′}]
                    + Σ_{j=L+1}^{J} T_D(Q_j f, P_J g)  [terms B_{j,J}]
                    + T_D(P_J f, P_J g)  [term A_{J,J}]
                    = f^⊤ X g, (47)

where we used the decomposition P_L = P_J + Σ_{j=L+1}^{J} Q_j, and f, g are respectively the coefficient vectors of f and g under this basis. The coefficient matrix now takes the block form

  X = X[D,L,J] =
  ⎛ D_{L+1,L+1} … D_{L+1,J}  B_{L+1,J} ⎞
  ⎜      ⋮      ⋱     ⋮          ⋮     ⎟
  ⎜ D_{J,L+1}   … D_{J,J}    B_{J,J}   ⎟
  ⎝ C_{J,L+1}   … C_{J,J}    A_{J,J}   ⎠, (48)

where D_{j,j′}, B_{j,J}, C_{J,j′} and A_{J,J} denote the block matrices corresponding to the terms marked above in (47), respectively. In particular, D_{j,j′} contains the detail coefficients of the type T_D(ψ^k_{j,n}, ψ^{k′}_{j′,n′}), while A_{J,J} contains the approximation coefficients T_D(φ_{J,n}, φ_{J,n′}). Remark that in the case J = L, X is reduced to A_{L,L}, which is identical to the coefficient matrix defined in (35).

Moreover, one can easily prove (using the conjugate filters) that for any L < J ≤ J′, the matrices X[D,L,J] and X[D,L,J′], regarded as vectors, are equivalent up to a unitary transform. Therefore the choice of the coarse scale J is not important, and the dimensions of these matrices are asymptotically equal as L → −∞ for a fixed domain Ω.

A natural question is to know whether the equivalent representation with wavelets is sparser than the one using only scaling functions. In Figure 8 we plot the decay of the coefficients of the four types of block matrices in X[D,L,J]. It can be seen that, for the numerical range considered here, the detail coefficients have a decay similar to that of the approximation coefficients. In fact, as for (35), the main reason for the sparsity of the detail coefficients is the intersection between the supports of the wavelets and the boundary ∂D, and for the same reason the localization property (i.e., Proposition 4.2) remains valid for the wavelets ψ^k. Hence the representation with ψ has a similar sparsity to X and does not present substantial advantages for the applications considered in this paper.