# Low-frequency source imaging in an acoustic waveguide

Time-harmonic far-field source array imaging in a two-dimensional waveguide is analyzed. A low-frequency situation is considered in which the diameter of the waveguide is slightly larger than the wavelength, so that the waveguide supports a limited number of guided modes, and the diameter of the antenna array is smaller than the wavelength, so that the standard resolution formulas in open media predict very poor imaging resolution. A general framework to analyze the resolution and stability performance of such antenna arrays is introduced. It is shown that planar antenna arrays perform better (in terms of resolution and stability with respect to measurement noise) than linear (horizontal or vertical) arrays and that vertical linear arrays perform better than horizontal arrays, for a given diameter. However, a fundamental limitation to imaging in waveguides is identified that is due to the form of the dispersion relation; it is intrinsic to scalar waves, whatever the complexity of the medium and the array geometry.


## 1 Introduction

We present a theoretical and numerical study of source imaging in two-dimensional waveguides, using an array of sensors that record acoustic waves. Source imaging in waveguides is of particular interest in underwater acoustics [6, 17]. In a closed waveguide the wavefield can be decomposed into a finite number of guided modes and an infinite number of evanescent modes. In an open waveguide the wavefield can be decomposed into a finite number of guided modes and an infinite number of radiating and evanescent modes. The evanescent (resp. radiating) mode components of the wavefield are in general negligible in the measured far-field data because they decay exponentially (resp. algebraically) with the propagation distance. The guided mode amplitudes can be extracted from the measured data if the array is large enough, and one can then propose an imaging method that exploits them. The idea of formulating the inverse problem in terms of the guided mode amplitudes has recently been considered by several authors, for source imaging [4] and for scatterer imaging [2, 5, 8, 19, 22]. However, the extraction of the guided mode amplitudes becomes challenging when the array is small [27, 28, 29].

In underwater acoustics, it is possible to deploy an antenna array in the oceanic waveguide, but the aperture of the array is usually limited. This issue is critical when addressing low-frequency signals whose wavelengths are of the same order as the diameter of the waveguide, so that 1) there is only a small number of guided modes and 2) the array diameter is smaller than the wavelength. This is typically the configuration we have in mind in this paper. We introduce a general framework to analyze the performance (in terms of resolution and stability) of such antenna arrays. Under ideal circumstances (i.e. in the absence of noise), the data collected by an antenna array covering a limited part of the cross section of a waveguide can be processed to transform them into the set of data that would have been collected by a vertical antenna covering the full cross section of the waveguide, which gives full access to the guided mode amplitudes. We explain this processing in detail in this paper. In more realistic configurations (i.e. in the presence of noise), the processing can become unstable and requires appropriate regularization; the imaging performance is then determined by the effective rank of an operator, which depends on the array geometry and the noise level, and we analyze different types of antennas from this point of view. We show that, for a given diameter, planar antenna arrays perform much better (in terms of stability with respect to measurement noise) than linear (vertical or horizontal) arrays, and that vertical linear arrays perform better than horizontal linear arrays. However, we exhibit and clarify a fundamental limitation to imaging in waveguides that is due to the form of the dispersion relation and that is intrinsic to scalar waves, whatever the complexity of the medium and the array geometry.

The paper is organized as follows. In Sections 2 and 3 we describe the waveguide geometry and the source array imaging problem. In Section 4 we show how to estimate the guided mode amplitudes from the array data. In Section 5 we address the case of large and dense antenna arrays (large means larger than the wavelength). In Section 6 we address in detail the case of small and discrete antenna arrays and consider different array geometries.

## 2 Waveguide geometry

Let us consider a two-dimensional waveguide whose axis is along the $x$-axis and whose cross section is the interval $(0,L)$ (see figure 1). For the sake of simplicity we may consider a Dirichlet condition at $z=0$ (free surface) and a Neumann or Dirichlet condition at $z=L$ (bottom). The forthcoming results can be extended to arbitrary closed or open waveguides, such as Pekeris waveguides. The index of refraction can be constant or variable, but it depends only on $z$. The wavefield $p$ transmitted by a time-harmonic source $s$ at frequency $\omega$ satisfies the Helmholtz equation

$$\Big(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial z^2}\Big)p(x,z)+\frac{\omega^2}{c^2(z)}\,p(x,z)=-s(x,z),\qquad (x,z)\in\mathbb{R}\times(0,L), \tag{1}$$

subject to the appropriate boundary conditions at $z=0$ and $z=L$.

The eigenmodes $\phi_j$ (real-valued and orthonormal) and eigenvalues $\lambda_j$ (real-valued) of the self-adjoint operator $\frac{d^2}{dz^2}+\frac{\omega^2}{c^2(z)}$ at frequency $\omega$ satisfy:

$$\frac{d^2}{dz^2}\phi_j(z)+\frac{\omega^2}{c^2(z)}\,\phi_j(z)=\lambda_j\,\phi_j(z). \tag{2}$$

There are $N$ guided modes, for which $\lambda_j>0$, and we set $\beta_j=\sqrt{\lambda_j}$, $j=1,\dots,N$. The other modes, for which $\lambda_j<0$, are evanescent (i.e. their amplitudes decay exponentially in $|x|$). We assume throughout the paper that the frequency $\omega$ is such that $\lambda_j\neq 0$ for all $j$.

## 3 Source imaging

We consider the case of an antenna array localized in the neighborhood of the plane $x=0$. We assume that the antenna array is supported in the domain $\mathcal A$. The domain $\mathcal A$ can be:
(i) a finite collection of points (discrete array),
(ii) a square (continuum approximation of a dense planar array),
(iii) a vertical line segment (continuum approximation of a dense linear vertical array),
(iv) a horizontal line segment (continuum approximation of a dense linear horizontal array localized at depth $z_a$).
We present a unified approach to these cases and we remark that this approach can be readily extended to other cases. In each case we associate a corresponding uniform measure $\mu$ with unit mass over $\mathcal A$, such that for any test function $f$:

$$\int_{\mathcal A}f(x,z)\,\mu(d\boldsymbol x)=\begin{cases}\dfrac{1}{M}\displaystyle\sum_{m=1}^{M}f(x_m,z_m)&\text{(discrete array with }M\text{ antennas)},\\[2mm]\dfrac{1}{|\mathcal A|}\displaystyle\int_{\mathcal A}f(x,z)\,d\sigma(x,z)&\text{(continuum cases)},\end{cases} \tag{3}$$

where $d\sigma$ denotes the Lebesgue measure (area or length element) on $\mathcal A$ and $|\mathcal A|$ its total mass.

A time-harmonic acoustic signal is transmitted by a distant source localized far from the array (see figure 1). The recorded signal is

$$p(x,z)=\sum_{j=1}^{N}a_{j,o}\,\phi_j(z)\,e^{-i\beta_j x},\qquad (x,z)\in\mathcal A, \tag{4}$$

where the mode amplitudes $a_{j,o}$ are determined by the source and where we have not written the evanescent modes, which is justified when the distance from the source to the antenna array is much larger than the wavelength. This expression shows that the maximal information about the source available in the data recorded by the antenna array is the vector $\boldsymbol a_o=(a_{j,o})_{j=1}^{N}$. The imaging procedure can be decomposed into two steps: 1) estimation of the vector $\boldsymbol a_o$ and 2) exploitation of the estimated vector to localize the source.

If we can obtain an estimate $\boldsymbol a$ of the vector $\boldsymbol a_o$ from the data $(p(x,z))_{(x,z)\in\mathcal A}$, then we can migrate this vector in order to localize the source by application of the imaging function defined by:

$$\mathcal I[\boldsymbol a](x,z):=2i\sum_{j=1}^{N}\beta_j\,e^{i\beta_j x}\,\phi_j(z)\,\overline{a_j}, \tag{5}$$

where the imaging point $(x,z)$ ranges over a compactly supported search region and the bar stands for complex conjugate. We can check that, in the case of a point-like source at $(x_o,z_o)$, i.e. $s(x,z)=\delta(x-x_o)\delta(z-z_o)$, we have $a_{j,o}=\frac{i}{2\beta_j}\phi_j(z_o)\,e^{i\beta_j x_o}$, $j=1,\dots,N$, and if we can estimate perfectly the vector $\boldsymbol a_o$ from the data (which happens in particular when the antenna array spans the waveguide cross section, see below), then the imaging function has the form

$$\mathcal I[\boldsymbol a_o](x,z)=\sum_{j=1}^{N}e^{i\beta_j(x-x_o)}\,\phi_j(z)\,\phi_j(z_o), \tag{6}$$

which is a peak centered at the source position $(x_o,z_o)$. The resolution and stability properties of the imaging function (5) have been analyzed in [4]. The main result is that the width of the peak is approximately equal to the resolution limit $\lambda_o/2$, where $\lambda_o$ is the wavelength (with the background velocity).

###### Remark 3.1

The imaging function (5) is actually a reverse-time migration-type function [11, Chapter 20] (see also [16, 18, 20, 23, 12]). Indeed, a reverse-time imaging function can be defined as:

$$\mathcal I_{RT}[p](x,z):=-4\int_{\mathcal A}\partial_{x'}^2\hat G(x,z;x',z')\,\overline{p(x',z')}\,\mu(d\boldsymbol x'), \tag{7}$$

where $\hat G$ is the Green’s function of the waveguide. If we take into account only the guided modes, then the Green’s function has the form:

$$\hat G(x,z;x',z')=\frac{i}{2}\sum_{j=1}^{N}\frac{1}{\beta_j}\,e^{i\beta_j|x-x'|}\,\phi_j(z)\,\phi_j(z'), \tag{8}$$

and we find that, for $x$ beyond the support of the array (i.e. $x>x'$ for all $(x',z')\in\mathcal A$),

$$\mathcal I_{RT}[p](x,z)=2i\sum_{j,j'=1}^{N}\beta_j\,e^{i\beta_j x}\,\phi_j(z)\,\overline{A_{jj'}}\,\overline{a_{j',o}}, \tag{9}$$

where

$$A_{jj'}=\int_{\mathcal A}e^{i(\beta_j-\beta_{j'})x'}\,\phi_j(z')\,\phi_{j'}(z')\,\mu(d\boldsymbol x'), \tag{10}$$

which is close to the imaging function (5) applied to $\boldsymbol a_o$ when the matrix $A=(A_{jj'})$ is close to the identity. Reverse-time migration functions are known to be efficient source imaging functions, as they can be seen as solutions of least squares imaging [13, Chapter 4, Section 4.1]. They are the best estimators to localize point-like sources in the search domain (here, the interior of the waveguide), in the sense that the position of the maximum of the modulus of the reverse-time migration function is the maximum likelihood estimator of the source position when the source is point-like and the data are corrupted by additive noise [1].

The imaging function (5) is very efficient and has good resolution properties, but it requires estimating the mode amplitudes of the recorded wavefield. If the antenna array is dense, vertical, and spans the full cross section of the waveguide, then the mode amplitudes can easily be obtained by projection of the observed wavefield onto the mode profiles:

$$\int_0^{L}p(x=0,z)\,\phi_j(z)\,dz=a_{j,o},\qquad j=1,\dots,N. \tag{11}$$

We will see in the next section that it is possible to get good estimates of the mode amplitudes even when the antenna array covers only a limited part of the cross section of the waveguide.

## 4 Estimation of the mode amplitudes

When the antenna array covers only a limited part of the cross section of the waveguide, we would like to extract the vector $\boldsymbol a_o$ from the data $(p(x,z))_{(x,z)\in\mathcal A}$ only. This is actually possible, provided we know the mode profiles $\phi_j$ and the modal wavenumbers $\beta_j$, $j=1,\dots,N$.

### 4.1 Perfect estimation

In the absence of any noise or measurement error, the following method can be implemented to estimate the vector $\boldsymbol a_o$ (this is a general version of the weighted projection method proposed in [28]):

1. Compute the Hermitian positive semi-definite matrix $A_{\mathcal A}$ of size $N\times N$ (as in (10)):

$$(A_{\mathcal A})_{jl}:=\int_{\mathcal A}\phi_j(z)\,\phi_l(z)\,e^{i(\beta_j-\beta_l)x}\,\mu(d\boldsymbol x),\qquad j,l=1,\dots,N. \tag{12}$$
2. Diagonalize the matrix $A_{\mathcal A}=V_{\mathcal A}D_{\mathcal A}V_{\mathcal A}^\dagger$, with $D_{\mathcal A}$ diagonal and $V_{\mathcal A}$ unitary (here and below $\dagger$ stands for conjugate transpose).

3. Introduce the reduced mode profiles:

$$\psi_l(x,z):=\sum_{j=1}^{N}(V_{\mathcal A})_{jl}\,\phi_j(z)\,e^{-i\beta_j x},\qquad l=1,\dots,N,\quad (x,z)\in\mathcal A. \tag{13}$$
4. Compute the vector $\boldsymbol b=(b_l)_{l=1}^{N}$ from the data by projection onto the reduced mode profiles:

$$b_l=\int_{\mathcal A}p(x,z)\,\overline{\psi_l(x,z)}\,\mu(d\boldsymbol x),\qquad l=1,\dots,N. \tag{14}$$
5. Compute the vector

$$\boldsymbol a=V_{\mathcal A}\,D_{\mathcal A}^{-1}\,\boldsymbol b. \tag{15}$$

(If $D_{\mathcal A}$ is singular, then use the Moore-Penrose pseudo-inverse $D_{\mathcal A}^{+}$ instead of $D_{\mathcal A}^{-1}$, i.e. the diagonal matrix with diagonal coefficients $1/(D_{\mathcal A})_{jj}$ if $(D_{\mathcal A})_{jj}\neq 0$ and $0$ otherwise.)

###### Proposition 4.1

If $A_{\mathcal A}$ is nonsingular, then $\boldsymbol a=\boldsymbol a_o$.

Proof. Let us study the method (12)-(15). From (4), we get

$$b_l=\sum_{l'=1}^{N}\int_{\mathcal A}\overline{\psi_l(x,z)}\,\phi_{l'}(z)\,e^{-i\beta_{l'}x}\,\mu(d\boldsymbol x)\,a_{l',o}=\sum_{l'=1}^{N}\big(D_{\mathcal A}V_{\mathcal A}^\dagger\big)_{ll'}\,a_{l',o},$$

i.e., $\boldsymbol b=D_{\mathcal A}V_{\mathcal A}^\dagger\boldsymbol a_o$. If $A_{\mathcal A}$ is nonsingular, then it is positive definite and all eigenvalues $(D_{\mathcal A})_{jj}$ are nonzero. We then get by (15):

$$\boldsymbol a=V_{\mathcal A}D_{\mathcal A}^{-1}\boldsymbol b=V_{\mathcal A}D_{\mathcal A}^{-1}D_{\mathcal A}V_{\mathcal A}^\dagger\boldsymbol a_o=V_{\mathcal A}V_{\mathcal A}^\dagger\boldsymbol a_o=\boldsymbol a_o,$$

where the last equality follows from the unitarity of the matrix $V_{\mathcal A}$.
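The five steps (12)-(15) can be sketched numerically for a discrete vertical array at $x=0$ (case (i)), where all the integrals against $\mu$ become exact finite sums; the waveguide parameters and sensor positions below are illustrative assumptions.

```python
import numpy as np

# Assumed homogeneous Dirichlet waveguide
L, k0 = 1.0, 20.0
N = int(np.floor(k0 * L / np.pi))      # number of guided modes (here 6)
j = np.arange(1, N + 1)

def phi(z):
    """Orthonormal Dirichlet mode profiles at depths z, shape (len(z), N)."""
    return np.sqrt(2.0 / L) * np.sin(np.pi * np.outer(z, j) / L)

rng = np.random.default_rng(0)
a_o = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # true amplitudes

z_m = np.linspace(0.1, 0.6, 20)        # 20 sensors covering part of the depth
Phi = phi(z_m)                         # sensing matrix, shape (M, N)
p = Phi @ a_o                          # noiseless data, eq. (4) at x = 0

# Steps 1-2: matrix A_A (discrete version of (12)) and A_A = V D V^dagger
A = Phi.conj().T @ Phi / len(z_m)
D, V = np.linalg.eigh(A)
# Steps 3-5: reduced mode profiles (13), projection (14), inversion (15)
Psi = Phi @ V                          # psi_l evaluated at the sensors
b = Psi.conj().T @ p / len(z_m)
a_est = V @ (b / D)

print(np.max(np.abs(a_est - a_o)))     # small (roundoff amplified by cond(A_A))
```

The recovery is exact in exact arithmetic; in floating point the error is the machine precision amplified by the conditioning of $A_{\mathcal A}$, which is precisely what motivates the regularization of the next subsection.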

### 4.2 Regularized estimation

The final step (15) requires the matrix $D_{\mathcal A}$ to be positive definite and well-conditioned for stability. The conditioning of $D_{\mathcal A}$ (equivalently, of $A_{\mathcal A}$) is determined by the geometry of the array $\mathcal A$. When the array does not cover the cross section of the waveguide, the conditioning may be poor and one should use a regularized pseudo-inverse $D^{\epsilon,+}_{\mathcal A}$ in place of $D_{\mathcal A}^{-1}$:

$$\boldsymbol a^\epsilon=V_{\mathcal A}\,D^{\epsilon,+}_{\mathcal A}\,\boldsymbol b, \tag{17}$$

where

$$D^{\epsilon,+}_{\mathcal A}=\mathrm{Diag}\Big(\big(\psi_\epsilon((D_{\mathcal A})_{jj})\big)_{j=1}^{N}\Big), \tag{18}$$

with

$$\psi_\epsilon(D)=\frac{D}{D^2+\epsilon^2}\qquad\text{(Tikhonov regularization)} \tag{19}$$

or

$$\psi_\epsilon(D)=\frac{1}{D}\,\mathbf 1_{(\epsilon,+\infty)}(D)\qquad\text{(hard threshold regularization)}. \tag{20}$$

We observe that we may not recover exactly the mode amplitudes when using the regularized method:

$$\boldsymbol a^\epsilon=\boldsymbol a_o-V_{\mathcal A}\,R^{\epsilon}_{\mathcal A}\,V_{\mathcal A}^\dagger\,\boldsymbol a_o, \tag{21}$$

where the last term is an error term given in terms of the diagonal matrix $R^{\epsilon}_{\mathcal A}$ defined by

$$R^{\epsilon}_{\mathcal A}=\mathrm{Diag}\Big(\big(1-(D_{\mathcal A})_{jj}\,\psi_\epsilon((D_{\mathcal A})_{jj})\big)_{j=1}^{N}\Big). \tag{22}$$

In the case of Tikhonov regularization, we have $(R^{\epsilon}_{\mathcal A})_{jj}=\epsilon^2/\big((D_{\mathcal A})_{jj}^2+\epsilon^2\big)$. In the case of hard threshold regularization, we have $(R^{\epsilon}_{\mathcal A})_{jj}=\mathbf 1_{[0,\epsilon]}\big((D_{\mathcal A})_{jj}\big)$.
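A minimal sketch of the two filter functions (19)-(20) and of the induced bias factors (22); the eigenvalues `d` and the level `eps` below are illustrative assumptions.

```python
import numpy as np

def tikhonov(d, eps):
    """Tikhonov filter psi_eps(d) = d / (d^2 + eps^2), eq. (19)."""
    return d / (d**2 + eps**2)

def hard_threshold(d, eps):
    """Hard-threshold filter psi_eps(d) = 1/d if d > eps, else 0, eq. (20)."""
    return np.where(d > eps, 1.0 / np.maximum(d, eps), 0.0)

d = np.array([1.0, 0.5, 1e-2, 1e-8])   # assumed eigenvalues of A_A, two tiny
eps = 0.1
for psi in (tikhonov, hard_threshold):
    r = 1.0 - d * psi(d, eps)          # diagonal of the bias matrix (22)
    print(psi.__name__, r)
```

For well-resolved eigenvalues the bias factor is close to $0$; for eigenvalues below the regularization level it is close to $1$, i.e. the corresponding components of $\boldsymbol a_o$ are essentially lost.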

### 4.3 Regularized estimation with measurement noise

As is well-known [3, 28] and as is shown by (21), regularization induces a bias, i.e. a deterministic error, but it makes the estimation method much more robust with respect to noise, i.e. it can reduce the random error due to measurement noise. This is a manifestation of the classical bias-variance tradeoff [15]. In order to illustrate this general statement, we here assume that the measurements are corrupted by an additive complex circular Gaussian noise:

$$p_{\rm meas}(x,z)=p(x,z)+w(x,z),\qquad (x,z)\in\mathcal A, \tag{23}$$

where $w$ is a complex circularly symmetric Gaussian process with mean zero and delta covariance function:

$$\mathbb{E}\big[w(x,z)\,\overline{w(x',z')}\big]=\sigma^2\,\delta_\mu(\boldsymbol x-\boldsymbol x') \tag{24}$$

for $(x,z),(x',z')\in\mathcal A$ (here $\delta_\mu$ is the Dirac distribution with respect to the measure $\mu$, i.e. $\int_{\mathcal A}f(\boldsymbol x')\,\delta_\mu(\boldsymbol x-\boldsymbol x')\,\mu(d\boldsymbol x')=f(\boldsymbol x)$; for a discrete array with $M$ antennas this means $\mathbb{E}[w(\boldsymbol x)\overline{w(\boldsymbol x')}]=\sigma^2 M\,\mathbf 1_{\boldsymbol x=\boldsymbol x'}$).

The estimated vector (17) is here given by

$$\boldsymbol a^\epsilon=V_{\mathcal A}\,D^{\epsilon,+}_{\mathcal A}\,\boldsymbol b_{\rm meas}, \tag{25}$$

where the vector $\boldsymbol b_{\rm meas}$ is obtained by projecting the measurements onto the reduced mode profiles as in (14):

$$b_{{\rm meas},l}=\int_{\mathcal A}p_{\rm meas}(x,z)\,\overline{\psi_l(x,z)}\,\mu(d\boldsymbol x),\qquad l=1,\dots,N. \tag{26}$$
###### Proposition 4.2

The mean square error consists of a bias term and a variance term:

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\boldsymbol a_o\|^2\big]=\big\|\mathbb{E}[\boldsymbol a^\epsilon]-\boldsymbol a_o\big\|^2+\mathbb{E}\big[\|\boldsymbol a^\epsilon-\mathbb{E}[\boldsymbol a^\epsilon]\|^2\big], \tag{27}$$

$$\big\|\mathbb{E}[\boldsymbol a^\epsilon]-\boldsymbol a_o\big\|^2=\sum_{j=1}^{N}\big[1-(D_{\mathcal A})_{jj}\,\psi_\epsilon((D_{\mathcal A})_{jj})\big]^2\,\big|(V_{\mathcal A}^\dagger\boldsymbol a_o)_j\big|^2, \tag{28}$$

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\mathbb{E}[\boldsymbol a^\epsilon]\|^2\big]=\sigma^2\sum_{j=1}^{N}(D_{\mathcal A})_{jj}\,\psi_\epsilon((D_{\mathcal A})_{jj})^2. \tag{29}$$

Proof. The vector (26) has the form

$$b_{{\rm meas},l}=\int_{\mathcal A}p_{\rm meas}(x,z)\,\overline{\psi_l(x,z)}\,\mu(d\boldsymbol x)=\int_{\mathcal A}p(x,z)\,\overline{\psi_l(x,z)}\,\mu(d\boldsymbol x)+w_l,\qquad l=1,\dots,N,$$

with $w_l=\int_{\mathcal A}w(x,z)\,\overline{\psi_l(x,z)}\,\mu(d\boldsymbol x)$. The random vector $(w_l)_{l=1}^{N}$ is Gaussian with mean zero and covariance matrix:

$$\mathbb{E}\big[w_l\,\overline{w_{l'}}\big]=\sigma^2\int_{\mathcal A}\overline{\psi_l(x,z)}\,\psi_{l'}(x,z)\,\mu(d\boldsymbol x)=\sigma^2\big(V_{\mathcal A}^\dagger A_{\mathcal A}V_{\mathcal A}\big)_{ll'}=\sigma^2(D_{\mathcal A})_{ll'}.$$

This means that the random variables $w_l$, $l=1,\dots,N$, are independent Gaussian with mean zero and variances $\sigma^2(D_{\mathcal A})_{ll}$.

The estimated vector (25) has mean

$$\mathbb{E}[\boldsymbol a^\epsilon]=\boldsymbol a_o-V_{\mathcal A}\,R^\epsilon_{\mathcal A}\,V_{\mathcal A}^\dagger\,\boldsymbol a_o,$$

and covariance

$$\mathrm{Cov}(\boldsymbol a^\epsilon)=\sigma^2\,V_{\mathcal A}\,\mathrm{Diag}\Big(\big((D_{\mathcal A})_{jj}\,\psi_\epsilon((D_{\mathcal A})_{jj})^2\big)_{j=1}^{N}\Big)\,V_{\mathcal A}^\dagger.$$

The mean square error consists of a bias term and a variance term:

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\boldsymbol a_o\|^2\big]=\big\|\mathbb{E}[\boldsymbol a^\epsilon]-\boldsymbol a_o\big\|^2+\mathbb{E}\big[\|\boldsymbol a^\epsilon-\mathbb{E}[\boldsymbol a^\epsilon]\|^2\big],$$

with

$$\big\|\mathbb{E}[\boldsymbol a^\epsilon]-\boldsymbol a_o\big\|^2=\big\|V_{\mathcal A}R^\epsilon_{\mathcal A}V_{\mathcal A}^\dagger\boldsymbol a_o\big\|^2=\sum_{j=1}^{N}\big[1-(D_{\mathcal A})_{jj}\,\psi_\epsilon((D_{\mathcal A})_{jj})\big]^2\,\big|(V_{\mathcal A}^\dagger\boldsymbol a_o)_j\big|^2,$$

and

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\mathbb{E}[\boldsymbol a^\epsilon]\|^2\big]=\mathrm{Tr}\big(\mathrm{Cov}(\boldsymbol a^\epsilon)\big)=\sigma^2\sum_{j=1}^{N}(D_{\mathcal A})_{jj}\,\psi_\epsilon((D_{\mathcal A})_{jj})^2.$$

###### Corollary 4.3

When $\sigma>0$, there exists a positive and finite $\epsilon$ that minimizes the mean square error.

In other words, regularization is always advantageous as soon as there is measurement noise.

Proof. For Tikhonov regularization (19) the mean square error reads

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\boldsymbol a_o\|^2\big]=\sum_{j=1}^{N}\frac{\epsilon^4}{\big((D_{\mathcal A})_{jj}^2+\epsilon^2\big)^2}\,\big|(V_{\mathcal A}^\dagger\boldsymbol a_o)_j\big|^2+\sigma^2\sum_{j=1}^{N}\frac{(D_{\mathcal A})_{jj}^3}{\big((D_{\mathcal A})_{jj}^2+\epsilon^2\big)^2}.$$

As $\epsilon\to 0$:

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\boldsymbol a_o\|^2\big]=\sigma^2\sum_{j=1}^{N}\frac{1}{(D_{\mathcal A})_{jj}}-2\epsilon^2\sigma^2\sum_{j=1}^{N}\frac{1}{(D_{\mathcal A})_{jj}^3}+O_{\epsilon\to 0}(\epsilon^4),$$

which shows that the mean square error is a strictly decreasing function of $\epsilon$ close to $\epsilon=0$. As $\epsilon\to+\infty$:

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\boldsymbol a_o\|^2\big]=\sum_{j=1}^{N}\big|(V_{\mathcal A}^\dagger\boldsymbol a_o)_j\big|^2-2\epsilon^{-2}\sum_{j=1}^{N}(D_{\mathcal A})_{jj}^2\,\big|(V_{\mathcal A}^\dagger\boldsymbol a_o)_j\big|^2+O_{\epsilon\to+\infty}(\epsilon^{-4}),$$

which shows that the mean square error is a strictly increasing function of $\epsilon$ at infinity. Since the mean square error is a continuous function of $\epsilon$, there exists an optimal $\epsilon$ that minimizes it, and this optimal $\epsilon$ is positive and finite.
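The bias-variance tradeoff of Corollary 4.3 can be illustrated numerically for Tikhonov regularization by evaluating (28)-(29) on assumed eigenvalues; the eigenvalues `d`, the coefficients `c2`, and the noise level `sigma2` below are illustrative, not taken from the paper.

```python
import numpy as np

d = np.array([1.0, 0.5, 1e-1, 1e-3])   # assumed eigenvalues of A_A, one small
c2 = np.ones_like(d)                   # assumed |(V^dagger a_o)_j|^2
sigma2 = 1e-2                          # assumed noise variance sigma^2

def mse(eps):
    """Bias^2 + variance for the Tikhonov filter, eqs. (27)-(29) with (19)."""
    bias2 = np.sum(eps**4 / (d**2 + eps**2)**2 * c2)
    var = sigma2 * np.sum(d**3 / (d**2 + eps**2)**2)
    return bias2 + var

eps_grid = np.logspace(-6, 1, 200)
vals = np.array([mse(e) for e in eps_grid])
print(mse(0.0), vals.min())  # the minimum over eps > 0 beats eps = 0
```

The mean square error is large at $\epsilon=0$ (the small eigenvalue amplifies the noise), large at $\epsilon\to\infty$ (pure bias), and minimal at an intermediate, strictly positive $\epsilon$.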

## 5 Large dense antenna array

In this section we address the case of a large dense antenna array in a waveguide supporting a large number of modes. “Large antenna array” means much larger than the wavelength, and “dense antenna array” means that the Nyquist criterion is satisfied by the locations of the antennas, so that the continuum approximation is valid. “Large number of modes” means that the diameter of the cross section of the waveguide is much larger than the wavelength. This situation corresponds to a high-frequency regime (i.e. small wavelength), which is not the main focus of this paper, but this regime has motivated recent work in the literature. We report in this section some interesting and original results about the performances of horizontal and vertical antenna arrays.

We will see below that the spectrum of the matrix $A_{\mathcal A}$ corresponding to a large dense antenna array typically contains two parts: eigenvalues of order one for $j\le r_{\mathcal A}$ and vanishing eigenvalues for $j>r_{\mathcal A}$. We can then say that $r_{\mathcal A}$ is the effective rank of the matrix, and the mean square error is approximately:

$$\mathbb{E}\big[\|\boldsymbol a^\epsilon-\boldsymbol a_o\|^2\big]\simeq\sum_{j=r_{\mathcal A}+1}^{N}\big|(V_{\mathcal A}^\dagger\boldsymbol a_o)_j\big|^2\simeq\frac{N-r_{\mathcal A}}{N}\,\|\boldsymbol a_o\|^2,$$

where we have used the rough approximation $|(V_{\mathcal A}^\dagger\boldsymbol a_o)_j|^2\simeq\|\boldsymbol a_o\|^2/N$. This shows that the quality of the estimation is directly related to the effective rank of the matrix: the larger the effective rank, the better the performance of the antenna array.

### 5.1 Vertical antenna array

The case of a vertical antenna array occupying the line $\{(0,z),\,z\in(0,2a)\}$ in a homogeneous waveguide with background speed $c_o$ and Dirichlet boundary conditions is addressed in [27, 28]. The number of guided modes is

$$N=\Big\lfloor\frac{k_oL}{\pi}\Big\rfloor=\Big\lfloor\frac{2L}{\lambda_o}\Big\rfloor, \tag{30}$$

where $\lambda_o=2\pi/k_o$ is the wavelength. In our framework, the problem is reduced to the analysis of the matrix

$$(A_{\mathcal A})_{jl}=\frac{1}{2a}\int_0^{2a}\phi_j(z)\,\phi_l(z)\,dz=\frac{1}{L}\,\mathrm{sinc}\Big(\frac{2\pi(l-j)a}{L}\Big)-\frac{1}{L}\,\mathrm{sinc}\Big(\frac{2\pi(l+j)a}{L}\Big), \tag{31}$$

because $\phi_j(z)=\sqrt{2/L}\,\sin(\pi jz/L)$ (here $\mathrm{sinc}(x)=\sin(x)/x$). $A_{\mathcal A}$ is a real, symmetric Toeplitz-minus-Hankel matrix. Its spectral properties are determined by the Toeplitz part, which can be studied in detail by the analysis conducted by Slepian on the discrete prolate spheroidal sequences [26]. When $N\gg1$ and $2a/L$ is fixed, the spectrum can be decomposed into three parts: there is a cluster of eigenvalues close to $1/(2a)$, another cluster of eigenvalues close to $0$, and an intermediate layer of eigenvalues in between that decay from $1/(2a)$ to $0$. The number of eigenvalues in the intermediate layer is $O(\log N)$. The number of “significant” eigenvalues close to $1/(2a)$ is approximately equal to

$$\Big[N\,\frac{2a}{L}\Big]=\Big[\frac{4a}{\lambda_o}\Big].$$

The number of “significant” eigenvalues, i.e. the effective rank of the matrix, is therefore the length $2a$ of the array divided by the resolution limit $\lambda_o/2$.
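This eigenvalue structure can be checked quickly by assembling the Toeplitz-minus-Hankel matrix (31) and counting the plateau of significant eigenvalues; the values of `L`, `lam0`, and `a` below are illustrative assumptions.

```python
import numpy as np

# Assumed waveguide depth and wavelength; a is the half-length of the array
L, lam0, a = 1.0, 0.05, 0.15
N = int(np.floor(2 * L / lam0))        # number of guided modes, eq. (30): 40

sinc = lambda x: np.sinc(x / np.pi)    # unnormalized sinc(x) = sin(x)/x
jj, ll = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1), indexing="ij")
A = (sinc(2 * np.pi * (jj - ll) * a / L)
     - sinc(2 * np.pi * (jj + ll) * a / L)) / L   # eq. (31)

eig = np.sort(np.linalg.eigvalsh(A))[::-1]
rank_eff = np.sum(eig > 0.5 * eig[0])  # eigenvalues in the significant cluster
print(rank_eff, 4 * a / lam0)          # effective rank vs. prediction 4a/lambda_o
```

The plateau sits near $1/(2a)$ (which is also an upper bound for the spectrum, since the array covers only part of the cross section), and the number of eigenvalues above half the maximum is close to the Shannon-type count $4a/\lambda_o$.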

The case of a vertical antenna array occupying a line segment $\{(0,z),\,z\in(b-a,b+a)\}$ in a homogeneous waveguide is similar, and the analysis of the previous case can be extended by the work of [25], as shown in [28]. The results are similar in terms of numbers of “significant” eigenvalues: the effective rank of the matrix is the length $2a$ of the array divided by the resolution limit $\lambda_o/2$.

Finally, we can consider the general case of a vertical antenna array that occupies a set of disjoint intervals $(b_k-a_k,b_k+a_k)$, $k=1,\dots,P$, within $(0,L)$.

###### Proposition 5.1

The matrix $A_{\mathcal A}$ obtained with a vertical antenna array occupying the lines $\{(0,z),\,z\in(b_k-a_k,b_k+a_k)\}$, $k=1,\dots,P$, has an effective rank $r_{\mathcal A}\simeq \frac{2N}{L}\sum_{k=1}^{P}a_k=\frac{4}{\lambda_o}\sum_{k=1}^{P}a_k$ when $N\gg1$.

Proof. The matrix $A_{\mathcal A}$ has the form

$$(A_{\mathcal A})_{jl}=\frac{1}{2\sum_{k=1}^{P}a_k}\sum_{k=1}^{P}\int_{b_k-a_k}^{b_k+a_k}\phi_j(z)\,\phi_l(z)\,dz,$$

which is a Toeplitz-minus-Hankel matrix whose Toeplitz symbol is

$$\rho(s)=\frac{1}{2\sum_{k=1}^{P}a_k}\sum_{k=1}^{P}\mathbf 1_{[\pi(b_k-a_k)/L,\,\pi(b_k+a_k)/L]}(s).$$

By [10, Theorem 3.2] the eigenvalues $(\sigma_j)_{j=1}^{N}$ of the matrix $A_{\mathcal A}$ satisfy, for any continuous function $g$:

$$\frac{1}{N}\sum_{j=1}^{N}g(\sigma_j)\ \xrightarrow[N\to+\infty]{}\ \frac{1}{\pi}\int_0^{\pi}g(\rho(s))\,ds.$$

This means that the empirical distribution of the eigenvalues of $A_{\mathcal A}$ weakly converges as $N\to+\infty$ to a measure supported by the two points $0$ and $1/\big(2\sum_{k=1}^{P}a_k\big)$:

$$\frac{1}{N}\sum_{j=1}^{N}\delta_{\sigma_j}(d\sigma)\ \xrightarrow[N\to+\infty]{}\ \Big(1-\frac{2\sum_{k=1}^{P}a_k}{L}\Big)\,\delta_0(d\sigma)+\frac{2\sum_{k=1}^{P}a_k}{L}\,\delta_{1/(2\sum_{k=1}^{P}a_k)}(d\sigma).$$

This shows that the effective rank of the matrix is $r_{\mathcal A}\simeq \frac{2N}{L}\sum_{k=1}^{P}a_k$.

In other words, the effective rank is the total length $2\sum_{k=1}^{P}a_k$ of the array divided by the resolution limit $\lambda_o/2$. It is interesting to note that, for a homogeneous waveguide and in the continuum approximation, the spatial distribution of the receivers along the vertical cross section does not play any role; only the total length of the linear antenna array matters.

### 5.2 Horizontal antenna array

The case of a horizontal antenna array occupying the line $\{(x,z_a),\,x\in(0,2a)\}$ in a homogeneous waveguide is qualitatively similar. The matrix has the form

$$(A_{\mathcal A})_{jl}=\frac{1}{2a}\int_0^{2a}\phi_j(z_a)\,\phi_l(z_a)\,e^{i(\beta_j-\beta_l)x}\,dx=\frac{2}{L}\sin\Big(\frac{\pi jz_a}{L}\Big)e^{i\beta_ja}\,\sin\Big(\frac{\pi lz_a}{L}\Big)e^{-i\beta_la}\,\mathrm{sinc}\big((\beta_j-\beta_l)a\big). \tag{32}$$

Unless the depth $z_a$ corresponds to a node of a mode, the spectral properties of $A_{\mathcal A}$ are related to those of the matrix $\big(\mathrm{sinc}((\beta_j-\beta_l)a)\big)_{j,l=1}^{N}$, which looks like the sinc kernel addressed by Slepian, upon substituting the modal wavenumbers $\beta_j$ for equispaced wavenumbers. The theoretical analysis of this case, as far as we know, has not yet been carried out. We will first do numerical simulations to propose some conjectures and then we will give the theoretical results.

Based on numerical simulations (see figure 2), we get the following conjecture: when $N\gg1$, the spectrum can be decomposed into two parts: there is a cluster of significant eigenvalues and another cluster of eigenvalues close to $0$ (see figure 2a). The number of significant eigenvalues is approximately equal to $2a/\lambda_o$ when $a/L$ is small, and smaller than $2a/\lambda_o$ when $a/L$ becomes of order one (see figure 2b). Note that $2a/\lambda_o$ is one half the number of significant eigenvalues for a vertical antenna array with the same length. This conjecture is proved in the following proposition in a more general case.

We now address the case of a horizontal antenna array occupying the disjoint intervals $(b_k-a_k,b_k+a_k)$, $k=1,\dots,P$, at depth $z_a$ in a homogeneous waveguide.

###### Proposition 5.2

For almost every $z_a$, the matrix $A_{\mathcal A}$ obtained with a horizontal antenna array occupying the lines $\{(x,z_a),\,x\in(b_k-a_k,b_k+a_k)\}$, $k=1,\dots,P$, has an effective rank equal to $\frac{2}{\lambda_o}\sum_{k=1}^{P}a_k$ when $N\gg1$ and the total length of the antenna array is sufficiently small.

Proof. The matrix $A_{\mathcal A}$ has the form

$$(A_{\mathcal A})_{jl}=\frac{1}{2\sum_{k=1}^{P}a_k}\sum_{k=1}^{P}\int_{b_k-a_k}^{b_k+a_k}\phi_j(z_a)\,\phi_l(z_a)\,e^{i(\beta_j-\beta_l)x}\,dx=\frac{1}{L}\sin\Big(\frac{\pi jz_a}{L}\Big)\sin\Big(\frac{\pi lz_a}{L}\Big)\frac{1}{\sum_{k=1}^{P}a_k}\sum_{k=1}^{P}2a_k\,e^{i(\beta_j-\beta_l)b_k}\,\mathrm{sinc}\big((\beta_j-\beta_l)a_k\big). \tag{33}$$

We first show the following result: if $\boldsymbol u$ is a vector with non-zero entries and $S$ is a symmetric real matrix, then the ranks of $S$ and $\mathrm{Diag}(\boldsymbol u)\,S\,\mathrm{Diag}(\boldsymbol u)$ are equal. Indeed, if $r$ is the rank of $S$, then $S=\sum_{k=1}^{r}\lambda_k\boldsymbol v_k\boldsymbol v_k^T$ with orthonormal vectors $\boldsymbol v_k$ and nonzero $\lambda_k$, and therefore $\mathrm{Diag}(\boldsymbol u)\,S\,\mathrm{Diag}(\boldsymbol u)=\sum_{k=1}^{r}\lambda_k\boldsymbol w_k\boldsymbol w_k^T$ with $\boldsymbol w_k=\mathrm{Diag}(\boldsymbol u)\boldsymbol v_k$ for $k=1,\dots,r$. The vectors $\boldsymbol w_k$ are linearly independent (if $\sum_k c_k\boldsymbol w_k=\boldsymbol 0$, then $\mathrm{Diag}(\boldsymbol u)\sum_k c_k\boldsymbol v_k=\boldsymbol 0$, and therefore $c_k=0$ for all $k$). This shows that the rank of $\mathrm{Diag}(\boldsymbol u)\,S\,\mathrm{Diag}(\boldsymbol u)$ is $r$.

From the previous result, for almost every $z_a$ (for all except