Multi-Step Knowledge-Aided Iterative ESPRIT for Direction Finding

05/01/2018
by   S. F. B. Pinto, et al.
PUC-Rio

In this work, we propose a subspace-based algorithm for DOA estimation which iteratively reduces the disturbance factors of the estimated data covariance matrix and incorporates prior knowledge which is gradually obtained on line. An analysis of the MSE of the reshaped data covariance matrix is carried out, along with comparisons between the computational complexities of the proposed and existing algorithms. Simulations focusing on closely-spaced sources, both uncorrelated and correlated, illustrate the improvements achieved.


I Introduction

In array signal processing, direction-of-arrival (DOA) estimation is a key task in a broad range of important applications including radar and sonar systems, wireless communications and seismology [1]. Traditional high-resolution methods for DOA estimation such as the multiple signal classification (MUSIC) method [2], the root-MUSIC algorithm [3], the estimation of signal parameters via rotational invariance techniques (ESPRIT) [4] and subspace techniques [5, 6, 7, 8, 9, 10, 11, 70, 26, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 37, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56], [57, 58, 59] exploit the eigenstructure of the input data matrix. These techniques may fail for reduced data sets or low signal-to-noise ratio (SNR) levels, where the expected estimation error is not asymptotic to the Cramér-Rao bound (CRB) [60]. The accuracy of the estimates of the covariance matrix is of fundamental importance in parameter estimation. Low levels of SNR or short data records can result in significant divergences between the true and the sample data covariance matrices. In practice, only a modest number of data snapshots is available, and when the number of snapshots is similar to the number of sensor array elements, the estimated and the true subspaces can differ significantly. Several approaches have been developed with the aim of enhancing the computation of the covariance matrix [61]-[70] and of dealing with large sensor-array systems [71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 97, 88, 89, 90, 91, 92, 93, 94, 95, 96, 98, 99], [100, 101, 102, 103, 104, 105, 106, 111, 108, 109, 110, 111, 112, 113, 114, 115, 118, 117, 118, 119, 120].

Diagonal loading [61] and shrinkage [62, 63, 64] techniques can enhance the estimate of the data covariance matrix by weighting it and increasing its diagonal entries by a real constant. Nevertheless, the eigenvectors remain the same, which leads to unaltered estimates of the signal and noise projection matrices obtained from the enhanced covariance matrix. Additionally, an improvement of the estimates of the covariance matrix can be achieved by employing forward/backward averaging and spatial smoothing approaches [65, 66]. The former doubles the number of original samples, with a corresponding enhancement. The latter extracts the array covariance matrix as the average of all covariance matrices from its sub-arrays, resulting in a greater number of samples. Both techniques are employed in signal decorrelation. An approach to improve MUSIC under the condition in which the number of snapshots and the number of sensor elements approach infinity was presented in [67]. Nevertheless, this technique is not as effective for a reduced number of snapshots. Other approaches to deal with reduced data sets or low SNR levels [68, 70] consist of reiterating the procedure of adding pseudo-noise to the observations, which results in new estimates of the covariance matrix. Then, the set of solutions is computed from previously stored DOA estimates.

In [121], two aspects resulting from the computation of DOAs for reduced data sets or low SNR levels have been studied using the root-MUSIC technique. The first aspect dealt with the probability of estimated signal roots taking a smaller magnitude than the estimated noise roots, which is an anomaly that leads to wrong choices of the roots closest to the unit circle. To mitigate this problem, different groups of roots are considered as potential solutions for the signal sources and the most likely one is selected [122]. The second aspect, shown in [123], refers to the fact that a reduced part of the true signal eigenvectors exists in the sample noise subspace (and vice-versa). Such coexistence has been expressed by the Frobenius norm of the related irregularity matrix, and its mathematical foundation has been introduced. An iterative technique to enhance the efficacy of root-MUSIC by reducing this anomaly through the gradual reshaping of the sample data covariance matrix has been reported. Inspired by the work in [121], we have developed an ESPRIT-based method known as Two-Step KAI-ESPRIT (TS-ESPRIT) [124], which combines those modifications of the sample data covariance matrix with the use of prior knowledge [125]-[131] about the covariance matrix of a set of impinging signals to enhance the estimation accuracy in the finite sample size region. In practice, this prior knowledge could come from signals originating from known base stations or from static users in a system. TS-ESPRIT determines the value of a correction factor that reduces the undesirable terms in the estimation of the signal and noise subspaces in an iterative process, resulting in better estimates.

In this work [132, 133], we present the Multi-Step KAI-ESPRIT (MS-KAI-ESPRIT) approach, which refines the covariance matrix of the input data via multiple steps of reduction of its undesirable terms. This work presents MS-KAI-ESPRIT in further detail, an analysis of the mean squared error (MSE) of the data covariance matrix free of undesired terms (side effects), a more accurate study of the computational complexity, and a comprehensive study of MS-KAI-ESPRIT and other competing techniques for scenarios with both uncorrelated and correlated signals. Unlike TS-ESPRIT, which makes use of only one iteration and available known DOAs, MS-KAI-ESPRIT employs multiple iterations and obtains prior knowledge on line. At each iteration of MS-KAI-ESPRIT, the initial Vandermonde matrix is updated by replacing an increasing number of steering vectors of initial estimates with their corresponding refined versions. In other words, at each iteration the knowledge obtained on line is updated, allowing the direction finding algorithm to correct the sample covariance matrix estimate, which yields more accurate estimates.

In summary, this work has the following contributions:

  • The proposed MS-KAI-ESPRIT technique.

  • An MSE analysis of the covariance matrix obtained with the proposed MS-KAI-ESPRIT algorithm.

  • A comprehensive performance study of MS-KAI-ESPRIT and competing techniques.

This paper is organized as follows. Section II describes the system model. Section III presents the proposed MS-KAI-ESPRIT algorithm. In section IV, an analytical study of the MSE of the data covariance matrix free of side-effects is carried out together with a study of the computational complexity of the proposed and competing algorithms. In Section V, we present and discuss the simulation results. Section VI concludes the paper.

II System Model

Let us assume that $P$ narrowband signals from far-field sources impinge on a uniform linear array (ULA) of $M$ sensor elements from directions $\theta_1, \theta_2, \ldots, \theta_P$. We also consider that the sensors are spaced from each other by a distance $d_s \leq \lambda/2$, where $\lambda$ is the signal wavelength, and that, without loss of generality, we have $\theta_1 \leq \theta_2 \leq \cdots \leq \theta_P$.

The $i$th data snapshot of the $M$-dimensional array output vector can be modeled as

$$\mathbf{x}(i) = \mathbf{A}\,\mathbf{s}(i) + \mathbf{n}(i), \qquad i = 1, 2, \ldots, N, \qquad (1)$$

where $\mathbf{s}(i) \in \mathbb{C}^{P \times 1}$ represents the zero-mean source data vector, $\mathbf{n}(i) \in \mathbb{C}^{M \times 1}$ is the vector of white circular complex Gaussian noise with zero mean and variance $\sigma_n^2$, and $N$ denotes the number of available snapshots.

The Vandermonde matrix $\mathbf{A} = [\mathbf{a}(\theta_1), \ldots, \mathbf{a}(\theta_P)] \in \mathbb{C}^{M \times P}$, known as the array manifold, contains the array steering vectors $\mathbf{a}(\theta_p)$ corresponding to the $p$th source, which can be expressed as

$$\mathbf{a}(\theta_p) = \left[1, \; e^{j 2\pi \frac{d_s}{\lambda}\cos\theta_p}, \; \ldots, \; e^{j 2\pi \frac{d_s}{\lambda}(M-1)\cos\theta_p}\right]^T, \qquad (2)$$

where $p = 1, \ldots, P$. Using the fact that $\mathbf{s}(i)$ and $\mathbf{n}(i)$ are modeled as uncorrelated linearly independent variables, the signal covariance matrix is calculated by

$$\mathbf{R} = E\left[\mathbf{x}(i)\,\mathbf{x}^H(i)\right] = \mathbf{A}\,\mathbf{R}_{ss}\,\mathbf{A}^H + \sigma_n^2\,\mathbf{I}_M, \qquad (3)$$

where the superscript $H$ and $E[\,\cdot\,]$ denote the Hermitian transposition and the expectation operator, $\mathbf{R}_{ss} = E\left[\mathbf{s}(i)\,\mathbf{s}^H(i)\right]$ is the source covariance matrix, and $\mathbf{I}_M$ stands for the $M$-dimensional identity matrix. Since the true signal covariance matrix is unknown, it must be estimated, and a widely-adopted approach is the sample average formula given by

$$\hat{\mathbf{R}} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{x}(i)\,\mathbf{x}^H(i), \qquad (4)$$

whose estimation accuracy is dependent on $N$.
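For illustration, the snapshot model (1) and the sample average estimator (4) can be sketched in NumPy. All values below (number of sensors, sources, snapshots, directions, noise level) are hypothetical, and a cosine angle convention for the steering vectors is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, N = 10, 2, 25               # sensors, sources, snapshots (hypothetical values)
wavelength, d = 1.0, 0.5          # signal wavelength and inter-element spacing

doas = np.deg2rad([20.0, 25.0])   # hypothetical source directions
m = np.arange(M)[:, None]
# steering matrix A (M x P); cosine convention assumed for the spatial phase
A = np.exp(1j * 2 * np.pi * (d / wavelength) * m * np.cos(doas))

# zero-mean circular complex Gaussian sources and noise
S = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
V = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

X = A @ S + V                      # snapshots x(i) = A s(i) + n(i), stacked as columns
R_hat = X @ X.conj().T / N         # sample covariance, as in (4)
```

The sample covariance is Hermitian by construction, which the subspace methods discussed below rely on.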

III Proposed MS-KAI-ESPRIT Algorithm

In this section, we present the proposed MS-KAI-ESPRIT algorithm and detail its main features. We start by expanding (4) using (1) as derived in [121]:

$$\hat{\mathbf{R}} = \frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{A}\,\mathbf{s}(i) + \mathbf{n}(i)\right)\left(\mathbf{A}\,\mathbf{s}(i) + \mathbf{n}(i)\right)^H = \mathbf{A}\,\hat{\mathbf{R}}_{ss}\,\mathbf{A}^H + \hat{\mathbf{R}}_{nn} + \mathbf{A}\,\hat{\mathbf{R}}_{sn} + \hat{\mathbf{R}}_{ns}\,\mathbf{A}^H, \qquad (5)$$

where $\hat{\mathbf{R}}_{ss}$, $\hat{\mathbf{R}}_{nn}$, $\hat{\mathbf{R}}_{sn} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{s}(i)\,\mathbf{n}^H(i)$ and $\hat{\mathbf{R}}_{ns} = \hat{\mathbf{R}}_{sn}^H$ denote the corresponding sample correlation matrices.

The first two terms in (5) can be considered as estimates of the two summands of (3), which represent the signal and the noise components, respectively. The last two terms in (5) are undesirable side effects, which can be seen as estimates of the correlation between the signal and the noise vectors. The system model under study is based on noise vectors which are zero-mean and also independent of the signal vectors. Thus, the signal and noise components are uncorrelated with each other. As a consequence, for a large enough number of samples $N$, the last two terms of (5) tend to zero. Nevertheless, in practice the number of available samples can be limited. In such situations, the last two terms in (5) may have significant values, which causes the deviation of the estimated signal and noise subspaces from the true signal and noise subspaces.
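The decay of the cross terms with the number of snapshots can be verified numerically. The sketch below (hypothetical dimensions) measures the Frobenius norm of the sample signal-noise correlation for a small and a large $N$:

```python
import numpy as np

def cross_term_norm(N, M=10, P=2, seed=0):
    """Frobenius norm of the sample correlation (1/N) sum_i s(i) n(i)^H."""
    rng = np.random.default_rng(seed)
    S = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
    V = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    return np.linalg.norm(S @ V.conj().T / N)

few, many = cross_term_norm(25), cross_term_norm(25_000)
```

Since the signal and noise sequences are independent, the norm of the cross term shrinks roughly as $1/\sqrt{N}$; with only $N = 25$ snapshots it remains far from negligible, which is precisely the regime the proposed algorithm targets.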

The key point of the proposed MS-KAI-ESPRIT algorithm is to modify the sample data covariance matrix estimate at each iteration by gradually incorporating the knowledge provided by the newer Vandermonde matrices, which progressively embody the refined estimates from the preceding iteration. Based on these updated Vandermonde matrices, refined estimates of the projection matrices of the signal and noise subspaces are calculated. These estimates of the projection matrices, associated with the initial sample covariance matrix estimate and the reliability factor, are employed to reduce its side effects and allow the algorithm to choose the set of estimates that has the highest likelihood of being the set of true DOAs. The modified covariance matrix is obtained by subtracting a scaled version of the undesirable terms of $\hat{\mathbf{R}}$, as pointed out in (5).

The steps of the proposed algorithm are listed in Table I. The algorithm starts by computing the sample data covariance matrix (4). Next, the DOAs are estimated using the ESPRIT algorithm. The superscript $(1)$ refers to the estimation task performed in the first step. Now, a procedure consisting of multiple iterations starts by forming the Vandermonde matrix $\hat{\mathbf{A}}$ using the DOA estimates. Then, the amplitudes of the sources are estimated such that the squared norm of the difference between the observation vector and the vector containing the estimates is minimized. This problem can be formulated [121] as:

$$\hat{\mathbf{s}}(i) = \arg\min_{\mathbf{s}} \left\|\mathbf{x}(i) - \hat{\mathbf{A}}\,\mathbf{s}\right\|^2. \qquad (6)$$

The minimization of (6) is achieved using the least squares technique and the solution is described by

$$\hat{\mathbf{s}}(i) = \left(\hat{\mathbf{A}}^H\,\hat{\mathbf{A}}\right)^{-1}\hat{\mathbf{A}}^H\,\mathbf{x}(i). \qquad (7)$$

The noise component is then estimated as the difference between the estimated signal and the observations made by the array, as given by

$$\hat{\mathbf{n}}(i) = \mathbf{x}(i) - \hat{\mathbf{A}}\,\hat{\mathbf{s}}(i). \qquad (8)$$
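A minimal sketch of the least-squares amplitude estimation (7) and the noise estimation (8), with a hypothetical steering matrix and data:

```python
import numpy as np

rng = np.random.default_rng(1)
M, P, N = 8, 2, 30
# hypothetical estimated steering matrix for the current DOA estimates
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.cos(np.deg2rad([15.0, 45.0])))

S = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

S_hat = np.linalg.pinv(A) @ X      # LS amplitude estimates, one column per snapshot
N_hat = X - A @ S_hat              # estimated noise component
```

A useful sanity check is that the LS residual is orthogonal to the columns of the steering matrix, i.e. the estimated noise lies entirely in the estimated noise subspace.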

After estimating the signal and noise vectors, the third term in (5) can be computed as:

$$\hat{\mathbf{A}}\left[\frac{1}{N}\sum_{i=1}^{N}\hat{\mathbf{s}}(i)\,\hat{\mathbf{n}}^H(i)\right] = \hat{\mathbf{Q}}_s\,\hat{\mathbf{R}}\,\hat{\mathbf{Q}}_n, \qquad (9)$$

where

$$\hat{\mathbf{Q}}_s = \hat{\mathbf{A}}\left(\hat{\mathbf{A}}^H\,\hat{\mathbf{A}}\right)^{-1}\hat{\mathbf{A}}^H \qquad (10)$$

is an estimate of the projection matrix of the signal subspace, and

$$\hat{\mathbf{Q}}_n = \mathbf{I}_M - \hat{\mathbf{Q}}_s \qquad (11)$$

is an estimate of the projection matrix of the noise subspace.
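The two projection matrices can be formed directly from any Vandermonde matrix built from the current DOA estimates. The sketch below uses hypothetical angles and verifies their defining properties:

```python
import numpy as np

M = 8
# hypothetical Vandermonde matrix built from three DOA estimates
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.cos(np.deg2rad([10.0, 35.0, 62.0])))

Q_s = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T   # signal-subspace projector
Q_n = np.eye(M) - Q_s                                   # noise-subspace projector
```

By construction, $\hat{\mathbf{Q}}_s$ is idempotent, leaves the steering vectors unchanged, and $\hat{\mathbf{Q}}_n$ annihilates them; these properties are what make the identity in (9) hold.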

Next, as part of the iterative procedure, the modified data covariance matrix is obtained by subtracting a scaled version of the estimated undesirable terms from the initial sample data covariance matrix, as given by

$$\hat{\mathbf{R}}^{(n)}(\mu) = \hat{\mathbf{R}} - \mu\left(\hat{\mathbf{Q}}_s\,\hat{\mathbf{R}}\,\hat{\mathbf{Q}}_n + \hat{\mathbf{Q}}_n\,\hat{\mathbf{R}}\,\hat{\mathbf{Q}}_s\right), \qquad (12)$$

where the superscript $(n)$ refers to the iteration performed. The scaling or reliability factor $\mu$ increases incrementally from 0 to 1, resulting in a set of modified data covariance matrices. Each of them gives origin to new estimated DOAs, also denoted by the superscript $(n)$, computed using the ESPRIT algorithm, as briefly described ahead.

In this work, the rank $P$ is assumed to be known, which is an assumption frequently found in the literature. Alternatively, the rank $P$ could be estimated by model-order selection schemes such as Akaike's Information Theoretic Criterion (AIC) [144] and the Minimum Description Length (MDL) criterion [145].

In order to estimate the signal and the orthogonal subspaces from the data records, we may consider two approaches [146, 147]: the direct data approach and the covariance approach. The direct data approach makes use of the singular value decomposition (SVD) of the data matrix $\mathbf{X} = [\mathbf{x}(1), \ldots, \mathbf{x}(N)] \in \mathbb{C}^{M \times N}$, composed of the data snapshots (1) of the $M$-dimensional array data vector:

$$\mathbf{X} = \mathbf{U}\,\boldsymbol{\Sigma}\,\mathbf{V}^H. \qquad (13)$$

Since the number of sources is assumed known or can be estimated by AIC [144] or MDL [145], as previously mentioned, we can write (13) as:

$$\mathbf{X} = \begin{bmatrix} \mathbf{U}_s & \mathbf{U}_n \end{bmatrix} \begin{bmatrix} \boldsymbol{\Sigma}_s & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Sigma}_n \end{bmatrix} \begin{bmatrix} \mathbf{V}_s & \mathbf{V}_n \end{bmatrix}^H, \qquad (14)$$

where the diagonal matrices $\boldsymbol{\Sigma}_s$ and $\boldsymbol{\Sigma}_n$ contain the $P$ largest singular values and the $M-P$ smallest singular values, respectively. The estimated signal subspace consists of the singular vectors in $\mathbf{U}_s$ corresponding to $\boldsymbol{\Sigma}_s$, and the orthogonal subspace is related to $\mathbf{U}_n$. If only the signal subspace is to be estimated, a rank-$P$ approximation of the SVD can be applied.

The covariance approach applies the eigenvalue decomposition (EVD) of the sample covariance matrix (4), which is related to the data matrix (13):

$$\hat{\mathbf{R}} = \frac{1}{N}\,\mathbf{X}\,\mathbf{X}^H. \qquad (15)$$

Then, the EVD of (15) can be carried out as follows:

$$\hat{\mathbf{R}} = \begin{bmatrix} \hat{\mathbf{U}}_s & \hat{\mathbf{U}}_n \end{bmatrix} \begin{bmatrix} \hat{\boldsymbol{\Lambda}}_s & \mathbf{0} \\ \mathbf{0} & \hat{\boldsymbol{\Lambda}}_n \end{bmatrix} \begin{bmatrix} \hat{\mathbf{U}}_s & \hat{\mathbf{U}}_n \end{bmatrix}^H, \qquad (16)$$

where the diagonal matrices $\hat{\boldsymbol{\Lambda}}_s$ and $\hat{\boldsymbol{\Lambda}}_n$ contain the $P$ largest and the $M-P$ smallest eigenvalues, respectively. The estimated signal subspace consists of the eigenvectors in $\hat{\mathbf{U}}_s$ corresponding to $\hat{\boldsymbol{\Lambda}}_s$, and the orthogonal subspace is related to $\hat{\mathbf{U}}_n$. If only the signal subspace is to be estimated, a rank-$P$ approximation of the EVD can be applied. With infinite-precision arithmetic, the SVD and the EVD can be considered equivalent. However, as finite-precision arithmetic is employed in practice, 'squaring' the data to obtain the Gramian (15) can result in round-off errors and overflow. These are potential problems to be aware of when using the covariance approach.
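The equivalence of the two approaches is easy to check numerically: the squared singular values of the data matrix, scaled by $1/N$, equal the eigenvalues of the Gramian, and both yield the same signal-subspace projector. A small sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, P = 6, 40, 2
X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

# direct-data approach: SVD of X (singular values in descending order)
U, s, _ = np.linalg.svd(X)

# covariance approach: EVD of the Gramian R = X X^H / N
w, V = np.linalg.eigh(X @ X.conj().T / N)
w, V = w[::-1], V[:, ::-1]           # eigh returns ascending order; reverse it

# rank-P subspace projectors from both decompositions
P_svd = U[:, :P] @ U[:, :P].conj().T
P_evd = V[:, :P] @ V[:, :P].conj().T
```

Comparing projectors rather than the basis vectors themselves avoids the inherent phase ambiguity of individual singular vectors and eigenvectors.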

Now, we can briefly review ESPRIT. We start by forming a twofold subarray configuration, as each row of the array steering matrix corresponds to one sensor element of the antenna array. The subarrays are specified by two $m_s \times M$-dimensional selection matrices $\mathbf{J}_1$ and $\mathbf{J}_2$, which choose $m_s$ of the $M$ existing sensors, respectively, where $m_s$ is in the range $P \leq m_s \leq M-1$. For maximum overlap, the matrix $\mathbf{J}_1$ selects the first $m_s = M-1$ elements and the matrix $\mathbf{J}_2$ selects the last $m_s = M-1$ rows of $\hat{\mathbf{U}}_s$.

Since the matrices $\mathbf{J}_1$ and $\mathbf{J}_2$ have now been computed, we can estimate the operator $\boldsymbol{\Psi}$ by solving the approximation of the shift-invariance equation given by

$$\mathbf{J}_1\,\hat{\mathbf{U}}_s\,\boldsymbol{\Psi} \approx \mathbf{J}_2\,\hat{\mathbf{U}}_s, \qquad (17)$$

where $\hat{\mathbf{U}}_s$ is obtained in (16).

Using the least squares (LS) method yields

$$\hat{\boldsymbol{\Psi}} = \arg\min_{\boldsymbol{\Psi}} \left\|\mathbf{J}_1\,\hat{\mathbf{U}}_s\,\boldsymbol{\Psi} - \mathbf{J}_2\,\hat{\mathbf{U}}_s\right\|_F = \left(\mathbf{J}_1\,\hat{\mathbf{U}}_s\right)^{+}\mathbf{J}_2\,\hat{\mathbf{U}}_s, \qquad (18)$$

where $\|\cdot\|_F$ denotes the Frobenius norm and $(\cdot)^{+}$ stands for the pseudo-inverse.

Lastly, the eigenvalues $\lambda_p$ of $\hat{\boldsymbol{\Psi}}$ contain the estimates of the spatial frequencies, computed as:

$$\hat{\omega}_p = \arg(\lambda_p), \qquad (19)$$

so that the DOAs can be calculated as:

$$\hat{\theta}_p = \arccos\left(\frac{\hat{\omega}_p\,\lambda}{2\pi\,d_s}\right), \qquad (20)$$

where, for (19) and (20), $p = 1, 2, \ldots, P$.
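The ESPRIT review above can be condensed into a short NumPy sketch. The scenario (angles, SNR, snapshot count) is hypothetical, and the cosine convention of (2) is assumed; the shift-invariance LS solution, the spatial-frequency extraction and the angle mapping follow (17)-(20):

```python
import numpy as np

rng = np.random.default_rng(3)
M, P, N = 10, 2, 200
wavelength, d = 1.0, 0.5
true_doas = np.deg2rad([40.0, 60.0])   # hypothetical source directions

A = np.exp(1j * 2 * np.pi * (d / wavelength) * np.arange(M)[:, None] * np.cos(true_doas))
S = (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

w, V = np.linalg.eigh(X @ X.conj().T / N)
U_s = V[:, ::-1][:, :P]                      # P dominant eigenvectors: signal subspace

# maximum-overlap subarrays: J1 keeps the first M-1 rows, J2 the last M-1 rows
Psi = np.linalg.pinv(U_s[:-1, :]) @ U_s[1:, :]   # LS solution of the shift-invariance equation
mu = np.angle(np.linalg.eigvals(Psi))            # spatial frequency estimates
doas = np.rad2deg(np.arccos(mu * wavelength / (2 * np.pi * d)))
```

At this moderate SNR and snapshot count, the recovered angles land close to the true 40 and 60 degrees; the iterative refinement of MS-KAI-ESPRIT targets the harder regime where $N$ is small.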

Then, a new Vandermonde matrix is formed by the steering vectors of those refined estimates of the DOAs. By using this updated matrix, it is possible to compute the refined estimates of the projection matrices of the signal and the noise subspaces.

Next, employing the refined estimates of the projection matrices, the initial sample data covariance matrix $\hat{\mathbf{R}}$, and the number of sensors and sources, the stochastic maximum likelihood objective function [122] is computed for each value of $\mu$ at the $n$th iteration, as follows:

$$U^{(n)}(\mu) = \ln\det\left(\hat{\mathbf{Q}}_s^{(n)}\,\hat{\mathbf{R}}\,\hat{\mathbf{Q}}_s^{(n)} + \frac{\mathrm{Tr}\left(\hat{\mathbf{Q}}_n^{(n)}\,\hat{\mathbf{R}}\right)}{M-P}\,\hat{\mathbf{Q}}_n^{(n)}\right). \qquad (21)$$

The previous computation selects, at each iteration, the set of DOA estimates that has the highest likelihood. Then, the set of estimated DOAs corresponding to the optimum value of $\mu$ that minimizes (21), also at each iteration, is determined. Finally, the output of the proposed MS-KAI-ESPRIT algorithm is formed by the set of estimates obtained at the last iteration, as described in Table I.

TABLE I: Proposed MS-KAI-ESPRIT Algorithm

IV Analysis

In this section, we carry out an analysis of the MSE of the data covariance matrix free of side effects along with a study of the computational complexity of the proposed MS-KAI-ESPRIT and existing direction finding algorithms.

IV-A MSE Analysis

In this subsection, we show that at the first of the iterations, the MSE of the data covariance matrix free of side effects, $\hat{\mathbf{R}}^{(1)}$, is less than or equal to the MSE of the original one, $\hat{\mathbf{R}}$. This can be formulated as:

$$\mathrm{MSE}\left(\hat{\mathbf{R}}^{(1)}\right) \leq \mathrm{MSE}\left(\hat{\mathbf{R}}\right), \qquad (22)$$

or, alternatively, as

$$E\left[\left\|\hat{\mathbf{R}}^{(1)} - \mathbf{R}\right\|_F^2\right] \leq E\left[\left\|\hat{\mathbf{R}} - \mathbf{R}\right\|_F^2\right]. \qquad (23)$$

The proof of this inequality is provided in the Appendix.

IV-B Computational Complexity Analysis

In this section, we evaluate the computational cost of the proposed MS-KAI-ESPRIT algorithm, which is compared to the following classical subspace methods: ESPRIT [4], MUSIC [2], root-MUSIC [3], Conjugate Gradient (CG) [138], Auxiliary Vector Filtering (AVF) [139] and TS-ESPRIT [124]. The ESPRIT- and MUSIC-based methods use the singular value decomposition (SVD) of the sample covariance matrix (4). The computational complexity of MS-KAI-ESPRIT in terms of the number of multiplications and additions is depicted in Table II. The increment of the reliability factor is defined in Table I. As can be seen, for the specific configuration used in the simulations of Section V, MS-KAI-ESPRIT shows a relatively high computational burden, and for that configuration the two dominant terms are comparable. It can also be seen that the number of multiplications required by the proposed algorithm is more significant than the number of additions. For this reason, in Table III we computed only the computational burden of the previously mentioned algorithms in terms of multiplications for the purpose of comparison. In that table, the last symbol stands for the search step.

Next, we evaluate the influence of the number of sensor elements on the number of multiplications based on the specific configuration described in Table II. Supposing narrowband signals impinging on a ULA with a fixed number of available snapshots, we obtain Fig. 1, which shows the main trends in terms of computational cost, measured in multiplications, of the proposed and analyzed algorithms. By examining Fig. 1, it can be noticed that for a large number of sensors the curves describing the exact number of multiplications in MS-KAI-ESPRIT and AVF tend to merge, i.e., their numbers of multiplications become almost equivalent.

TABLE II: Computational complexity (multiplications and additions) of the proposed MS-KAI-ESPRIT
TABLE III: Computational complexity (multiplications) of MUSIC [2], root-MUSIC [3], AVF [139], CG [138], ESPRIT [4] and TS-ESPRIT [124]
Fig. 1: Number of multiplications as powers of 10 versus number of sensors for , .

V Simulations

In this section, we examine the performance of the proposed MS-KAI-ESPRIT in terms of probability of resolution and RMSE and compare it to that of the standard ESPRIT [4], the Iterative ESPRIT (IESPRIT), which is also developed here by combining ESPRIT with the approach in [121] that exploits knowledge of the structure of the covariance matrix and its perturbation terms, the Conjugate Gradient (CG) [138], root-MUSIC [3], and MUSIC [2]. Although TS-ESPRIT relies on knowledge of available known DOAs and the proposed MS-KAI-ESPRIT does not have access to prior knowledge, TS-ESPRIT is plotted with the aim of illustrating the comparisons. For a fair comparison in terms of RMSE and probability of resolution among all studied algorithms, we suppose that we do not have prior knowledge, that is to say, although we have available known DOAs, we compute TS-ESPRIT as if they were unavailable. We employ a ULA with M = 40 sensors and inter-element spacing $d_s$, and assume there are four uncorrelated complex Gaussian signals with equal power impinging on the array. The sources are closely spaced, and the number of available snapshots is N = 25. For TS-ESPRIT, as previously mentioned, we presume a priori knowledge of the last true DOAs.

In Fig. 2, we show the probability of resolution versus SNR. We adopt the criterion [141] in which two sources with DOAs $\theta_1$ and $\theta_2$ are said to be resolved if their respective estimates $\hat{\theta}_1$ and $\hat{\theta}_2$ are such that both $|\hat{\theta}_1 - \theta_1|$ and $|\hat{\theta}_2 - \theta_2|$ are less than half of their separation. The proposed MS-KAI-ESPRIT algorithm outperforms the IESPRIT developed here based on [121] and the standard ESPRIT [4] over part of the SNR range, and MUSIC [2] over another part of it. MS-KAI-ESPRIT also outperforms CG [138] and root-MUSIC [3] throughout the whole range of SNR values. The poor performance of the latter could be expected from the results for two closely-spaced signals obtained in [121]. When compared to TS-ESPRIT, which, as previously discussed, was expected to have the best performance, the proposed MS-KAI-ESPRIT algorithm is outperformed by the former only over a limited SNR range; beyond that range its performance is superior or equal to that of the other algorithms.

Fig. 3 shows the RMSE in dB versus SNR, where the term CRB refers to the square root of the deterministic Cramér-Rao bound [142]. The RMSE is defined as:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K P}\sum_{k=1}^{K}\sum_{p=1}^{P}\left(\theta_p - \hat{\theta}_p(k)\right)^2}, \qquad (24)$$

where $K$ is the number of trials.
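The RMSE in (24) averages the squared angular errors over both trials and sources before taking the square root. A minimal sketch with hypothetical estimates over $K = 3$ trials for $P = 2$ sources:

```python
import numpy as np

theta_true = np.array([20.0, 25.0])          # true DOAs in degrees (hypothetical)
theta_hat = np.array([[20.3, 24.8],          # one row of estimates per trial
                      [19.7, 25.2],
                      [20.1, 25.0]])

K, P = theta_hat.shape
rmse = np.sqrt(np.sum((theta_hat - theta_true) ** 2) / (K * P))
```

The same quantity is usually reported in dB in the figures, i.e. as 20 log10 of the value above.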

The results show the superior performance of MS-KAI-ESPRIT over most of the lower-SNR range. At higher SNR values, MS-KAI-ESPRIT, IESPRIT, ESPRIT and TS-ESPRIT have similar performance. MS-KAI-ESPRIT is outperformed only over a narrow intermediate range of SNR values; beyond it, its performance is better than or similar to that of the others.

Fig. 2: Probability of resolution versus SNR with uncorrelated sources
Fig. 3: RMSE and the square root of the CRB versus SNR with uncorrelated sources

Now, we focus on the performance of MS-KAI-ESPRIT under more severe conditions, i.e., we analyze it in terms of RMSE when at least two of the four equal-powered Gaussian signals are strongly correlated, as shown in the following signal correlation matrix (25):

(25)

The signal-to-noise ratio is defined as $\mathrm{SNR} = \sigma_s^2/\sigma_n^2$, where $\sigma_s^2$ denotes the signal power.

Fig. 4: RMSE and the square root of the CRB versus SNR with correlated sources

In Fig. 4, we can see the performance of the same algorithms plotted in Fig. 3 in terms of RMSE versus SNR when the signal correlation matrix is given by (25). As can be seen, the superior performance of MS-KAI-ESPRIT occurs over most of the SNR range, which can be considered a small but consistent gain. At high SNR values, MS-KAI-ESPRIT, TS-ESPRIT, IESPRIT and ESPRIT have similar performance, and MS-KAI-ESPRIT is outperformed only over a narrow range of SNR values.

In Fig. 5, we provide further simulations to illustrate the performance of each iteration of MS-KAI-ESPRIT in terms of RMSE. The resulting iterations can be compared to each other and to the original ESPRIT, which corresponds to the first step of MS-KAI-ESPRIT. For this purpose, we have considered the same scenario employed before, except for the number of trials, which is the same for all simulations. In particular, we have considered the case of correlated sources. From Fig. 6, which is a magnified detail of Fig. 5, it can be seen that the estimates become more accurate as the number of iterations increases.

Fig. 5: RMSE for each iteration of MS-KAI-ESPRIT, original ESPRIT and CRB versus SNR with correlated sources
Fig. 6: RMSE for each iteration of MS-KAI-ESPRIT, original ESPRIT and CRB versus SNR with correlated sources (magnification)

VI Conclusions

We have proposed the MS-KAI-ESPRIT algorithm, which exploits the knowledge of source signals obtained on line and the structure of the covariance matrix and its perturbations. An analytical study of the MSE of this matrix free of side effects has shown that it is less than or equal to the MSE of the original matrix, resulting in better performance of MS-KAI-ESPRIT, especially in scenarios where a limited number of samples is available. The proposed MS-KAI-ESPRIT algorithm can obtain significant gains in RMSE and probability of resolution performance over previously reported techniques, and has excellent potential for applications with short data records in large-scale antenna systems for wireless communications, radar and other large sensor arrays. The relatively high computational burden required, which is associated with the extra matrix multiplications, the increment applied to reduce the undesirable side effects, and the iterations needed to progressively incorporate the knowledge obtained on line as newer estimates, can be justified by the superior performance achieved. Future work will consider approaches to reducing the computational cost.

Appendix

Here, we prove the inequality (23) described in Section IV-A. We start by expressing the MSE of the original data covariance matrix (4) as:

$$\mathrm{MSE}\left(\hat{\mathbf{R}}\right) = E\left[\left\|\hat{\mathbf{R}} - \mathbf{R}\right\|_F^2\right], \qquad (26)$$

where $\mathbf{R}$ is the true covariance matrix (3). Similarly, the MSE of the data covariance matrix free of side effects can be expressed for the first iteration, by making use of (12), as follows:

$$\mathrm{MSE}\left(\hat{\mathbf{R}}^{(1)}\right) = E\left[\left\|\hat{\mathbf{R}}^{(1)} - \mathbf{R}\right\|_F^2\right], \qquad (27)$$

where, for the sake of simplicity, from now on we omit the superscript $(1)$, which refers to the first iteration. In order to expand the result in (27), we make use of the following proposition:

Lemma 1: The squared Frobenius norm of the difference between any two $M \times M$ matrices $\mathbf{A}$ and $\mathbf{B}$ is given by

$$\left\|\mathbf{A} - \mathbf{B}\right\|_F^2 = \mathrm{Tr}\left(\mathbf{A}\,\mathbf{A}^H\right) - 2\,\Re\left\{\mathrm{Tr}\left(\mathbf{A}\,\mathbf{B}^H\right)\right\} + \mathrm{Tr}\left(\mathbf{B}\,\mathbf{B}^H\right). \qquad (28)$$

Proof of Lemma 1:
The Frobenius norm of any $M \times M$ matrix $\mathbf{C}$ is defined [1] as

$$\left\|\mathbf{C}\right\|_F = \sqrt{\mathrm{Tr}\left(\mathbf{C}\,\mathbf{C}^H\right)}. \qquad (29)$$

We express $\mathbf{C}$ as a difference between two matrices $\mathbf{A}$ and $\mathbf{B}$, both also $M \times M$. Making use of (29) and the properties of the trace, we obtain

$$\left\|\mathbf{A} - \mathbf{B}\right\|_F^2 = \mathrm{Tr}\left(\left(\mathbf{A} - \mathbf{B}\right)\left(\mathbf{A} - \mathbf{B}\right)^H\right) = \mathrm{Tr}\left(\mathbf{A}\,\mathbf{A}^H\right) - 2\,\Re\left\{\mathrm{Tr}\left(\mathbf{A}\,\mathbf{B}^H\right)\right\} + \mathrm{Tr}\left(\mathbf{B}\,\mathbf{B}^H\right), \qquad (30)$$

which is the desired result.
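Lemma 1 is easy to verify numerically on random complex matrices, which serves as a sanity check of the trace expansion used above:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

lhs = np.linalg.norm(A - B, 'fro') ** 2          # squared Frobenius norm of A - B
rhs = (np.trace(A @ A.conj().T).real             # Tr(A A^H)
       - 2 * np.trace(A @ B.conj().T).real       # -2 Re{Tr(A B^H)}
       + np.trace(B @ B.conj().T).real)          # Tr(B B^H)
```

The two sides agree to machine precision, confirming that the middle cross terms combine into twice the real part of a single trace.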

Now, assuming that the true [134] and the sample data covariance matrices [134] are Hermitian and using (27) combined with Lemma 1, the cyclic [135] property of the trace and the linearity [136] property of the expected value, we get

(31)

By moving the first summand of (31) to the left-hand side, we obtain the intended expression for the difference between the MSE of the data covariance matrix free of perturbations and that of the original one, i.e.:

(32)

Now, we expand the expressions inside the braces on the right-hand side of (32) individually. We start with the first summand:

(33)

Equation (33) can be computed using the projection matrices of the signal and the noise subspaces and the data covariance matrix, by applying (9) and (11), the idempotence [1, 135] of projection matrices and the cyclic property [135] of the trace. Starting with the computation of its fourth summand, we have

(34)

Taking into account that the data covariance matrix $\hat{\mathbf{R}}$ and the estimate of the projection matrix of the noise subspace $\hat{\mathbf{Q}}_n$ are Hermitian, we can evaluate the third summand of (33) as follows: