Low Rank Matrix Recovery for Joint Array Self-Calibration and Sparse Model DoA Estimation

by Cheng-Yu Hung, et al.

In this work, combined calibration and DoA estimation is approached as an extension of the Single Measurement Vector (SMV) formulation of self-calibration to the Multiple Measurement Vector (MMV) case. By taking advantage of multiple snapshots, a modified nuclear norm minimization problem is proposed to recover a low-rank matrix of larger dimension. We also give the definition of a linear operator for the MMV model, and its corresponding matrix representation, to generate a variant of a convex optimization problem. In order to mitigate the computational complexity of the approach, the singular value decomposition (SVD) is applied to reduce the problem size. The performance of the proposed methods is demonstrated by numerical simulations.





I Introduction

DoA estimation algorithms usually assume perfect knowledge of the array responses for all directions of interest. Such knowledge necessitates perfectly calibrated sensors in both phase and gain. Maintaining such calibration under varying physical conditions and over time is difficult and, in many cases, expensive. Accordingly, methods that can provide calibration algorithmically and automatically are of great interest. This paper is concerned with the development of a self-calibration method in the context of sparsity-promoting DoA estimation.

Designing an efficient self-calibration algorithm is a challenging problem. Among self-calibration methods, maximum a posteriori (MAP) estimation [1] is a powerful one for jointly estimating the signals of interest and the calibration parameters, but it suffers from excessive computational complexity, making it unsuitable for real-time applications. Some self-calibration algorithms were developed based on the eigendecomposition of a covariance matrix to estimate the phase and gain of the calibration error. Examples of this lower-complexity approach, known as eigenstructure-based (ES) methods, can be found in [2, 3, 4, 5]. In [6], a blind calibration formulation and methods were developed, and the necessary and sufficient conditions for estimating the calibration offsets were characterized. In [7], norm minimization is used to formulate the blind calibration problem, which is highly non-convex. In [8], an approximate message passing algorithm is combined with the blind calibration problem, which is solved by a convex relaxation algorithm. In [9], several convex optimization methods were proposed for solving the blind calibration of sparse inverse problems. In [10], a self-calibration problem is introduced and solved in the framework of biconvex compressed sensing via the SparseLift method, which is inspired by PhaseLift [11, 12, 13] and its "lifting" technique. The notion of "lifting" is also used in blind deconvolution [14, 15], which attempts to recover two unknown signals from their convolution.

In this work, we extend Ling's work [10] from the single measurement vector (SMV) setting to the multiple measurement vector (MMV) setting. By taking advantage of multiple snapshots of measurements in the self-calibration problem, a new problem is formulated in which a low-rank matrix of larger dimension is generated. We also give the definition of the linear operator for the MMV model, and its corresponding matrix representation, so that we can generate a variant of a convex optimization problem. In order to mitigate the computational complexity of the method, the singular value decomposition (SVD) is applied to reduce the problem size. The contribution of this work is that we significantly improve the performance of the SparseLift method [10] with only slightly increased computational complexity. The proposed method is verified on direction-of-arrival (DoA) estimation. The performance of the proposed methods is demonstrated by numerical simulations and compared with Ling's work [10] and the eigenstructure-based method [2].

II Array Self-Calibration Model Preliminaries

II-A Self-Calibration Model

A generic self-calibration problem [10] in compressed sensing is given by

    y = 𝒜(h) x + w,    (1)

where y is the measurement vector, 𝒜(h) is the measurement matrix parameterized by an unknown calibration vector h, x is the desired signal, and w is additive noise. If x is assumed sparse, an ℓ1-regularized minimization problem is proposed:

    min_{h,x} ‖y − 𝒜(h) x‖_2^2 + λ‖x‖_1.    (2)

This optimization problem is non-convex, with associated difficulties for its solution. The most common approach is the alternating method, i.e., solve for x with h fixed, and then solve for h with x fixed. However, (2) is too general to solve in an efficient numerical framework. Thus, an important special case of (1) is considered:

    y = D A x + w,  D = diag(Bh),    (3)

where y ∈ ℂ^N is the observation vector, A ∈ ℂ^{N×K} (N < K) is a known fat matrix, x ∈ ℂ^K is a k-sparse signal of interest, and w is an additive white Gaussian noise vector. D ∈ ℂ^{N×N} is a diagonal matrix that depends on the unknown parameter vector h ∈ ℂ^p, and B ∈ ℂ^{N×p} is known. This case is based on the assumption that the unknown calibration parameters d = Bh lie in the subspace (column space, or range) of B.
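As a concrete illustration of this special case, the following numpy sketch generates data from the model y = diag(Bh) A x + w. All dimensions, the random measurement matrix, and the noise level are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's settings):
N, K, p, k = 16, 32, 3, 2   # sensors, grid size, calibration-subspace dim, sparsity

B = np.fft.fft(np.eye(N))[:, :p] / np.sqrt(N)    # first p columns of the DFT matrix
h = rng.standard_normal(p) + 1j * rng.standard_normal(p)   # unknown calibration parameters
D = np.diag(B @ h)                                # unknown diagonal calibration matrix

A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))  # known fat matrix (N < K)
x = np.zeros(K, dtype=complex)
x[rng.choice(K, size=k, replace=False)] = 1.0     # k-sparse signal of interest

w = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # additive noise
y = D @ A @ x + w                                 # observation: y = diag(Bh) A x + w
print(y.shape)  # (16,)
```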

II-B Self-Calibration and DoA Estimation in the MMV System

The self-calibration for the single measurement vector (SMV) model presented in [10] is now extended to joint DoA estimation and self-calibration in the multiple measurement vector (MMV) case. Suppose that we have L snapshots of measurement vectors for a uniform linear array of N sensors with K candidate (grid) directions of arrival θ_1, …, θ_K. Then, the MMV model is the following:

    Y = D A X + W,  D = diag(Bh),    (4)

where Y ∈ ℂ^{N×L} is the measurement matrix, D ∈ ℂ^{N×N} is a diagonal matrix that depends on the unknown parameter vector h ∈ ℂ^p, A = [a(θ_1), …, a(θ_K)] ∈ ℂ^{N×K} is a known fat matrix with columns a(θ_k) = [1, e^{−j2π(d/λ) sin θ_k}, …, e^{−j2π(d/λ)(N−1) sin θ_k}]^T with inter-element spacing d and wavelength λ, and X ∈ ℂ^{K×L} is a sparse matrix of interest whose columns are all k-sparse with the same sparsity pattern. W is an additive white Gaussian noise matrix whose elements have zero mean and variance σ², and B ∈ ℂ^{N×p} is composed of the first p columns of the Discrete Fourier Transform (DFT) matrix, which models slow changes in the calibrations of the sensors. Formulation (4) is a generalization of the SMV system. The MMV structure and the group sparsity property of X will be exploited to enhance the performance of DoA estimation, i.e., the accuracy of the estimated DoAs.
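The MMV model can be sketched in the same way, building the ULA steering matrix and the DFT-subspace calibration explicitly. The dimensions, spacing, and random support below are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes (assumed, not from the paper's experiments):
N, K, L, p, k = 16, 90, 4, 3, 2   # sensors, grid points, snapshots, subspace dim, sparsity
lam, d = 1.0, 0.5                  # wavelength and inter-element spacing (half-wavelength assumed)

theta = np.linspace(-np.pi / 2, np.pi / 2, K)            # candidate grid directions
n = np.arange(N)[:, None]
A = np.exp(-2j * np.pi * d / lam * n * np.sin(theta))    # N x K steering (fat) matrix

B = np.fft.fft(np.eye(N))[:, :p] / np.sqrt(N)            # first p DFT columns
h = rng.standard_normal(p) + 1j * rng.standard_normal(p)  # unknown calibration parameters

support = rng.choice(K, size=k, replace=False)            # shared sparsity pattern
X = np.zeros((K, L), dtype=complex)
X[support, :] = rng.standard_normal((k, L)) + 1j * rng.standard_normal((k, L))

W = 0.01 * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L)))
Y = np.diag(B @ h) @ A @ X + W                            # MMV model: Y = diag(Bh) A X + W
print(Y.shape)  # (16, 4)
```

Every column of X uses the same support, which is the group-sparsity structure the proposed method exploits.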

III The Proposed Method

In this section, a new lifting technique, Joint SparseLift, is proposed to exploit the MMV structure, and a nuclear norm minimization problem is proposed to estimate the lifted unknown formed from h and X. In order to express the idea explicitly, the case of L = 2 snapshots for the MMV system is first assumed, i.e., Y = [y_1, y_2] and X = [x_1, x_2]. It is easy to extend the work to any number of snapshots L.

III-A Joint SparseLift

First, consider y_i^T, the i-th row of the measurement matrix Y without noise. Then,

    y_i^T = (b_i^* h)(a_i^T X) = [b_i^* (h x_1^T) a_i,  b_i^* (h x_2^T) a_i],

where b_i is the i-th column of B^*, a_i^T is the i-th row of A, and h x_1^T, h x_2^T are rank-one matrices. Thus, we reformulate Y as

    Y = 𝒜(Z_0),  Z_0 = [h x_1^T, h x_2^T] = h [x_1^T, x_2^T] ∈ ℂ^{p×2K}.

Z_0 is also a rank-one matrix, obtained by concatenating two rank-one matrices.
Define the linear operator 𝒜 : ℂ^{p×2K} → ℂ^{N×2} such that

    𝒜(Z) := {[b_i^* Z_1 a_i,  b_i^* Z_2 a_i]}_{i=1}^N,

where Z = [Z_1, Z_2] with Z_1, Z_2 ∈ ℂ^{p×K}. The adjoint operator 𝒜^* of 𝒜, and 𝒜^*𝒜(Z), are also given by

    𝒜^*(Y) = Σ_{i=1}^N b_i [Y_{i1} a_i^*,  Y_{i2} a_i^*],    𝒜^*𝒜(Z) = Σ_{i=1}^N b_i b_i^* [Z_1 a_i a_i^*,  Z_2 a_i a_i^*],

where Y_{il} denotes the (i, l)-th entry of Y.
Then, we can estimate Z_0 by solving a nuclear norm minimization problem

    min_Z ‖Z‖_*  subject to  𝒜(Z) = Y,

where the nuclear norm ‖Z‖_* is the sum of the singular values of the matrix Z. But we still need the matrix representation Φ of 𝒜 such that

    𝒜(Z) = Y  ⟺  Φ vec(Z) = vec(Y).
By using the Kronecker product property that vec(AXB) = (B^T ⊗ A) vec(X) for any matrices A, X, B of compatible sizes, we can derive the block form of Φ as

    Φ = I_2 ⊗ Φ_0,

where ⊗ represents the Kronecker product. The block form of Φ_0 is then

    Φ_0 = [a_1^T ⊗ b_1^*; a_2^T ⊗ b_2^*; …; a_N^T ⊗ b_N^*] ∈ ℂ^{N×pK},

where b_i is the i-th column of B^*, and a_i is the i-th column of A^T. Equivalently, the row of Φ that produces the entry Y_{i1} is [a_i^T ⊗ b_i^*, 0^T], and the row that produces Y_{i2} is [0^T, a_i^T ⊗ b_i^*], where 0 denotes a zero vector of dimension pK.
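As a sanity check of this matrix representation, the following numpy sketch builds Φ for L = 2 and verifies that Φ vec(Z_0) reproduces vec(Y) in the noiseless case. The dimensions and random matrices are small, assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small illustrative sizes: N sensors, K grid points, p-dim calibration, L = 2 snapshots
N, K, p, L = 8, 12, 3, 2

B = rng.standard_normal((N, p)) + 1j * rng.standard_normal((N, p))
A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
h = rng.standard_normal(p) + 1j * rng.standard_normal(p)
X = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))

Y = np.diag(B @ h) @ A @ X                 # noiseless MMV measurements

# Lifted rank-one unknown: Z0 = h [x1^T, x2^T] = [h x1^T, h x2^T], of size p x LK
Z0 = np.outer(h, X.T.reshape(-1))

# Row i of the per-snapshot block: kron(A[i], B[i]) maps vec(h x_l^T) to (B[i]@h)(A[i]@x_l)
Phi0 = np.stack([np.kron(A[i], B[i]) for i in range(N)])   # N x pK

# Block-diagonal form over the L snapshots, acting on vec(Z0) (column-major stacking)
Phi = np.kron(np.eye(L), Phi0)             # NL x pKL

assert np.allclose(Phi @ Z0.reshape(-1, order="F"), Y.reshape(-1, order="F"))
print("vec(Y) = Phi vec(Z0) verified")
```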
So, we can solve the following convex problem:

    min_Z ‖Z‖_*  subject to  ‖Φ vec(Z) − vec(Y)‖_2 ≤ ε.

Note that the rank-one matrix Z_0 ∈ ℂ^{p×2K} is of a larger size than in the SMV case. The columns of X share the same sparsity pattern, so only a few columns of Z_0 are nonzero. The group sparsity of Z can thus be promoted by

    ‖Z‖_{2,1} = Σ_{k=1}^{2K} ‖z_k‖_2,

where z_k denotes the k-th column of Z. Since minimizing the nuclear norm has high computational complexity, and ‖Z‖_* ≤ ‖Z‖_{2,1} always holds, it is sufficient to solve

    min_Z ‖Z‖_{2,1}  subject to  ‖Φ vec(Z) − vec(Y)‖_2 ≤ ε.    (17)
After the estimate Ẑ of Z_0 is obtained, the SVD is used to obtain the left and right singular vectors associated with its largest singular value, which give the estimates of h and [x_1^T, x_2^T] up to a scaling ambiguity.

Recall that the lifted matrix is of size p × LK. If the number of snapshots L is very large, the computational complexity will be substantial. In order to mitigate this issue, a complexity reduction method is applied in the next subsection.
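The rank-one factor extraction can be sketched as follows: given a (noisy) rank-one estimate Ẑ ≈ h z^T with z = [x_1^T, x_2^T]^T, the leading singular triplet recovers h and z up to the inherent complex scaling ambiguity. All sizes and the perturbation level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Suppose Zhat is a noisy estimate of the rank-one lifted matrix Z0 = h z^T (sizes assumed)
p, LK = 3, 24
h = rng.standard_normal(p) + 1j * rng.standard_normal(p)
z = rng.standard_normal(LK) + 1j * rng.standard_normal(LK)
Zhat = np.outer(h, z) + 1e-6 * rng.standard_normal((p, LK))  # small perturbation

# Best rank-one factorization: leading singular triplet of Zhat
U, s, Vh = np.linalg.svd(Zhat)
h_est = np.sqrt(s[0]) * U[:, 0]   # estimate of h (up to a complex scalar)
z_est = np.sqrt(s[0]) * Vh[0]     # estimate of z = [x1^T, x2^T]^T (same scalar inverted)

# h and z are only identifiable up to a common complex scale; align before comparing
alpha = (h_est.conj() @ h) / (h_est.conj() @ h_est)
print(np.allclose(alpha * h_est, h, atol=1e-3))   # True
```

The scaling ambiguity is intrinsic to the lifting: (c h)(z/c)^T equals h z^T for any nonzero scalar c, so only the rank-one product is identifiable.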

III-B Complexity Reduction and Analysis

The optimization problem in (17) can be reformulated as a second-order cone program (SOCP) [16] and solved by interior point methods. The computational complexity is the product of the per-iteration cost of the interior point implementation and the number of iterations, and it grows rapidly with the problem size.

Consider the MMV model (4) with a number of snapshots L larger than the number of signals n_s. Since the matrix sizes of Y, X, and W become larger compared with the case of L = 2, the singular value decomposition (SVD) can be used to reduce the problem size. Taking the SVD of Y,

    Y = U Σ V^*,

where U ∈ ℂ^{N×N} is a unitary matrix, Σ ∈ ℂ^{N×L} is a rectangular diagonal matrix of nonnegative singular values, and V ∈ ℂ^{L×L} is a unitary matrix. Denote D_SV = [I_{n_s}, 0]^T, where I_{n_s} is an n_s × n_s identity matrix and 0 is an n_s × (L − n_s) zero matrix. Then, a reduced matrix can be obtained by

    Y_SV = Y V D_SV = D A X_SV + W_SV,

where X_SV = X V D_SV and W_SV = W V D_SV. Then, the reduced-size convex optimization problem is the following:

    min_Z ‖Z‖_{2,1}  subject to  ‖Φ_SV vec(Z) − vec(Y_SV)‖_2 ≤ ε,

where Φ_SV is the matrix representation associated with the n_s reduced snapshots. The problem size n_s is much lower than L, so the overall computational complexity is significantly reduced to that of an n_s-snapshot problem plus the complexity of the SVD [17], since n_s ≪ L.
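The dimension-reduction step can be sketched as follows; the sizes and the number of sources (`n_hat` below) are assumed for illustration. Note that Y V D_SV simply keeps the dominant left singular directions of Y scaled by their singular values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative sizes (assumed): N sensors, many snapshots L, n_hat known sources
N, K, L, n_hat = 16, 90, 500, 2

A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
X = np.zeros((K, L), dtype=complex)
X[rng.choice(K, n_hat, replace=False)] = (
    rng.standard_normal((n_hat, L)) + 1j * rng.standard_normal((n_hat, L))
)
Y = A @ X + 0.01 * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L)))

# Y = U S V^*; keep only the n_hat dominant right singular vectors
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
D_sv = np.vstack([np.eye(n_hat), np.zeros((min(N, L) - n_hat, n_hat))])
Y_sv = Y @ Vh.conj().T @ D_sv            # N x n_hat reduced measurement matrix

print(Y.shape, "->", Y_sv.shape)         # (16, 500) -> (16, 2)
```

Since Y V = U Σ, the reduced matrix equals the first n_hat columns of U weighted by the leading singular values, so the subsequent convex problem sees n_hat "snapshots" instead of L.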

This method requires prior knowledge of the number of received signals.

IV Numerical Results

In this section, numerical simulation is conducted to compare the performance of the proposed methods with the eigenstructure (ES) method and Ling's work. A ULA is considered, with two array sizes used across the experiments, and the DoA search space is discretized into a uniform grid. Far-field plane waves arrive from the actual DoAs. Narrowband, zero-mean, and uncorrelated sources are assumed for the plane waves, and the noise is AWGN with zero mean and unit variance. The calibration error is given by d = Bh, where the columns of B are the first p columns of the DFT matrix. One hundred realizations are performed at each SNR. The root mean square error (RMSE) of DoA estimation is defined as the square root of the mean squared deviation of the estimated DoAs from the true DoAs over all sources and realizations. When solving the optimization problems, the regularization parameters are carefully selected to achieve the best performance.

Fig. 1: Angle space vs signal amplitude at SNR=15 dB, .
Fig. 2: Angle space vs signal amplitude at SNR=25 dB, .
Fig. 3: RMSE of DoA estimation versus SNR, .

In Figure 1, the accuracy of the estimated DoAs for one realization is shown at 15 dB SNR with the larger array. Since a large number of sensors is used, the peaks of the signal amplitude are located at the true DoAs for both the proposed method and Ling's work. In this scenario, the difference in accuracy is not obvious, except that some small spurious peaks appear around the true DoAs for Ling's method. However, with the smaller array at SNR = 25 dB, the accuracy of the proposed method is better than that of Ling's approach, as seen in Figure 2. The estimated DoAs of the proposed method are at the true locations, while those of Ling's method are not; in fact, the latter method misses one of the true DoAs. The improved performance of the proposed method comes from the additional information in the MMV system, which is used to estimate the DoAs.

In Figure 3, the RMSE of DoA estimation versus SNR is investigated. At an RMSE level of 10, the proposed method outperforms Ling's method by about 17 dB in required SNR. Figure 4 shows that the RMSE performance improves with an increasing number of snapshots; the largest improvement occurs between 1 and 300 snapshots. In Figure 5, thanks to the complexity-reduction technique, the computational complexity in terms of CPU time increases only slightly even for the largest numbers of snapshots at each realization.

Fig. 4: RMSE of DoA estimation versus number of snapshots, SNR=15 dB.
Fig. 5: CPU time consumption versus number of snapshots, SNR=15 dB.

V Summary

In this paper, the combined calibration and DoA estimation problem, for a particular calibration error model and using sparsity-promoting algorithms, is discussed. We extended the formulation in [10] to the MMV system and proposed a new nuclear norm minimization approach that takes advantage of the information brought by multiple measurement vectors. The performance improvement from the use of multiple snapshots is demonstrated by simulations. We also used the singular value decomposition to reduce the computational complexity of the proposed approach, and verified its performance numerically.


  • [1] A. Swindlehurst, “A maximum a posteriori approach to beamforming in the presence of calibration errors,” in Proceedings of the 8th IEEE Signal Processing Workshop on Statistical Signal and Array Processing.   IEEE, 1996, pp. 82–85.
  • [2] A. J. Weiss and B. Friedlander, “Eigenstructure methods for direction finding with sensor gain and phase uncertainties,” Circuits, Systems and Signal Processing, vol. 9, no. 3, pp. 271–300, 1990.
  • [3] C. See, “Sensor array calibration in the presence of mutual coupling and unknown sensor gains and phases,” Electronics Letters, vol. 30, pp. 373–374, 1994.
  • [4] A. Liu, G. Liao, C. Zeng, Z. Yang, and Q. Xu, “An eigenstructure method for estimating DoA and sensor gain-phase errors,” IEEE Transactions on Signal Processing, vol. 59, no. 12, pp. 5944–5956, 2011.
  • [5] D. Astély, A. L. Swindlehurst, and B. Ottersten, “Spatial signature estimation for uniform linear arrays with unknown receiver gains and phases,” IEEE Transactions on Signal Processing, vol. 47, no. 8, pp. 2128–2138, 1999.
  • [6] L. Balzano and R. Nowak, “Blind calibration of sensor networks,” in Proceedings of the 6th international conference on Information processing in sensor networks.   ACM, 2007, pp. 79–88.
  • [7] R. Gribonval, G. Chardon, and L. Daudet, “Blind calibration for compressed sensing by convex optimization,” in 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2012, pp. 2713–2716.
  • [8] C. Schulke, F. Caltagirone, F. Krzakala, and L. Zdeborová, “Blind calibration in compressed sensing using message passing algorithms,” in Advances in Neural Information Processing Systems, 2013, pp. 566–574.
  • [9] Ç. Bilen, G. Puy, R. Gribonval, and L. Daudet, “Convex optimization approaches for blind sensor calibration using sparsity,” IEEE Transactions on Signal Processing, vol. 62, no. 18, pp. 4847–4856, 2014.
  • [10] S. Ling and T. Strohmer, “Self-calibration and biconvex compressive sensing,” Inverse Problems, vol. 31, no. 11, p. 115002, 2015.
  • [11] X. Li and V. Voroninski, “Sparse signal recovery from quadratic measurements via convex programming,” SIAM Journal on Mathematical Analysis, vol. 45, no. 5, pp. 3019–3033, 2013.
  • [12] E. J. Candes, T. Strohmer, and V. Voroninski, “Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming,” Communications on Pure and Applied Mathematics, vol. 66, no. 8, pp. 1241–1274, 2013.
  • [13] E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” SIAM review, vol. 57, no. 2, pp. 225–251, 2015.
  • [14] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding blind deconvolution algorithms,” IEEE transactions on pattern analysis and machine intelligence, vol. 33, no. 12, pp. 2354–2367, 2011.
  • [15] A. Ahmed, B. Recht, and J. Romberg, “Blind deconvolution using convex programming,” IEEE Transactions on Information Theory, vol. 60, no. 3, pp. 1711–1732, 2014.
  • [16] D. Malioutov, M. Çetin, and A. S. Willsky, “A sparse signal reconstruction perspective for source localization with sensor arrays,” IEEE Transactions on Signal Processing, vol. 53, no. 8, pp. 3010–3022, 2005.
  • [17] G. H. Golub and C. F. Van Loan, Matrix computations.   JHU Press, 2012, vol. 3.