One of the main drawbacks of present-day sparse signal recovery algorithms is their iterative nature and computational complexity, especially in high-dimensional settings, which limits their applicability in practical scenarios. A different challenge arises in applications where acquiring measurements is expensive or time-consuming. Recent advances in deep neural networks (DNNs) offer the tantalizing possibility of designing fixed-complexity algorithms by learning to invert the NP-hard problem of finding the sparsest solution to an under-determined set of linear equations, and this paper presents a computationally efficient deep learning architecture, named Learned-SBL (L-SBL), to accomplish this objective. Moreover, the introduced DNN architecture can recover sparse, block-sparse, jointly sparse, or other structured sparse models from single or multiple measurement vectors.
In the recent literature, DNN-based sparse signal recovery has typically been explored by unfolding the iterations of existing sparse signal recovery algorithms. For example, the learned coordinate descent (LCoD) and learned iterative shrinkage-thresholding algorithm (LISTA) approximate CoD and ISTA using a DNN with a specific architecture and fixed depth . The concept of deep unfolding was introduced to obtain the advantages of both model-based methods and DNNs . A deep learning architecture that outperforms ISTA by unfolding the approximate message passing (AMP) and vector AMP algorithms was introduced in . Using the same unfolding idea, the denoising-based approximate message passing (D-AMP) algorithm was approximated as Learned D-AMP . The capability of a DNN to outperform even the sparse recovery algorithm on which it is based was demonstrated theoretically and empirically in . The connection between sparse Bayesian learning (SBL) and long short-term memory (LSTM) networks was explored in . Approximating an iterative algorithm using a deep neural network to reduce the computational complexity was demonstrated in . Most of these DNN-based sparse signal recovery schemes are, directly or indirectly, inspired by an existing algorithm in sparse signal processing.
There are also examples of DNN architectures that are not based on existing sparse signal recovery algorithms. A deep learning framework based on stacked denoising autoencoders (SDA) was introduced in , which supports both linear and mildly nonlinear measurements. Majority-voting neural networks for the binary compressed sensing problem were proposed in , where the outputs of independently trained feed-forward neural networks are combined to obtain an estimate of a binary sparse vector. A computationally efficient approach to learn the sparse representation and recover the unknown signal vector using a deep convolutional network was proposed in . An approach to sparse signal recovery using generative adversarial networks (GANs) was proposed in , where an additional optimization problem is solved to find a vector in the latent space that generates the sparse vector corresponding to the observation. A cascaded DNN architecture to solve the sparse signal recovery problem was introduced in . In , a theoretical framework to design neural architectures for Bayesian compressive sensing was presented.
In many real-world applications, the nonzero elements of the sparse vector are clustered. For example, the detection of an extended target using a MIMO radar can be formulated as the recovery of a block-sparse signal vector. Different algorithms have been proposed in the sparse signal processing literature to recover block-sparse vectors, and many of them assume some prior knowledge about the block boundaries and block sizes. Algorithms like Model-CoSaMP , Block-OMP , Group Basis Pursuit , and block-sparse Bayesian learning (BSBL)  assume prior knowledge of the block partitions, and are sensitive to mismatches in the assumed block boundaries. Algorithms like pattern-coupled sparse Bayesian learning (PC-SBL)  and expanded block sparse Bayesian learning (EB-SBL)  do not require any prior knowledge about the block boundaries, and PC-SBL has shown superior performance over EB-SBL. However, PC-SBL requires a sub-optimal choice of the model parameters and of the solution, as a closed-form solution to the underlying optimization problem is not available. An alternative to EB-SBL and its relation to PC-SBL is presented in .
However, these algorithms are iterative in nature and computationally expensive. Deep learning based approaches that enhance the performance of block-sparse signal recovery with reduced computational complexity are less explored in the literature. A few DNN-based architectures have been proposed to recover sparse vectors in the multiple measurement vector (MMV) model: a DNN-based channel estimation scheme using the MMV model is presented in , and an LSTM-based sparse signal recovery scheme for the MMV model is explored in .
In the field of wireless communication, the measurement matrix connecting the sparse vector and the observation vector may depend on the channel between the transmitter and receiver. For example, in , multiuser detection in wireless communication is formulated as a block-sparse signal recovery problem, and the measurement matrix depends on the channel state information. Most of the DNN-based sparse signal recovery schemes in the literature assume that the observation vector is generated from a fixed measurement matrix. If the measurement matrix changes, the DNN must be retrained with new training data; this training procedure is computationally expensive and cannot be done in real time. Thus, the main drawbacks of the existing DNN-based sparse signal recovery schemes are:
Deep learning based block-sparse signal recovery schemes that require no prior knowledge of the block partition, using SMV or MMV models, are not well explored.
Existing deep learning architectures are not suitable in applications where the measurement matrix changes with each acquired measurement.
In this paper, we present a computationally efficient DNN architecture to recover sparse, block-sparse, and jointly sparse vectors. Our DNN architecture is inspired by the sparse Bayesian learning algorithm, and we name the resulting DNN Learned-SBL (L-SBL). Each layer of L-SBL is similar to an iteration of SBL. The outputs of an L-SBL layer are the estimate of the sparse vector and the diagonal elements of the error covariance matrix. An L-SBL layer comprises two stages. In the first stage, the signal covariance matrix is estimated by a neural network from the diagonal elements of the error covariance matrix and the estimate of the sparse vector at the output of the previous layer. In the second stage, the MAP estimates of the sparse vector and the error covariance matrix are computed without any trainable parameters. In L-SBL, any dependency of the measurement vectors on the underlying structure within or among the sparse vectors is captured by the neural network used to estimate the signal covariance matrix, which feeds the MAP estimation stage. Since the measurement matrix is used only in the MAP estimation stage, which has no trainable parameters, L-SBL can be trained with randomly drawn measurement matrices. Therefore, L-SBL can be used effectively in scenarios where the measurement matrix is arbitrary and differs across the multiple measurements. Further, L-SBL can utilize single or multiple measurement vectors during the recovery of sparse vectors. For example, if we train the network with single measurement vectors, L-SBL behaves as a sparse recovery algorithm similar to basic SBL; if the training data contains block-sparse vectors, L-SBL becomes a block-sparse recovery algorithm. That is, L-SBL can learn any underlying structure in the training data set. Further, L-SBL provides a computationally efficient recovery scheme compared to the corresponding iterative algorithms such as SBL, PC-SBL, and M-SBL. Our main contributions are as follows:
We design a deep learning architecture, named Learned-SBL, for different sparse signal recovery applications. Based on the nature of the training data, L-SBL can recover sparse, jointly sparse, or block-sparse vectors from single or multiple measurements.
We compare the performance of L-SBL with other algorithms and show the capability of L-SBL to avoid retraining in scenarios where the measurement matrix differs across the multiple measurements, or where the specific measurement matrix is not available during the training phase of the DNN.
We evaluate the capability of L-SBL to utilize any existing sparsity pattern among the nonzero elements of the source vectors to enhance the performance.
We examine the weight matrix learned by the L-SBL network in different scenarios. This provides insight into how L-SBL is able to adapt to different underlying sparsity patterns.
We evaluate the performance of L-SBL in the detection of an extended target using MIMO radar.
The rest of the paper is organized as follows. The problem formulation and an overview of SBL, M-SBL and PC-SBL algorithms are presented in section II. In section III, we present our Learned-SBL architecture. We introduce two architectures for the L-SBL layer and we compare the computational complexity of an L-SBL layer with an iteration of the SBL algorithm. We also describe the training algorithm for L-SBL. Numerical simulation results illustrating the performance of L-SBL in recovering sparse as well as block sparse vectors are presented in section IV. In section V, the extended target detection using MIMO radar is formulated as a block sparse signal recovery problem and the performance of L-SBL in target detection is evaluated. We offer some concluding remarks in section VI.
Throughout the paper, bold symbols in small and capital letters are used for vectors and matrices, respectively. $x_i$ denotes the $i$th element of the vector $\mathbf{x}$, and $A_{ij}$ represents the $(i,j)$th element of the matrix $\mathbf{A}$. For the matrix $\mathbf{A}$, $\mathbf{A}_{i,:}$ indicates the $i$th row of $\mathbf{A}$ and $\mathbf{A}_{:,j}$ indicates the $j$th column of $\mathbf{A}$. $\mathrm{tr}(\cdot)$ indicates the trace of a matrix. The $\ell_1$ and $\ell_2$ norms of the vector $\mathbf{x}$ are denoted by $\|\mathbf{x}\|_1$ and $\|\mathbf{x}\|_2$, respectively. For a matrix $\mathbf{A}$, $\mathbf{A}^T$ and $\mathbf{A}^{-1}$ denote the transpose and the inverse of the matrix, respectively. For a vector $\mathbf{x}$, $\mathrm{diag}(\mathbf{x})$ denotes a diagonal matrix with the elements of $\mathbf{x}$ as the diagonal entries. For a matrix $\mathbf{A}$, $\mathrm{diag}(\mathbf{A})$ denotes the column vector containing the diagonal elements of $\mathbf{A}$. $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ denotes the multivariate Gaussian distribution and $\Gamma(\cdot)$ denotes the Gamma function.
II Problem Formulation
We consider the problem of sparse signal recovery from $L$ measurement vectors $\mathbf{y}_1, \ldots, \mathbf{y}_L$, where each $\mathbf{y}_l$ is related to a sparse vector $\mathbf{x}_l$ by the expression

$$\mathbf{y}_l = \boldsymbol{\Phi}\mathbf{x}_l + \mathbf{w}_l.$$

The above expression can be rearranged as

$$\mathbf{Y} = \boldsymbol{\Phi}\mathbf{X} + \mathbf{W},$$

where $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$ denotes the known measurement matrix with $M < N$, $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_L]$ denotes the matrix with the multiple measurement vectors, and $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_L]$ represents the matrix with the sparse vectors as its columns. If $\mathbf{x}_l$ is a usual sparse vector, then it contains a few arbitrarily located nonzero elements. In the MMV model, we say that $\mathbf{x}_1, \ldots, \mathbf{x}_L$ are jointly sparse if the vectors share a common support. In the block-sparse case, the nonzero elements of $\mathbf{x}_l$ occur in a small number of clusters. We assume that the entries of the noise matrix $\mathbf{W}$ are i.i.d. zero-mean Gaussian with variance $\sigma^2$. In Bayesian learning, one seeks the maximum a posteriori (MAP) estimate of the sparse vectors, given by

$$\widehat{\mathbf{X}} = \arg\max_{\mathbf{X}}\, p\left(\mathbf{X} \mid \mathbf{Y}\right).$$
In Bayesian learning, the prior distribution on each sparse vector is modeled as zero-mean Gaussian with a diagonal covariance matrix $\boldsymbol{\Gamma} = \mathrm{diag}\left(\gamma_1, \ldots, \gamma_N\right)$. Then, the posterior distribution of $\mathbf{x}_l$ is also Gaussian, and the MAP estimate of $\mathbf{x}_l$ is the posterior mean:

$$\widehat{\mathbf{x}}_l = \boldsymbol{\Gamma}\boldsymbol{\Phi}^T\left(\boldsymbol{\Phi}\boldsymbol{\Gamma}\boldsymbol{\Phi}^T + \sigma^2\mathbf{I}\right)^{-1}\mathbf{y}_l.$$

Considering the $L$ measurements together, the MAP estimate and the error covariance matrix are given by

$$\widehat{\mathbf{X}} = \boldsymbol{\Gamma}\boldsymbol{\Phi}^T\left(\boldsymbol{\Phi}\boldsymbol{\Gamma}\boldsymbol{\Phi}^T + \sigma^2\mathbf{I}\right)^{-1}\mathbf{Y}, \qquad
\boldsymbol{\Sigma} = \boldsymbol{\Gamma} - \boldsymbol{\Gamma}\boldsymbol{\Phi}^T\left(\boldsymbol{\Phi}\boldsymbol{\Gamma}\boldsymbol{\Phi}^T + \sigma^2\mathbf{I}\right)^{-1}\boldsymbol{\Phi}\boldsymbol{\Gamma}.$$
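This MAP estimation step is straightforward to implement numerically. The following sketch is ours, not the paper's code; `Phi`, `gamma`, and `sigma2` stand for the measurement matrix, the prior variances, and the noise variance:

```python
import numpy as np

def map_estimate(Phi, Y, gamma, sigma2):
    """Posterior mean (MAP estimate) of the sparse vectors and the diagonal
    of the error covariance matrix, for the Gaussian prior diag(gamma)."""
    M = Phi.shape[0]
    GPt = gamma[:, None] * Phi.T                  # Gamma Phi^T  (N x M)
    C = Phi @ GPt + sigma2 * np.eye(M)            # Phi Gamma Phi^T + sigma^2 I
    K = GPt @ np.linalg.inv(C)                    # N x M gain matrix
    X_hat = K @ Y                                 # posterior mean
    Sigma_diag = gamma - np.sum(K * GPt, axis=1)  # diag of error covariance
    return X_hat, Sigma_diag
```

Note that only an $M \times M$ matrix is inverted, which matches the complexity discussion later in the paper.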
We use the above MAP estimate of the sparse vectors in each L-SBL layer.
II-A Sparse Bayesian Learning (SBL)
Sparse Bayesian learning [19, 22] is a well-known algorithm for recovering a sparse vector from an under-determined set of measurements. The SBL algorithm was originally proposed to recover the sparse vector $\mathbf{x}$ from a single measurement vector $\mathbf{y}$. In SBL, the sparse vector is modeled as being Gaussian distributed with a diagonal covariance matrix:

$$p\left(\mathbf{x}; \boldsymbol{\alpha}\right) = \prod_{i=1}^{N} \mathcal{N}\left(x_i; 0, \alpha_i^{-1}\right),$$

where $\alpha_i$ denotes the inverse of the variance (the precision) of the $i$th element of the sparse vector $\mathbf{x}$. Each $\alpha_i$ is assumed to be a Gamma distributed random variable with parameters $a$ and $b$:

$$p\left(\alpha_i\right) = \frac{b^a}{\Gamma(a)}\, \alpha_i^{a-1} e^{-b \alpha_i}.$$

It can be shown that the marginal prior distribution of $x_i$ with respect to the parameters $a$ and $b$ is a Student's t-distribution, which is known to be a sparsity-promoting prior distribution. Specifically, for small values of $a$ and $b$, the Student's t-distribution has a sharp peak at zero, which favors sparsity. To find an estimate of $\boldsymbol{\alpha}$, a lower bound on the posterior density $p(\boldsymbol{\alpha} \mid \mathbf{y})$ is maximized using the expectation-maximization (EM) algorithm. This leads to an iterative recipe, where the update of $\alpha_i$ at iteration $k+1$ is given by

$$\alpha_i^{(k+1)} = \frac{1 + 2a}{\left(\widehat{x}_i^{(k)}\right)^2 + \Sigma_{ii}^{(k)} + 2b},$$

where $\widehat{x}_i^{(k)}$ denotes the estimate of the $i$th element of the sparse vector and $\Sigma_{ii}^{(k)}$ is the $i$th diagonal entry of the estimated error covariance matrix $\boldsymbol{\Sigma}$ in the $k$th iteration.
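A minimal sketch of the resulting EM iteration, written in the $a, b \to 0$ limit and in terms of the prior variances $\gamma_i = \alpha_i^{-1}$ (the variable names and iteration count are ours):

```python
import numpy as np

def sbl(Phi, y, sigma2, num_iters=200):
    """Basic SBL via EM: alternate the posterior (MAP) step with the
    hyperparameter update gamma_i = mu_i^2 + Sigma_ii (a = b = 0)."""
    M, N = Phi.shape
    gamma = np.ones(N)
    for _ in range(num_iters):
        GPt = gamma[:, None] * Phi.T                  # Gamma Phi^T
        C = Phi @ GPt + sigma2 * np.eye(M)
        K = GPt @ np.linalg.inv(C)
        mu = K @ y                                    # posterior mean
        Sigma_diag = gamma - np.sum(K * GPt, axis=1)  # error variances
        gamma = mu ** 2 + Sigma_diag                  # EM update
    return mu, gamma
```

As the iterations proceed, the variances of the inactive coefficients shrink toward zero, pruning them from the model.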
II-B Sparse Bayesian Learning using Multiple Measurement Vectors (M-SBL)
In , the basic SBL algorithm is extended to handle multiple measurement vectors, resulting in the M-SBL algorithm. M-SBL reduces the failure rate and mean square error by exploiting the joint sparsity across the multiple sparse vectors. In M-SBL, each row of the matrix $\mathbf{X}$ is assumed to be distributed as a Gaussian random vector,

$$p\left(\mathbf{X}_{i,:}; \alpha_i\right) = \mathcal{N}\left(\mathbf{0}, \alpha_i^{-1}\mathbf{I}_L\right),$$

where the hyperparameters $\alpha_i$ are Gamma distributed, similar to (8). The hyperparameters are estimated by maximizing the posterior density $p(\boldsymbol{\alpha} \mid \mathbf{Y})$. Similar to the SBL algorithm, the update of $\alpha_i$ is obtained by maximizing a lower bound on the posterior using the EM algorithm, which leads to the iterative update equation

$$\alpha_i^{(k+1)} = \frac{1 + 2a}{\frac{1}{L}\sum_{l=1}^{L}\left(\widehat{X}_{il}^{(k)}\right)^2 + \Sigma_{ii}^{(k)} + 2b}.$$
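In the prior-variance form (the $a, b \to 0$ limit, as in the SBL sketch earlier), the M-SBL hyperparameter update simply averages over the measurement vectors; the function name is ours:

```python
import numpy as np

def msbl_gamma_update(Mu, Sigma_diag):
    """Row-wise M-SBL hyperparameter update: one prior variance per row of X,
    shared across the L measurement vectors. Mu is the N x L matrix of
    posterior means; Sigma_diag holds the N error variances."""
    return np.mean(Mu ** 2, axis=1) + Sigma_diag
```

With $L = 1$ this reduces to the SBL update, which is the sense in which M-SBL generalizes SBL.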
II-C Pattern-Coupled Sparse Bayesian Learning (PC-SBL)
Pattern-coupled sparse Bayesian learning  extends the SBL algorithm to recover block-sparse vectors when the block boundaries are unknown. In PC-SBL, since the nonzero elements occur in clusters, a coupling model is assumed between the adjacent elements of the vector. Mathematically, the prior on each element of $\mathbf{x}$ in (7) is modified so that its precision is coupled to the neighboring hyperparameters:

$$p\left(x_i; \boldsymbol{\alpha}\right) = \mathcal{N}\left(x_i; 0, \left(\alpha_i + \beta\alpha_{i-1} + \beta\alpha_{i+1}\right)^{-1}\right),$$

where $\beta$ is the nonnegative coupling parameter, and $\alpha_0$ and $\alpha_{N+1}$ are assumed to be zero. In PC-SBL, $\alpha_i$ is assumed to be a Gamma distributed random variable with parameters $a$ and $b$, similar to (8). The entanglement of the $\alpha_i$ through the coupling parameter precludes a closed-form solution in the M-step of the EM algorithm. However, we can find the feasible set for the solution as

$$\alpha_i^{(k+1)} \in \left[\frac{a}{b + \omega_i/2},\; \frac{a + 1/2}{b + \omega_i/2}\right],$$

where $\omega_i$ is given by

$$\omega_i = \left(\widehat{x}_i^{(k)}\right)^2 + \Sigma_{ii}^{(k)} + \beta\left[\left(\widehat{x}_{i-1}^{(k)}\right)^2 + \Sigma_{i-1,i-1}^{(k)}\right] + \beta\left[\left(\widehat{x}_{i+1}^{(k)}\right)^2 + \Sigma_{i+1,i+1}^{(k)}\right].$$

One major drawback of the PC-SBL algorithm is the sub-optimal choice of the update for $\alpha_i$ as the lower bound of the feasible set:

$$\alpha_i^{(k+1)} = \frac{a}{b + \omega_i/2}.$$

Due to this, the convergence of the EM algorithm is no longer guaranteed. Also, no theoretical guarantees on the performance are available, even though the algorithm empirically offers excellent recovery performance.
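The coupled prior of (12) is nonetheless easy to evaluate for a given hyperparameter vector. A small sketch (names ours) mapping the hyperparameters to the per-element prior variances under the coupling model:

```python
import numpy as np

def coupled_prior_variance(alpha, beta):
    """Prior variance of x_i under the PC-SBL coupling model:
    precision_i = alpha_i + beta*alpha_{i-1} + beta*alpha_{i+1},
    with boundary terms alpha_0 = alpha_{N+1} = 0."""
    a_prev = np.concatenate(([0.0], alpha[:-1]))   # alpha_{i-1}
    a_next = np.concatenate((alpha[1:], [0.0]))    # alpha_{i+1}
    return 1.0 / (alpha + beta * a_prev + beta * a_next)
```

Setting `beta = 0` recovers the independent SBL prior; any nonzero `beta` shrinks an element whenever its neighbors are strongly pruned.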
Many algorithms for block-sparse signal recovery require prior knowledge about the block boundaries. The PC-SBL algorithm does not require any such prior knowledge. However, PC-SBL assumes the coupling model in (12), which may be sub-optimal in many practical applications. In this coupling model, a zero coupling parameter leads to the original SBL algorithm, and any nonzero value couples the adjacent elements. The optimum choice of the coupling parameter depends on the nature of the block-sparse vectors. One could also consider coupling models other than (12), for example, models that couple more than the immediate neighbors. The main difficulty in using such models is obtaining a closed-form solution for the hyperparameters. In PC-SBL, a sub-optimal solution is chosen as the lower bound of the feasible set, and the remaining hyperparameters are also selected heuristically. The major drawbacks of the PC-SBL algorithm are summarized below.
The coupling parameter and the number of terms in the coupling model are selected heuristically.
Sub-optimal selection of the hyperparameter update from the feasible set.
Heuristic selection of the hyperparameters $a$ and $b$.
In such scenarios, deep learning can potentially do a better job by optimally estimating these parameters and the coupling model from a training data set.
Algorithms like M-SBL assume that the source vectors in the multiple measurements are jointly sparse. In several practical applications, however, the nonzero elements of the sparse vectors may not share a common support. For example, in the direction of arrival (DoA) estimation problem, the simultaneous presence of fast-moving and stationary targets can create a scenario like the one shown in Figure 14. In such cases, existing algorithms like M-SBL fail to utilize the multiple measurements to improve the performance of sparse signal recovery. In such multiple-measurement scenarios, a DNN can potentially learn an inverse function from the training data that incorporates arbitrary sparsity patterns among the source vectors to improve the signal recovery performance.
From (9), (11) and (15), we notice that the estimate of the signal covariance matrix at iteration $k+1$ can be expressed as a function of the sparse vector and the diagonal elements of the error covariance matrix estimated at iteration $k$. The update of the $i$th diagonal element of the signal covariance matrix can be written as

$$\gamma_i^{(k+1)} = f_i\left(\widehat{\mathbf{x}}^{(k)}, \mathrm{diag}\left(\boldsymbol{\Sigma}^{(k)}\right)\right),$$

where the function $f_i$ depends on the nature of the sparse signal recovery problem. From the training data, L-SBL can learn the functions $f_i$, which connect the previous estimate of the sparse vector to the signal covariance matrix in the current iteration. Such a DNN-based approach can avoid the major drawbacks of the existing approaches. In the MMV model with arbitrary source patterns, L-SBL can learn more suitable functions $f_i$ by utilizing the patterns among the source vectors. Further, the SBL algorithm was derived under the assumption that the probability density of the sparse vector is Gaussian; deviations from the assumed model may affect the performance of the algorithm. In such scenarios too, L-SBL can outperform SBL with reduced computational complexity. With this preliminary discussion of the motivation and need for more general sparse recovery techniques, we are now ready to present the L-SBL architecture in the next section.
III L-SBL Architecture and Training
III-A L-SBL Architecture
The L-SBL architecture with multiple layers is shown in Figure 1. Each layer of L-SBL is similar to an iteration of the SBL or PC-SBL algorithm. The inputs to the L-SBL network are the measurement matrix, the measurement vectors, and the noise variance. The outputs of each L-SBL layer are the estimate of the sparse vector and the diagonal elements of the error covariance matrix.
A single layer of L-SBL network has two stages. In the first stage, we have a neural network which accepts the estimate of sparse vector and diagonal elements of the error covariance matrix from the previous layer and gives the diagonal elements of the signal covariance matrix as the outputs. The designed L-SBL architecture can learn functions similar to (9), (11) or (15) depending on the nature of the training data. The DNN can also learn a better mapping which minimizes the mean square error between the sparse vector estimated by L-SBL and true sparse vector.
The second stage of the L-SBL layer gives an estimate of the sparse vector and error covariance matrix using (4). The output of the neural network in the first stage is used in the second stage for MAP estimation. The second stage of the L-SBL layer does not contain any trainable parameters.
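Putting the two stages together, one L-SBL layer (NW-1 style, in the SMV case) can be sketched as below. The single dense layer with a ReLU producing the prior variances is an illustrative choice of ours, not necessarily the paper's exact configuration:

```python
import numpy as np

def lsbl_layer(Phi, y, x_prev, d_prev, W, b, sigma2, eps=1e-6):
    """One L-SBL layer: a trainable dense map from the previous estimate
    (x_prev) and error variances (d_prev) to the prior variances gamma,
    followed by the parameter-free MAP stage."""
    # Stage 1: learned hyperparameter (signal covariance) estimate
    gamma = np.maximum(W @ np.concatenate([x_prev, d_prev]) + b, 0.0) + eps
    # Stage 2: closed-form MAP estimate, no trainable parameters
    M = Phi.shape[0]
    GPt = gamma[:, None] * Phi.T
    C = Phi @ GPt + sigma2 * np.eye(M)
    K = GPt @ np.linalg.inv(C)
    x_hat = K @ y                              # new sparse vector estimate
    d = gamma - np.sum(K * GPt, axis=1)        # new error variances
    return x_hat, d
```

Stacking such layers, with `(x_hat, d)` fed forward, mirrors unrolled SBL iterations with learned hyperparameter updates.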
We consider two different neural network architectures in the design of the L-SBL layer. These two architectures are identical for the single measurement vector (SMV) model. In the MMV model, the second architecture vectorizes the observation matrix into a single vector and modifies the measurement matrix accordingly. We describe the two architectures below.
L-SBL (NW-1): In this architecture, we use a dense network to estimate the signal covariance matrix from the outputs of the previous layer. The input nodes of the dense network accept the previous estimate of the sparse vector and the diagonal elements of the error covariance matrix. A single-layer or multi-layer dense network can be used to estimate the hyperparameters; in our numerical studies, we consider a single-layer dense network. The details of the L-SBL (NW-1) architecture are shown in Figure 2. In L-SBL (NW-1), there is one hyperparameter per element of the sparse vector. Note that, in the second stage, (4) is used for MAP estimation, which requires the inversion of a matrix of dimension $M \times M$.
L-SBL (NW-2): In the second architecture, the first stage of each layer outputs one hyperparameter per element of the matrix of sparse vectors, i.e., $NL$ output nodes, and the dense network takes the vectorized estimate of the sparse vectors and the error variances as inputs. Details of the L-SBL (NW-2) architecture are shown in Figure 3. The measurement matrix is modified to act on the vectorized observation matrix, so the MAP estimation stage requires the inversion of a matrix of dimension $ML \times ML$.
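The vectorization used by NW-2 follows the identity $\mathrm{vec}(\boldsymbol{\Phi}\mathbf{X}) = (\mathbf{I}_L \otimes \boldsymbol{\Phi})\,\mathrm{vec}(\mathbf{X})$. A quick numerical check (the dimensions are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L = 4, 6, 3
Phi = rng.standard_normal((M, N))
X = rng.standard_normal((N, L))
Y = Phi @ X
# Modified (block-diagonal) measurement matrix of size ML x NL
Phi_tilde = np.kron(np.eye(L), Phi)
# Column-stacked vec(.) convention
print(np.allclose(Phi_tilde @ X.flatten(order="F"), Y.flatten(order="F")))  # True
```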
L-SBL (NW-2) is computationally more expensive than L-SBL (NW-1): the number of hyperparameters and the dimension of the modified measurement matrix increase by a factor of $L$. However, L-SBL (NW-2) has more degrees of freedom than L-SBL (NW-1), and can therefore improve the signal recovery performance in scenarios where the nonzero elements among the source vectors follow arbitrary patterns (see Figure 14).
III-B Computational Complexity
The MAP estimation step is common to each layer of the L-SBL network and to each iteration of algorithms like SBL, M-SBL, and PC-SBL. The MAP estimation stage requires the inversion of an $M \times M$ matrix, a computationally expensive operation requiring $O(M^3)$ floating point operations. The SBL, M-SBL, and PC-SBL algorithms have computationally simple expressions to estimate the signal covariance matrix from the sparse vector and the diagonal elements of the error covariance matrix at the output of the previous iteration. Therefore, MAP estimation is the computationally expensive step in these algorithms.
In L-SBL, we use a neural network to estimate the signal covariance matrix. As long as the number of floating point operations in the neural network is less than that of the MAP estimation step, the computational complexity of the MAP estimation stage dominates, and one layer of L-SBL and one iteration of an algorithm like SBL have the same complexity. For example, if we use a single-layer dense network to estimate the signal covariance matrix, each layer of the L-SBL network needs a number of multiplications and additions determined by the size of the dense layer; as long as this is less than the cost of the matrix inversion, the computational complexity of an L-SBL layer and an iteration of an algorithm like SBL are of the same order. In the MMV model, the architecture L-SBL (NW-2) requires a matrix inverse of dimension $ML \times ML$, and is therefore computationally more expensive than L-SBL (NW-1). For jointly sparse source vectors, L-SBL (NW-1) is sufficient to reduce the mean square error and failure rate, and if the MAP estimation stage dominates, L-SBL (NW-1) and M-SBL have similar complexity. In our numerical simulations, we consider a single-layer dense network in each L-SBL layer; therefore, the computational complexity of an L-SBL layer and an iteration of the SBL algorithm are comparable.
III-C Training of L-SBL
Since we know the model connecting the unknown sparse vectors to the measurement vectors, we can train the L-SBL network using a synthetically generated data set; training on synthetic data has been used in many existing DNN-based sparse signal recovery schemes [7, 3, 18]. The algorithm used to train L-SBL is presented in Algorithm 1, and is similar to the training scheme of Learned-VAMP in . Each layer of the L-SBL network is trained one after another using the loss function given in (18). The training of each layer has two phases. In the first phase, the trainable parameters in the previous layers are frozen, and only the parameters of the current layer are updated using the training data. In the second phase, all trainable parameters from the first layer up to the current layer are updated. Algorithm 1 is parameterized by the set of trainable parameters in each layer, the total number of L-SBL layers, and the total number of mini-batches used in training each layer. The measurement vector is related to the sparse vector by $\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{w}$, as before.
The loss function used in training an L-SBL layer is the mean square error between the true sparse vectors and the output of the layer currently being trained:

$$\mathcal{L} = \frac{1}{S} \sum_{s=1}^{S} \left\|\widehat{\mathbf{x}}^{(s)} - \mathbf{x}^{(s)}\right\|_2^2,$$

where $S$ denotes the number of training samples in a mini-batch.
In this section, we discussed the architecture of the L-SBL network. Each layer of L-SBL comprises a hyperparameter estimation stage, implemented using a neural network, and a MAP estimation stage, which contains no trainable parameters. The presented architectures, L-SBL (NW-1) and L-SBL (NW-2), differ only in the MMV model, with L-SBL (NW-2) having more degrees of freedom than L-SBL (NW-1). We also described the procedure for training the L-SBL network on a synthetically generated data set. In the next section, we evaluate the performance of the proposed L-SBL network in different scenarios.
IV Numerical Analysis
In this section, we numerically evaluate the performance of L-SBL and compare it against other state-of-the-art algorithms in the literature. We first compare the performance of L-SBL in recovering usual (unstructured) sparse vectors with existing algorithms in IV-A. We then explore the potential of the L-SBL network to recover block-sparse vectors in IV-B. In IV-C, we evaluate the capability of L-SBL to avoid retraining in scenarios where the measurement matrix changes. We also demonstrate that L-SBL can exploit multiple measurements with jointly sparse source vectors to improve the recovery performance; the corresponding simulation results are presented in IV-D. Finally, we simulate source vectors with arbitrary patterns among the nonzero elements, as shown in Figure 14, and illustrate the signal recovery performance in IV-E. The numerical evaluation shows that L-SBL can utilize these source patterns in the training data set to learn a better inverse function and outperform existing algorithms like SBL and M-SBL. In IV-F, we analyze the weight matrices learned by the L-SBL network in different scenarios, such as the recovery of sparse and block-sparse vectors. In the simulation studies, we consider a single-layer dense network in each L-SBL layer; therefore, the computational complexity of an L-SBL layer and an iteration of the SBL algorithm are of the same order.
The relative mean square error (RMSE) and the failure rate are the two metrics considered to compare the performance of the different algorithms. Let $\widehat{\mathbf{x}}$ be the signal recovered by a sparse recovery algorithm. The relative mean square error is given by

$$\mathrm{RMSE} = \frac{\left\|\widehat{\mathbf{x}} - \mathbf{x}\right\|_2^2}{\left\|\mathbf{x}\right\|_2^2}.$$

The probability of success in the support recovery from a measurement vector is computed as

$$P_s = \frac{\left|\mathcal{S} \cap \widehat{\mathcal{S}}\right|}{\max\left(\left|\mathcal{S}\right|, |\widehat{\mathcal{S}}|\right)},$$

where $\mathcal{S}$ and $\widehat{\mathcal{S}}$ denote the support of $\mathbf{x}$ and $\widehat{\mathbf{x}}$, respectively, and $|\cdot|$ represents the cardinality of the set. The support recovery failure rate over a test set of $J$ measurement vectors is computed as

$$\mathrm{Failure\ rate} = \frac{1}{J} \sum_{j=1}^{J} \mathbb{1}\left(P_s^{(j)} < 1\right),$$

where $\mathbb{1}(\cdot)$ denotes the indicator function.
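These metrics are straightforward to compute. In the sketch below, the magnitude threshold used to declare an entry nonzero is our assumption (the paper does not restate it):

```python
import numpy as np

def relative_mse(x_true, x_hat):
    """Relative MSE: ||x_hat - x||^2 / ||x||^2."""
    return np.sum((x_true - x_hat) ** 2) / np.sum(x_true ** 2)

def support_failed(x_true, x_hat, tol=1e-3):
    """Return 1 if the recovered support differs from the true support,
    else 0. Entries with magnitude above tol are counted as nonzero."""
    s_true = set(np.flatnonzero(np.abs(x_true) > tol))
    s_hat = set(np.flatnonzero(np.abs(x_hat) > tol))
    return int(s_true != s_hat)
```

The failure rate is then the average of `support_failed` over the test set.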
IV-A Sparse Signal Recovery
In the first experiment, we demonstrate the performance of the L-SBL network in the recovery of sparse vectors from an under-determined set of measurements. The elements of the measurement matrix are drawn from a Gaussian distribution with zero mean and unit variance. The maximum number of nonzero elements in the sparse vector is restricted, and in both the training and testing data the number of nonzero elements is drawn uniformly up to this maximum. The amplitudes of the nonzero elements are chosen with uniform probability. We consider Orthogonal Matching Pursuit (OMP), Basis Pursuit (BP), CoSaMP, LISTA, sparse Bayesian learning (SBL), and L-SBL with eight layers in the comparison. Training of L-SBL is carried out using a synthetically generated data set according to Algorithm 1. After the training of the DNN is complete, the algorithms are compared using a testing data set; we use a set of measurement vectors during the testing phase to evaluate the performance at each sparsity level. The comparison of the failure rate and relative mean square error of the different algorithms is shown in Figures 4 and 5. The plots show that L-SBL and SBL outperform the other algorithms. The relative mean square error and failure rate of the eight-layer L-SBL are lower than those of the SBL algorithm run for a much larger number of iterations, indicating the computational advantage of the DNN-based approach.
IV-B Block Sparse Signal Recovery
To evaluate the block-sparse signal recovery performance of L-SBL, we consider the same simulation parameters as in . The elements of the measurement matrix are drawn from a zero-mean, unit-variance Gaussian distribution. Let $P$ be the number of nonzero blocks in the sparse vector. The procedure to determine the block sizes and block boundaries is described below.

Let $K$ be the number of nonzero elements in a sparse vector. We generate $P$ positive random numbers $r_1, \ldots, r_P$ such that $\sum_{j=1}^{P} r_j = 1$. The size of the $j$th block, for $j = 1, \ldots, P-1$, is chosen as $\lfloor K r_j \rfloor$, and the size of the last block is fixed so that the block sizes sum to $K$. The locations of the nonzero blocks are also chosen randomly: we first divide the vector into $P$ partitions of equal size, and then the $j$th nonzero block is placed inside the $j$th partition with a randomly chosen starting location. In the experiment, the maximum number of blocks is fixed, the number of nonzero elements in the sparse vector is varied, and the amplitudes of the nonzero elements are chosen with uniform probability. We compare EB-SBL, PC-SBL with two different iteration counts, and L-SBL with eight layers; the computational complexity of the eight-layer L-SBL is less than that of PC-SBL. The relative mean square error is plotted as a function of the cardinality of the true solution in Figure 6, and Figure 7 shows the failure rate of the different algorithms. The plots illustrate the superior performance of L-SBL over the other algorithms, PC-SBL and EB-SBL.
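The generation procedure above can be sketched as follows; the $\pm 1$ amplitudes and the parameter values in the example are our assumptions (the exact amplitude set used in the experiment is not restated here):

```python
import numpy as np

def random_block_sparse(N, K, P, rng):
    """Draw a block-sparse vector: P blocks whose sizes sum to K, the jth
    block placed at a random offset inside the jth of P equal partitions."""
    r = rng.random(P)
    sizes = np.floor(K * r / r.sum()).astype(int)
    sizes[-1] = K - sizes[:-1].sum()          # last block takes the remainder
    x = np.zeros(N)
    part = N // P
    for j, sz in enumerate(sizes):
        if sz == 0:
            continue
        start = j * part + rng.integers(0, part - sz + 1)
        x[start:start + sz] = rng.choice([-1.0, 1.0], size=sz)
    return x
```

Because each block stays inside its own partition, the blocks never overlap and the vector has exactly `K` nonzero entries.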
IV-C L-SBL with Arbitrary Measurement Matrix
The MAP estimation stage in an L-SBL layer does not contain any trainable parameters. Therefore, L-SBL can be trained using measurement vectors generated from different measurement matrices together with the corresponding sparse vectors. For each training sample, the elements of the measurement matrix are randomly drawn from a chosen distribution, and the measurement vector is generated as $\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{w}$, where $\mathbf{x}$ is a sparse or block-sparse vector. The measurement matrix and the measurement vector are both given as inputs to the L-SBL network. To illustrate the performance of block-sparse signal recovery with arbitrary measurement matrices, randomly drawn measurement matrices, measurement vectors, and sparse vectors are collected in the training data. In this experiment, we consider matrices whose elements are drawn from a zero-mean, unit-variance Gaussian distribution. The failure rate and relative mean square error of PC-SBL are compared with those of L-SBL in Figures 8 and 9.
Figures 8 and 9 show that the RMSE and failure rate of the L-SBL network trained for arbitrary measurement matrices are slightly higher than those of an L-SBL network trained for a specific (randomly selected) measurement matrix. Nonetheless, the L-SBL network trained for arbitrary measurement matrices outperforms the PC-SBL algorithm. This indicates that, when L-SBL is trained with a single measurement matrix, even though the matrix is not used in the trainable stage of the DNN, the weights of the DNN do adapt to its structure, yielding better recovery performance.
IV-D L-SBL with MMV Model
In this subsection, we illustrate the potential of L-SBL to exploit multiple measurements to reduce the mean square error and failure rate. First, we compare the M-SBL algorithm with L-SBL for sparse signal recovery using the MMV model. The source vectors are jointly sparse and the nonzero elements are independent and identically distributed. The M-SBL algorithm is compared with L-SBL in Figures 10 and 11. The plots show that the 11-layer L-SBL network outperforms the M-SBL algorithm at a comparable iteration count, and performs comparably to the iterative algorithm run for many more iterations.
In Figures 12 and 13, we evaluate the performance of L-SBL in recovering block sparse vectors from multiple measurements. In this comparison, we modified the original PC-SBL to utilize multiple measurement vectors. The computational complexity of PC-SBL with iterations is higher than that of L-SBL with six layers. The results show that, once again, L-SBL with six layers achieves a lower mean square error and failure rate than PC-SBL with iterations.
IV-E L-SBL for source vectors with arbitrary patterns
Here, we explore the performance of the L-SBL network when the source vectors in multiple measurements are not jointly sparse. Consider the patterns of nonzero elements among the source vectors shown in Figure 14. Such source patterns may arise, for example, in direction of arrival (DoA) estimation from multiple measurements, in scenarios where fast-moving and stationary targets are present together in the radar's field of view. During training of the L-SBL network, the data set is generated according to the patterns shown in Figure 14. The trained L-SBL model learns an inverse function that recovers the sparse vectors by exploiting the patterns present in the source vectors. We compare the two architectures presented in Section III with the SBL and M-SBL algorithms. The second architecture, L-SBL (NW2), has a higher number of degrees of freedom due to the increased dimensions of the modified measurement matrix and signal covariance matrix . We consider a measurement matrix with dimensions and in our simulation. The number of measurements is chosen as . The failure rate and relative mean square error of the SBL and M-SBL algorithms are compared with L-SBL in Figures 15 and 16.
The L-SBL (NW2) architecture shows superior performance over the other algorithms and over L-SBL (NW1). Since the source vectors are not jointly sparse, the performance of the M-SBL algorithm is poor, and the single-measurement SBL outperforms M-SBL.
IV-F Weight matrices learned by L-SBL
We now present the weight matrices learned by the DNN in the training phase, which yield interesting insights into its superior performance. In each iteration of the SBL algorithm, the hyperparameters are estimated from the outputs and of the previous iteration, and this estimate can be implemented as a matrix multiplication given by
In numerical simulations, we used a single-layer dense network for the estimation of the hyperparameters . A single-layer dense network learns a weight matrix
and a bias vector from the training data. The weight matrices learned by the L-SBL network are not the same as (24). The weight matrices implemented in two different L-SBL layers for the recovery of a sparse vector from a single measurement vector are shown in Figure 17, which indicates that different functions are implemented in different layers of the L-SBL network. Recall that the nonzero elements of the sparse vectors are drawn from a uniform distribution, which differs from the hierarchical Gaussian prior assumed by SBL. This deviation from the assumed signal model may explain the improved performance of the L-SBL network over SBL. The weight matrices learned for the recovery of block sparse vectors are also different from those in the single sparse vector recovery problem. The weight matrices of two L-SBL layers in the block sparse signal recovery problem are shown in Figure 18. The learned weight matrices introduce a coupling between the adjacent elements of the sparse vector . Moreover, the functions implemented in different layers of L-SBL are not the same. The weight matrices of L-SBL during testing with the MMV model are shown in Figure 19, which illustrates that L-SBL exploits the joint sparsity among multiple source vectors in the estimation of the hyperparameters. Finally, the weight matrices learned during training on samples with arbitrary patterns among the source vectors are shown in Figure 20. In this case, the off-diagonal elements of the second half of the weight matrix are also nonzero, which indicates that L-SBL utilizes the patterns of the nonzero elements among the source vectors to improve recovery performance. Note that, in (24), to estimate the hyperparameters , the diagonal elements of the error covariance matrix and the sparse vector estimate from the previous iteration are combined using equal weights. In contrast, the learned weight matrix gives more weight to the sparse vector estimate from the previous layer in the estimation of the hyperparameters.
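To make the contrast concrete, below is a minimal sketch of the classical SBL hyperparameter update written as a fixed matrix multiplication over the stacked features; an L-SBL layer instead learns the weight matrix and a bias from data. The function name is hypothetical, and the update shown is the standard EM form (gamma_i = Sigma_ii + mu_i^2, i.e. equal weights on the two feature halves), which may differ in detail from the paper's (24).

```python
import numpy as np

def sbl_em_gamma_update(Sigma_diag, mu):
    """Classical SBL EM update gamma_i = Sigma_ii + mu_i^2, written as a
    matrix multiplication with the fixed, equal-weight matrix W = [I  I]."""
    n = mu.size
    features = np.concatenate([Sigma_diag, mu**2])   # length-2n input vector
    W = np.hstack([np.eye(n), np.eye(n)])            # fixed [I  I] weight matrix
    return W @ features                              # equals Sigma_diag + mu**2

# An L-SBL layer replaces the fixed W with learned parameters:
#     gamma = W_learned @ features + b_learned
# so it can weight the two halves unequally and couple adjacent elements.
```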
In this section, we demonstrated the superior performance of L-SBL over competing algorithms in the recovery of sparse vectors from single or multiple measurements. The L-SBL network can recover block sparse vectors without knowledge of the block boundaries. Moreover, the weight matrices learned by L-SBL exploit arbitrary patterns of the nonzero elements among the source vectors to reduce the mean square error and failure rate. In the next section, we consider the localization of an extended target using a MIMO radar. The signal reflected from an extended target can be modeled as a block sparse vector, and the L-SBL network can be used to recover the target signature.
V Extended Target Detection using L-SBL
The detection of an extended target using radar or sonar can be modeled as a block sparse signal recovery problem. In , the variational garrote approach is extended to the recovery of a block sparse vector, where different block sizes are considered in different range bins. Since the block sparse signal recovery scheme proposed in  assumes prior knowledge of the block partitions, the algorithm is sensitive to boundary mismatches. We demonstrate that L-SBL can detect an extended target from multiple radar sweeps without knowledge of the block sizes or boundaries.
V-A Signal Model
Consider a narrowband MIMO radar system with transmitting antennas, receiving antennas and radar sweeps. Let be the waveform emitted by the transmitting antenna. The set of Doppler-shifted waveforms from the antennas for the Doppler bin, collected in the matrix , is
where is the Doppler-shifted waveform of the transmitting antenna, given by
The received sensor signal in radar sweep is given by
where denotes the number of angular bins, denotes the number of range bins and denotes the number of Doppler bins. represents the steering vector of the receiver array towards the look angle , denotes the steering vector of the transmitter array towards the look angle and is the scattering coefficient of the extended target. Further, denotes the zero-appended waveform matrix, given by
and is the shifting matrix that accounts for the different propagation times of the returns from adjacent bins at the receiving array.
The received signal in radar sweep given by (27) can be rewritten as
where , , and . The new measurement matrix is given by
where and . We can arrange the measurements of different radar sweeps as columns of the matrix . Similarly, let be the matrix with block sparse vectors as its columns. Then, the signal model can be expressed as
where represents the matrix with noise vectors . Since many software packages for deep neural network implementation do not support complex data types, we consider an equivalent model in the real vector space for extended target detection. We can express (32) as follows:
where the block sparse matrix is . The noise matrix contains independent and identically distributed Gaussian random variables with zero mean and variance . The measurement vector in the real vector space is given by , and the measurement matrix is related to as
where and denote the real and imaginary parts of a matrix, respectively. The dimensions of the measurement matrix in the real vector space are related to the dimensions of as and .
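The complex-to-real conversion described above follows the standard stacking of real and imaginary parts; a minimal sketch (function name assumed) is:

```python
import numpy as np

def complex_to_real_model(A, y):
    """Equivalent real-valued model for y = A x with complex A, x, y:
       [Re y; Im y] = [[Re A, -Im A], [Im A, Re A]] @ [Re x; Im x]."""
    A_r = np.block([[A.real, -A.imag],
                    [A.imag,  A.real]])
    y_r = np.concatenate([y.real, y.imag])
    return A_r, y_r
```

Both dimensions of the real-valued measurement matrix are thus doubled relative to the complex model, as noted above.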
V-B Numerical Results
In numerical simulations, we consider a MIMO radar system with transmitter antennas and receiver antennas. Ten angular bins are considered between and . The antenna spacing in both the receiver and the transmitter array is chosen as , where denotes the wavelength corresponding to the signal transmitted by the radar. The number of Doppler bins is chosen as one, and the transmitted waveform is selected as the Hadamard code of length . The number of radar sweeps is chosen as two, and the scattering coefficients of the extended target are drawn from the standard complex Gaussian distribution. We use a synthetically generated training data set, produced according to (33), to train the L-SBL network. The training procedure is as described in Algorithm 1. The SNR of the received block sparse vector is also chosen randomly between dB and dB in each training sample. The SNR in both the training and testing data is defined as
A realization of the target detected by L-SBL at 30 dB SNR is shown in Figure 21.
We compare the performance of L-SBL with PC-SBL. We also compare with the minimum mean square error (MMSE) estimator with known support, which attains the lowest mean square error. In Figure 22, we compare the relative mean square error (RMSE) of the different algorithms against the SNR. The plot shows that the relative mean square error of L-SBL with layers is lower than that of the PC-SBL algorithm with iterations.
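The relative mean square error used in these comparisons is typically the squared estimation error normalized by the signal energy; a small sketch under that assumption (the paper's exact normalization, e.g. averaging over trials, may differ):

```python
import numpy as np

def relative_mse(x_hat, x_true):
    """Relative MSE ||x_hat - x||^2 / ||x||^2 (Frobenius norm for matrices)."""
    return np.linalg.norm(x_hat - x_true) ** 2 / np.linalg.norm(x_true) ** 2
```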
In this paper, we presented a new deep learning architecture named Learned-SBL (L-SBL) for sparse signal processing. The L-SBL network can recover sparse or block sparse vectors, depending on how the network is trained. L-SBL utilizes multiple measurements to reduce the failure rate and mean square error, and it exploits arbitrary patterns of the nonzero elements among multiple source vectors to enhance performance. L-SBL also avoids retraining the network in applications where the measurement matrix changes with time. We compared L-SBL with other algorithms and demonstrated its application to the detection of an extended target using a MIMO radar.
References

- (2008) Model-based compressive sensing. IEEE Trans. Inf. Theory.
- (2017) Compressed sensing using generative models. In Proc. ICML, pp. 537–546.
- (2017) AMP-inspired deep networks for sparse linear inverse problems. IEEE Trans. Signal Process. 65 (16), pp. 4293–4308.
- (2018) Block-sparsity-based multiuser detection for uplink grant-free NOMA. IEEE Trans. Wireless Commun. 17 (12), pp. 7894–7909.
- (2010) Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 58 (6), pp. 3042–3054.
- (2015) Pattern-coupled sparse Bayesian learning for recovery of block-sparse signals. IEEE Trans. Signal Process. 63 (2), pp. 360–372.
- (2010) Learning fast approximations of sparse coding. In Proc. ICML, pp. 399–406.
- (2017) From Bayesian sparsity to gated recurrent nets. In Advances in Neural Information Processing Systems, pp. 5554–5564.
- (2014) Deep unfolding: model-based inspiration of novel deep architectures. arXiv preprint arXiv:1409.2574.
- (2016) Sparse signal recovery for binary compressed sensing by majority voting neural networks. arXiv preprint arXiv:1610.09463.
- (2018) A neural architecture for Bayesian compressive sensing over the simplex via Laplace techniques. IEEE Trans. Signal Process. 66 (22), pp. 6002–6015.
- (2017) Learned D-AMP: principled neural network based compressive image recovery. In Advances in Neural Information Processing Systems, pp. 1772–1783.
- (2018) Channel estimation for massive MIMO communication system using deep neural network. arXiv preprint arXiv:1806.09126.
- (2017) Learning to invert: signal recovery via deep convolutional networks. In Proc. ICASSP, pp. 2272–2276.
- (2015) A deep learning approach to structured signal recovery. In Allerton Conference on Communication, Control, and Computing, pp. 1336–1343.
- (2016) Distributed compressive sensing: a deep learning approach. IEEE Trans. Signal Process. 64 (17), pp. 4504–4518.
- (2016) Extended target localization using the variational Garrote. In Proc. SPAWC, pp. 1–6.
- (2017) Learning to optimize: training deep neural networks for wireless resource management. In Proc. SPAWC, pp. 1–6.
- Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research 1, pp. 211–244.
- (2008) Probing the Pareto frontier for basis pursuit solutions. SIAM Journal on Scientific Computing 31 (2), pp. 890–912.
- (2018) Alternative to extended block sparse Bayesian learning and its relation to pattern-coupled sparse Bayesian learning. IEEE Trans. Signal Process. 66 (10), pp. 2759–2771.
- (2004) Sparse Bayesian learning for basis selection. IEEE Trans. Signal Process. 52 (8), pp. 2153–2164.
- (2007) An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 55 (7), pp. 3704–3716.
- (2016) Maximal sparsity with deep networks?. In Advances in Neural Information Processing Systems, pp. 4340–4348.
- (2018) Cascade deep networks for sparse linear inverse problems. In Proc. ICPR, pp. 812–817.
- (2013) Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation. IEEE Trans. Signal Process. 61 (8), pp. 2009–2015.