I Introduction
Compressive sensing (CS) is a new image/signal acquisition framework that acquires only a few linear measurements [1, 2] with random measurement matrices. Such a framework has the potential to significantly improve imaging speed and sensor energy efficiency in real applications. Based on CS, several new imaging systems have been developed, including the single-pixel camera [3], compressive spectral imaging systems [4], high-speed video cameras [5], and fast Magnetic Resonance Imaging (MRI) systems [6]. Perfect reconstruction from the linear measurements is guaranteed if two conditions are met: the sensing matrices satisfy the restricted isometry property (RIP), and the images have a sparse representation with respect to a dictionary. These two conditions can be easily satisfied, as random Gaussian matrices have the RIP with high probability [1, 2] and natural images have sparse representations under many off-the-shelf or learned dictionaries (e.g., wavelets, learned dictionaries [7]). However, in practice the promise of CS is often offset by challenges related to these two conditions. First, it is difficult or even impossible to implement a random sensing matrix for a large image. Second, the sparsity-based CS reconstruction algorithms are very slow to converge to good estimates of the original images.
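As a quick illustration of the RIP condition discussed above, the following sketch (with arbitrary, illustrative sizes, not values from this paper) checks that a random Gaussian matrix approximately preserves the norm of a sparse vector:

```python
import numpy as np

# Illustrative sketch: a random Gaussian matrix with i.i.d. N(0, 1/M) entries
# approximately preserves the norm of sparse vectors, which is the intuition
# behind the restricted isometry property (RIP).
rng = np.random.default_rng(0)
N, M, K = 512, 128, 10            # signal length, #measurements, sparsity

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

ratio = np.linalg.norm(Phi @ x) / np.linalg.norm(x)
print(f"||Phi x|| / ||x|| = {ratio:.3f}")   # close to 1 for sparse x
```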
For a given image x ∈ R^N, the CS measurement of x can be expressed as y = Φx, where Φ ∈ R^{M×N} is the sensing matrix and M ≪ N. In practice, as the dimensionality of natural images is often very high, the memory required to store the matrix Φ is huge: even for a moderately sized image, Φ becomes enormous. The computational complexity of CS reconstruction with such a large sensing matrix is also prohibitively high. To avoid these difficulties, block-based CS (BCS) methods have been proposed [8, 9], where an image is divided into many non-overlapping blocks and each block is sensed individually. As such, the sensing matrices become much smaller and images can be efficiently measured. However, as the blocks are sensed and reconstructed individually, BCS methods suffer from serious blocking artifacts, and post-processing is often required to improve the visual quality.
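The block-based measurement scheme described above can be sketched as follows (block size, image size, and measurement rate are illustrative choices, not values from the paper):

```python
import numpy as np

# Block-based CS sketch (hypothetical sizes): instead of one huge sensing
# matrix for the whole image, each BxB block is measured with a small shared
# matrix Phi_B, which is cheap to store but treats blocks independently.
rng = np.random.default_rng(0)
H = W = 64                         # image size
B = 16                             # block size
rate = 0.25                        # measurement rate
mB = int(rate * B * B)             # measurements per block

img = rng.standard_normal((H, W))
Phi_B = rng.standard_normal((mB, B * B)) / np.sqrt(mB)

# Split the image into non-overlapping blocks and sense each one.
blocks = img.reshape(H // B, B, W // B, B).swapaxes(1, 2).reshape(-1, B * B)
y = blocks @ Phi_B.T               # one measurement vector per block

print(Phi_B.shape, y.shape)        # (64, 256) (16, 64)
```

Because each block is reconstructed independently from its own measurement vector, seams appear at block boundaries, which is the blocking-artifact problem the paper targets.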
According to the CS theory [1, 2], the original images can be well reconstructed by exploiting the sparsity prior of natural images. However, sparsity-based CS methods recover the original images by solving an optimization problem, and the solvers are very slow to converge; thus, optimization-based CS methods cannot be used in real-time applications. Recently, inspired by the great successes of deep neural networks (DNNs) in computer vision tasks [10, 11, 12], DNN-based CS reconstruction methods have also been proposed [13, 14, 15]. With a DNN, both the linear sensing and the nonlinear reconstruction can be performed by a single network. Through end-to-end training, the sensing matrix and the reconstruction method can be jointly optimized, and the reconstruction is usually hundreds of times faster than optimization-based methods. However, to the best of our knowledge, current DNN-based methods are all block-based, i.e., image blocks are sensed and reconstructed individually, and blocking artifacts can also be observed in their reconstructed images [13, 14].

In this paper, we propose a novel convolutional compressive sensing (ConvCS) framework based on a deep convolutional neural network (DCNN). In the proposed ConvCS network, the first layer senses an input image by convolving the whole image with a set of random filters, followed by subsampling. The advantage of the proposed ConvCS is that the whole image can be efficiently sensed with a set of small filters that are easy to store, and effectively reconstructed without introducing blocking artifacts. The remaining layers of the ConvCS network perform the nonlinear reconstruction of the whole image from the measurements. To design the reconstruction network, the domain knowledge of sparsity-based CS reconstruction is incorporated, leading to a novel CNN for image reconstruction. By end-to-end training, both the convolutional sensing filters and the reconstruction CNN are jointly optimized. Experimental results show that the proposed method substantially outperforms current state-of-the-art CS methods in terms of PSNR and visual quality.
II Related Work
II-A Background of CS
Instead of sampling the entire signal, CS samples a signal x ∈ R^N by taking only M linear measurements, i.e., y = Φx + n, where Φ ∈ R^{M×N} (M ≪ N) is the sensing matrix and n is the measurement noise. Since M ≪ N, recovering x from y is generally an ill-posed inverse problem. However, the CS theory guarantees perfect reconstruction of x if x is sparse in some sparsifying space and Φ satisfies the RIP. It has been proven that Gaussian random matrices have the RIP with very high probability. Standard CS methods recover x by solving an ℓ1-minimization problem,
α̂ = argmin_α (1/2)||y − ΦDα||²₂ + λ||α||₁,    (1)
where D is the dictionary and α denotes the sparse codes of x. After estimating the sparse codes α̂, x can be reconstructed as x̂ = Dα̂. It has been proven in [1, 2] that x can be faithfully recovered from M = O(K log(N/K)) measurements, where K denotes the number of nonzero coefficients of α. The ℓ1-minimization problem can be solved by many optimization algorithms [16, 17]. However, as mentioned above, these methods are very slow to converge.
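A minimal sketch of the iterative shrinkage/thresholding recovery of [16] is shown below, assuming D = I (identity dictionary) and arbitrary problem sizes and λ; it illustrates both the ℓ1 recovery and its iterative nature:

```python
import numpy as np

# Minimal ISTA sketch for the l1 problem in Eq. (1) with D = I; sizes and
# lambda are arbitrary illustrative choices, not values from the paper.
rng = np.random.default_rng(0)
N, M, K, lam = 256, 100, 8, 0.05

A = rng.standard_normal((M, N)) / np.sqrt(M)
alpha_true = np.zeros(N)
alpha_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ alpha_true

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
alpha = np.zeros(N)
for _ in range(300):               # many iterations are needed to converge
    grad = A.T @ (A @ alpha - y)
    alpha = soft(alpha - grad / L, lam / L)

err = np.linalg.norm(alpha - alpha_true) / np.linalg.norm(alpha_true)
```

The hundreds of matrix-vector products per image are the source of the slow convergence noted above, which motivates the feed-forward reconstruction networks discussed next.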
II-B Block-based image CS
When applying CS to an image, the image can be sensed by reshaping it into a vector x and measuring it as y = Φx. However, for images of high dimensionality, the sensing matrix Φ becomes very large, making it impossible to store and compute with. To avoid such difficulties, block-based CS (BCS) methods have been proposed [8, 9]. In these methods, the input image is divided into many non-overlapping blocks and each block is sensed independently using a much smaller matrix. The full image is recovered by placing back the reconstructed blocks, followed by full-image smoothing. To reduce the blocking artifacts, a full-image iterative shrinkage algorithm was proposed [9], and improvements can be achieved by using more advanced transforms, such as contourlets and dual-tree discrete wavelet transforms [8]. In addition to the sparsity prior, model-based CS recovery algorithms have also been developed to exploit the high-order dependencies between wavelet coefficients [18], leading to better performance. The effective nonlocal self-similarity prior has also been integrated into the objective function through nonlocal low-rank regularization [19].

II-C Structured CS for images
To overcome the drawbacks of BCS, structured CS operators [20, 21, 22] have been proposed. Convolutional CS methods, which perform sensing by first convolving a signal with a random filter and then subsampling, were proposed in [20, 22]. These CS methods are easy to implement and have many potential applications, such as Fourier optics [20], radar imaging [21], and coded aperture imaging [23]. In addition to random filters, deterministic filters that are more convenient to implement have also been proposed in [22]. Although convolution-based CS is easier to implement, the use of only one random filter makes the reconstruction problem more difficult [20]. Moreover, all of these convolution-based CS methods use iterative algorithms to reconstruct the original images, which are very slow to converge.
II-D Deep-learning-based CS for images
Recently, inspired by the successes of deep neural networks, non-iterative CS reconstruction methods have been proposed [13, 14, 15]. In [13], a stacked denoising autoencoder network was developed for image CS. Similarly, a fully-connected neural network was proposed in [15]. A convolutional neural network has also been proposed for this task, where a BM3D denoising [24] stage is adopted to further improve the reconstruction performance [14]. Through end-to-end training, both the sensing matrices and the reconstruction network can be jointly optimized for better performance. However, all of these methods perform CS and reconstruction on image blocks, leading to limited reconstruction performance.
In this paper, we propose a new convolutional compressive sensing (denoted as ConvCS) framework using a deep convolutional neural network (DCNN), where the first layer implements the sensing by convolving the input image with a set of random filters, followed by subsampling. The remaining layers reconstruct the input image from the linear measurements. Furthermore, inspired by the sparsity-based reconstruction model, a novel CNN containing two branches is proposed for CS reconstruction. By performing sensing and reconstruction on the whole image, the proposed method significantly outperforms previous CS methods. Unlike previous convolution-based CS methods [20, 22], both the sensing filters and the reconstruction algorithm can be jointly optimized. Experimental results show that the proposed method outperforms existing state-of-the-art CS methods by a large margin.
III Proposed Convolutional Compressive Sensing Using Deep Learning
III-A Proposed convolutional compressive sensing
Unlike existing BCS and convolutional CS, we propose to sense an image by convolving it with a set of random filters, followed by spatial subsampling of the convolved images. Specifically, for a given image x, we convolve it with a set of random filters w_i of size k × k, i = 1, …, m, to generate the CS measurements. Mathematically, the sensing matrix can be expressed as
Φ = [D_1W_1; D_2W_2; …; D_mW_m],    (2)
where W_i is a sparse matrix such that W_i x is equivalent to convolving x with the filter w_i, and D_i is a subsampling matrix. Then, the proposed convolutional CS (ConvCS) of x can be formulated as y = Φx, where Φ ∈ R^{M×N} and M ≪ N.
As convolution can also be implemented by matrix-vector multiplication, the proposed ConvCS is equivalent to a CS process that first extracts blocks of size k × k with sliding step s and then measures the blocks with a sensing matrix whose i-th row is the vectorized filter coefficients of w_i. Thus, the proposed ConvCS matrix of Eq. (2) still holds the advantages of the random Gaussian matrix for CS (i.e., the RIP). However, the proposed ConvCS is clearly distinct from previous BCS methods in two aspects. First, the proposed ConvCS is much easier to implement for images of large dimensions. Second, the convolutional nature of the proposed ConvCS makes the joint optimization of the sensing filters and the reconstruction of the whole image much more effective, without introducing any blocking artifacts.
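The equivalence claimed above — convolving with m filters and subsampling with stride s versus extracting k × k blocks with sliding step s and measuring them with the vectorized filters — can be verified numerically; the sizes below are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# "Convolution" here means cross-correlation, as is conventional for CNN
# layers; all sizes are small illustrative choices.
rng = np.random.default_rng(0)
N, k, m, s = 16, 4, 3, 4
img = rng.standard_normal((N, N))
filters = rng.standard_normal((m, k, k))

# Path 1: sliding cross-correlation with each filter, then stride-s subsampling.
full = np.stack([
    np.array([[np.sum(img[i:i + k, j:j + k] * w)
               for j in range(N - k + 1)]
              for i in range(N - k + 1)])
    for w in filters
])
y_conv = full[:, ::s, ::s].reshape(m, -1)

# Path 2: extract k x k patches with step s, measure with the filter matrix
# whose i-th row is the vectorized filter w_i.
patches = sliding_window_view(img, (k, k))[::s, ::s].reshape(-1, k * k)
Phi = filters.reshape(m, -1)
y_block = (patches @ Phi.T).T

assert np.allclose(y_conv, y_block)   # both paths give the same measurements
```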
The proposed ConvCS can be easily implemented using a convolutional neural network, as shown in Fig. 1 (a). The first layer convolves the input image with a set of m random filters of size k × k with stride s. The measurements y can be obtained by reshaping the resulting feature maps into an M-dimensional vector. Note that no nonlinearity is involved in the CS process. More convolutional layers with nonlinear activation functions could be added to the ConvCS process, which may lead to better performance; however, for simplicity, here we only use one linear layer to obtain the measurements. As shown in Sec. IV, this simple ConvCS can already lead to excellent CS recovery performance.

Essentially, the ConvCS encodes the visual information of images and plays a role similar to autoencoders. In the context of CS, stacked denoising autoencoders (SDA) [13] have been proposed to encode images. However, the SDA method performs CS encoding at the image-block level, resulting in serious blocking artifacts.

III-B Sparse-model-inspired DCNN for CS reconstruction
After obtaining the CS measurements, we aim to recover the original image from them. In recent years, DCNNs have shown very promising performance for many low-level image processing tasks, e.g., image super-resolution [25, 26]. However, reconstructing images from the ConvCS measurements differs from the image super-resolution problem, and existing reconstruction networks cannot be applied directly to this task. To facilitate the design of the CS reconstruction network, we propose to incorporate the domain knowledge of sparsity-based CS reconstruction. Specifically, we first propose the following analysis sparse representation model for CS reconstruction,

(x̂, ẑ) = argmin_{x, z_i} (1/2)||y − Φx||²₂ + η Σ_i [ (1/2)||w_i * x − z_i||²₂ + ρ(z_i) ],    (3)
where w_i is the analysis filter, * denotes 2D convolution, and ρ(·) denotes a regularization term imposed on the sparse codes z_i. Classic sparsity-enforcing regularizers, e.g., the ℓ_p norm (0 ≤ p ≤ 1), as well as the nonnegative indicator function (inspired by the ReLU function), can be adopted. Solving Eq. (3) amounts to alternately solving two subproblems, i.e.,

z_i^(t) = argmin_{z_i} (1/2)||r_i − z_i||²₂ + ρ(z_i),
x^(t+1) = argmin_x (1/2)||y − Φx||²₂ + (η/2) Σ_i ||w_i * x − z_i^(t)||²₂,    (4)
where r_i = w_i * x^(t). Both subproblems can be easily optimized. With a fixed estimate x^(t) obtained at the t-th iteration, z_i can be solved in closed form for several commonly used sparsity regularizers, as
z_i^(t) = S_λ(w_i * x^(t))   or   z_i^(t) = max(w_i * x^(t), 0),    (5)
for the ℓ1-norm sparse regularizer and the nonnegative regularizer, respectively, where S_λ(·) denotes soft-thresholding with threshold λ. The x subproblem is a quadratic optimization problem and can be solved in closed form. However, to avoid large matrix inversions, we prefer to solve it via gradient descent. With a fixed estimate of z_i, we iteratively update x as
x^(t+1) = x^(t) − δ [ Φᵀ(Φx^(t) − y) + η Σ_i W_iᵀ(W_i x^(t) − z_i^(t)) ],    (6)
where W_i is the sparse matrix such that W_i x is equivalent to convolving x with w_i, and δ is a predefined constant. By alternately updating z_i and x, the iterative process will converge. However, the convergence speed is very slow.
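A toy numerical sketch of this alternating scheme (a 1-D signal, a single circular difference filter as the analysis operator, and arbitrary values of δ, η, and the threshold — illustrative assumptions, not the paper's settings) shows that the alternating updates decrease the objective:

```python
import numpy as np

# Toy sketch of the alternating z / x updates for the analysis sparse model.
rng = np.random.default_rng(0)
n, M = 64, 32
delta, eta, lam = 0.05, 1.0, 0.1

W = np.eye(n) - np.roll(np.eye(n), 1, axis=1)     # circular differences
Phi = rng.standard_normal((M, n)) / np.sqrt(M)
x_true = np.cumsum(rng.standard_normal(n))        # smooth (random-walk) signal
y = Phi @ x_true

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x, z):
    return (0.5 * np.sum((y - Phi @ x) ** 2)
            + eta * (0.5 * np.sum((W @ x - z) ** 2) + lam * np.sum(np.abs(z))))

x = np.zeros(n)
z = soft(W @ x, lam)
f0 = objective(x, z)
for _ in range(200):
    z = soft(W @ x, lam)                           # closed-form z-update, cf. Eq. (5)
    x = x - delta * (Phi.T @ (Phi @ x - y)         # gradient x-update, cf. Eq. (6)
                     + eta * W.T @ (W @ x - z))
f1 = objective(x, z)
```

Note the many iterations required even for this tiny problem, which is the slow convergence the proposed network avoids.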
In this paper, we propose to convert the alternating updates of z_i and x into a deep network. For simplicity, we rewrite Eq. (6) as
x^(t+1) = x^(t) − δ h(x^(t); y) + δη Σ_i W_iᵀ(z_i^(t) − W_i x^(t)),    (7)
where h(x; y) = Φᵀ(Φx − y) denotes the process that first senses x with Φ and then back-projects the measurement residual. For simplicity, we let x̃ denote the initial reconstruction of x from y and g(z^(t)) = δη Σ_i W_iᵀ z_i^(t). Then, Eq. (7) can be approximated as
x^(t+1) ≈ g(z^(t)) + c_1 x^(t) + c_2 x^(t−1),    (8)
where c_1 and c_2 are weighting constants, x^(0) = x̃ denotes the initial reconstruction of x from y, and g(z^(t)) denotes the image reconstructed from z^(t).
Inspired by the alternating updates of z and x, we propose a novel DCNN for CS reconstruction. As shown in Fig. 1 (b), the proposed reconstruction network contains two branches. The first branch implements a conventional CNN for generating the feature maps z. Taking the measurement vector y as input, it first back-projects y into feature maps in which all entries are zero except at the sampled positions (marked red in Fig. 1 (b)). The first layer generates the initial feature maps, the remaining layers progressively refine them, and the ReLU function is applied after each convolution. Compared to a direct implementation of Eq. (5), the deep CNN is more powerful in learning the representation of the original image x; the first branch can thus be regarded as a nonlinear mapping function used to accurately predict the sparse codes z. The second branch recursively reconstructs the image based on the feature maps from the first branch and the previously reconstructed images x^(t) and x^(t−1), mimicking the computations of Eqs. (7) and (8). In each layer of the second branch, the feature maps from the CNN branch are fed into a convolutional layer to produce an image, which is then added to the previous reconstructions x^(t) and x^(t−1) to form the updated estimate x^(t+1). The kernel size used in the reconstruction branch is the same as in the feature branch.
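The data flow of the recursive second branch can be sketched as follows; the projection weights and the scalars c_1, c_2 below are random stand-ins, not trained parameters, so this only illustrates the Eq. (8)-style recursion, not the learned reconstruction:

```python
import numpy as np

# Structural sketch of the recursive second branch: each stage maps the
# current feature maps to an image and combines it with the two previous
# estimates, mirroring x^(t+1) = g(z^(t)) + c1 * x^(t) + c2 * x^(t-1).
rng = np.random.default_rng(0)
H, W, C, T = 32, 32, 8, 5           # image size, #feature channels, #stages

def feat_to_image(feat, proj):
    # 1x1 "convolution": project C feature channels down to one image.
    return np.tensordot(feat, proj, axes=([0], [0]))

x_prev2 = np.zeros((H, W))                     # x^(t-1)
x_prev1 = rng.standard_normal((H, W)) * 0.1    # initial reconstruction x~
for t in range(T):
    feat = rng.standard_normal((C, H, W))      # stand-in for the CNN branch
    proj = rng.standard_normal(C) / C
    c1, c2 = 0.6, 0.3                          # stand-ins for learned scalars
    x_new = feat_to_image(feat, proj) + c1 * x_prev1 + c2 * x_prev2
    x_prev2, x_prev1 = x_prev1, x_new
```

In the actual network the feature maps come from the trained CNN branch and the combination weights are learned end-to-end.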
IV Experimental results
IV-A Training details
To achieve better performance, the sensing layer and the remaining reconstruction layers are jointly trained, so that the sensing filters and the reconstruction network are jointly optimized. Let ConvCSNet denote the proposed network performing convolutional CS and reconstruction. To verify the effectiveness of the proposed two-branch reconstruction network, we also implement a variant of ConvCSNet (denoted as ConvCSNet-baseline), which uses only the first branch of the reconstruction network shown in Fig. 1 (b) for CS reconstruction. To train the proposed networks, we collected natural images from the ImageNet dataset [27] and extracted the central part of each image. The extracted patches were converted into grayscale and augmented via horizontal and vertical flips and rotations, yielding the final training set of image patches. We empirically set the parameters of the convolutional sensing layer, i.e., the filter size k, the number of filters m, and the convolutional stride s, separately for each measurement rate. The parameters of the reconstruction layers are the same for all measurement rates, except the first layer of the reconstruction part of ConvCSNet, whose filter size matches the one used in the sensing layer. The proposed network is trained using the ℓ2 loss, i.e., the mean squared error between the reconstructed and ground-truth patches over the training set, where all network parameters (including the sensing filters) are learned. The network is trained with the ADAM optimizer [28] using mini-batches; the learning rate is initialized to a small value and halved periodically during training. We implemented the proposed network in the TensorFlow framework and trained it on 4 NVIDIA 1080Ti GPUs; training ConvCSNet takes one day. Currently, we train one model for each sensing rate. In the future, a general reconstruction network may be trained to handle different measurement rates.
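The ℓ2 training loss described above can be sketched as a simple mean squared error over a batch of patches; the 1/(2P) normalization is one common convention and is an assumption here, as the paper's exact normalization is not specified:

```python
import numpy as np

# Sketch of the per-batch l2 training loss: mean squared reconstruction error
# over P training patches (the 1/(2P) factor is an assumed convention).
def l2_loss(recon, target):
    P = recon.shape[0]                     # number of patches in the batch
    return np.sum((recon - target) ** 2) / (2 * P)

recon = np.zeros((2, 4))                   # toy "reconstructions"
target = np.ones((2, 4))                   # toy ground-truth patches
print(l2_loss(recon, target))              # 2.0
```

In end-to-end training this loss is back-propagated through both the reconstruction layers and the convolutional sensing layer, which is how the sensing filters are jointly optimized.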
IV-B Comparison with state-of-the-art methods
[Table I: PSNR (dB) of the test methods on the individual test images (butterfly, parthenon, starfish, flower, girl, leaves, raccoon) and their average, at measurement ratios 0.05, 0.1, 0.2, and 0.3, for both noiseless and noisy measurements.]
We compare the proposed ConvCSNet method with two iterative CS methods, namely the total variation method (denoted as TV) [29] and the denoising-based approximate message passing (DAMP) method [30], as well as one recently developed fully-connected-network-based CS method (denoted as FCNCS) [15]. Note that both the TV [29] and DAMP [30] methods conduct compressive sensing on the whole image using very large measurement matrices; for a full-sized input image, the measurement matrix requires a huge amount of memory to store on the Matlab platform, and the iterative CS reconstruction with such a large sensing matrix is also very slow. However, the advantage of using such large measurement matrices is that the reconstruction quality is much improved. Also note that the DAMP method [30] uses the well-known BM3D denoising method [24] in its iterative reconstruction process; as BM3D is very effective at suppressing noise and artifacts, DAMP achieves state-of-the-art CS performance. The deep-learning-based FCNCS method [15] performs sensing and reconstruction on blocks. As the authors of FCNCS provided only the test code on their website, we reimplemented the training algorithm of FCNCS and trained the network on our training dataset. Since we use a larger training dataset, the performance of FCNCS is much improved compared with that obtained using the model provided by the authors of [15].
We also attempted to compare our method with ReconNet [14]. However, the results obtained using the code downloaded from the authors' website are worse than those reported in their paper, possibly due to different parameter settings; tuning the parameters of that method for better results is beyond the scope of this paper. Hence, we did not include ReconNet [14] in our comparison study. All codes of the competing methods were downloaded from the authors' websites. We generate the measurements at four measurement rates: 0.05, 0.1, 0.2, and 0.3. To verify the robustness of the reconstruction algorithms to measurement noise, we also conducted CS reconstruction using noisy measurements, to which Gaussian noise of a fixed standard deviation is added. A set of natural images is used as test images, as shown in Fig. 2. The well-known Berkeley segmentation dataset containing 100 natural images (denoted as BSD100) is also used to evaluate the test methods; for BSD100, we extract the central part of each image as the test image. Note that all test images are excluded from the training dataset.

Table II: Average PSNR (dB) results on the BSD100 dataset.

                    |       Noiseless         |          Noisy
Ratio               | 0.05   0.1   0.2   0.3  | 0.05   0.1   0.2   0.3
TV [29]             | 24.09  26.00 28.46 30.45| 21.96  22.95 24.17 25.16
FCNCS [15]          | 25.92  27.54 29.85 31.57| 25.89  27.29 29.03 30.74
DAMP [30]           | 25.48  27.66 30.65 33.12| 22.63  23.72 24.95 25.81
ConvCSNet-baseline  | 26.25  28.12 30.36 33.08| 25.80  27.66 28.96 31.15
ConvCSNet           | 26.47  28.19 31.03 33.49| 26.22  28.09 29.63 32.25
Table III: Running time (seconds) for reconstructing a test image.

Methods  | TV [29] | FCNCS [15] | DAMP [30] | ConvCSNet
Time (s) | 82.47   | 0.41       | 60.79     | 0.08
Table I reports the PSNR results of the test methods on the set of test images shown in Fig. 2. The proposed ConvCSNet outperforms the ConvCSNet-baseline method by a large margin, demonstrating the effectiveness of the proposed two-branch reconstruction network. ConvCSNet also performs better than all competing methods at all measurement rates: it significantly outperforms the block-based FCNCS method, and although the DAMP method performs very well at high measurement rates (e.g., 0.2 and 0.3), ConvCSNet still performs much better than DAMP at all rates. In the noisy cases, ConvCSNet likewise outperforms the other methods by large margins. Table II shows the average PSNR results on the BSD100 dataset, where the proposed ConvCSNet again outperforms all competing methods in both the noiseless and noisy cases.
Figs. 3 and 4 show parts of the images reconstructed by the test methods. Clearly, the visual quality of the images reconstructed by the proposed ConvCSNet method is significantly better than that of the competing methods. The proposed method can generate visually pleasant images even at a low measurement rate, while the other methods produce images with severe visual artifacts. For more visual comparisons, please refer to the supplementary material.
Regarding computational complexity, Table III reports the running time of the test methods on a test image for the noiseless case. For the TV [29] and DAMP [30] methods, a computer with an Intel i7-6700 3.4 GHz CPU was used to run the algorithms provided by the authors, while for the FCNCS [15] and proposed methods, an NVIDIA GTX 1080Ti GPU was used to compute the CS reconstructions. From Table III, the proposed ConvCSNet is about 1000 times faster than the TV method [29] and about 760 times faster than the DAMP method [30]. Note that the high computational complexity of TV and DAMP stems not only from slow convergence but also from the use of huge measurement matrices. Compared to the block-based FCNCS method, our method is also about 5 times faster.
V Conclusions
Compressive sensing is a new image/signal acquisition paradigm with great potential for high-speed and energy-efficient imaging applications. However, a practical issue in applying the CS theory is the huge memory and computation required to sense a whole image. While performing CS at the image-block level leads to efficient measurement, it significantly degrades reconstruction performance. In this paper, we propose a novel convolutional compressive sensing (ConvCS) method based on deep learning. In the proposed ConvCS network, the first layer senses the whole image using a set of convolutional filters, and the remaining layers reconstruct the whole image. For better CS reconstruction, a novel two-branch convolutional neural network is proposed. Through end-to-end training, both the sensing filters and the reconstruction network are jointly optimized. Experimental results show that the proposed method significantly outperforms existing iterative and deep-learning-based CS methods.
References
 [1] E. J. Candès, J. K. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
 [2] D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
 [3] M. Duarte, M. Davenport, D. Takbar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
 [4] M. E. Gehm, R. John, D. Brady, R. Willett, and T. J. Schulz, "Single-shot compressive spectral imaging with a dual-disperser architecture," Optics Express, vol. 15, no. 21, pp. 14013–14027, 2007.
 [5] Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned overcomplete dictionary,” in Proc. of the IEEE ICCV, 2011, pp. 287–294.
 [6] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, "Compressed sensing MRI," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 72–82, 2008.
 [7] M. Elad and M. Aharon, “Image denoising via sparse and redundant representation over learned dictionaries,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
 [8] J. E. Fowler, S. Mun, and E. W. Tramel, "Block-based compressed sensing of images and video," Foundations and Trends in Signal Processing, vol. 4, no. 4, pp. 297–416, 2012.
 [9] S. Mun and J. E. Fowler, “Block compressed sensing of images using directional transforms,” in Proc. of the ICIP, 2009, pp. 3021–3024.
 [10] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. of the NIPS, 2012, pp. 1097–1105.
 [11] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proc. of the CVPR, 2014, pp. 580–587.
 [12] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. of the CVPR, 2015, pp. 3431–3440.
 [13] A. Mousavi, A. B. Patel, and R. G. Baraniuk, “A deep learning approach to structured signal recovery,” in Proc. of Annual Allerton Conf. on Comm., Control, and Computing, 2015, pp. 1336–1343.
 [14] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, "ReconNet: non-iterative reconstruction of images from compressively sensed random measurements," in Proc. of CVPR, 2016, pp. 449–458.
 [15] A. Adler, D. Boublil, M. Elad, and M. Zibulevsky, "A deep learning approach to block-based compressed sensing of images," in Proc. of ICASSP, 2017.
 [16] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. 57, no. 11, pp. 1413–1457, 2004.
 [17] M. Zibulevsky and M. Elad, "ℓ1-ℓ2 optimization in signal and image processing," IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 76–88, 2010.
 [18] R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde, “Modelbased compressive sensing,” IEEE Trans. on Information Theory, vol. 56, no. 4, pp. 1982–2001, 2010.
 [19] W. Dong, G. Shi, X. Li, Y. Ma, and F. Huang, “Compressive sensing via nonlocal lowrank regularization,” IEEE Trans. on Image Processing, vol. 23, no. 8, pp. 3618–3632, 2014.
 [20] J. Romberg, “Compressive sensing by random convolution,” SIAM J. Imaging Sci., vol. 2, no. 4, pp. 1098–1128, 2009.
 [21] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proc. of VCIP, 2010.
 [22] K. Li, L. Gan, and C. Ling, “Convolutional compressed sensing using deterministic sequences,” IEEE Trans. on Signal Processing, vol. 61, no. 3, pp. 740–752, 2013.
 [23] R. Marcia, Z. Harmany, and R. Willett, “Compressive coded aperture imaging,” in Proc. of SPIE Electron. Imag., 2009.
 [24] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3D transform-domain collaborative filtering," IEEE Trans. on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
 [25] C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in Proc. of ECCV, 2014.
 [26] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in Proc. of CVPR, 2016, pp. 1646–1654.
 [27] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," Int. J. Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
 [28] D. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in Proc. of ICLR, 2014.
 [29] C. Li, W. Yin, H. Jiang, and Y. Zhang, “An efficient augmented lagrangian method with applications to total variation minimization,” Computational Optimization and Applications, vol. 56, no. 3, pp. 507–530, 2013.
 [30] C. A. Metzler, A. Maleki, and R. G. Baraniuk, "From denoising to compressed sensing," IEEE Trans. on Information Theory, vol. 62, no. 9, pp. 5117–5144, 2016.