The Power of Triply Complementary Priors for Image Compressive Sensing

05/16/2020, by Zhiyuan Zha, et al.

Recent works utilizing deep models have achieved superior results in various image restoration applications. Such approaches are typically supervised, requiring a corpus of training images whose distribution is similar to that of the images to be recovered. On the other hand, shallow methods, which are usually unsupervised, remain competitive in many inverse problems, e.g., image compressive sensing (CS), as they can effectively leverage the non-local self-similarity prior of natural images. However, most such methods are patch-based, leading to restored images with ringing artifacts due to naive patch aggregation. Using either approach alone usually limits performance and generalizability in image restoration tasks. In this paper, we propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors, namely external and internal, deep and shallow, and local and non-local priors. We then propose a novel hybrid plug-and-play (H-PnP) framework based on the LRD model for image CS. To make the optimization tractable, a simple yet effective algorithm is proposed to solve the resulting H-PnP based image CS problem. Extensive experimental results demonstrate that the proposed H-PnP algorithm significantly outperforms state-of-the-art techniques for image CS recovery, such as SCSNet and WNNM.


1 Introduction

Compressive sensing (CS) has attracted considerable interest from researchers in the signal and image processing communities [1, 2]. The most attractive aspect of CS is that sampling and compression are conducted simultaneously, with almost all of the computational cost shifted to the decoder, which leads to a very low-complexity encoder [3]. Due to these unique advantages, CS has been widely used in many important applications, such as magnetic resonance imaging [4] and the single-pixel camera [5].

Figure 1: CS results produced by our proposed H-PnP and state-of-the-art algorithms, when the CS ratio is 10%. (a) The original image; (b) WNNM [6]; (c) ReconNet [7]; (d) ISTA-Net [3]; (e) SCSNet [8]; (f) the restored image by our proposed H-PnP, where the ringing artifacts have been eliminated significantly and sharp details are recovered.

Image CS methods can be classified, in general, into two categories: shallow model-based methods [9, 10, 11, 12, 13, 14] and deep learning based methods [7, 3, 8, 15, 16, 17]. One of the classic shallow approaches is based on the image sparsity model, assuming that each local image patch can be encoded as a linear combination of a few basis elements [18]. Recent works exploit the image non-local self-similarity (NSS) prior [19] by clustering similar patches into groups and modeling them via structural sparsity [11, 19, 13] or low-rankness [6, 14, 20]. However, most of the shallow methods are patch-based, which inevitably leads to recovered images with ringing or blocky artifacts due to naive patch aggregation (see an example in Fig. 1 (b)).

On the other hand, many recent works applied various deep neural networks to image restoration tasks [21, 7, 3, 8, 15, 16, 17, 22]. Such approaches have demonstrated great potential to learn image properties from a training dataset in an end-to-end manner. Popular neural network structures, such as convolutional neural networks (CNNs) [21] and recurrent neural networks (RNNs) [23], have been applied to image restoration tasks, achieving state-of-the-art results. Since most deep algorithms are supervised, in applications such as remote sensing or biomedical imaging a model trained on a standard image corpus often fails when applied to these specific modalities. Furthermore, most existing deep methods focus on exploiting local image properties (due to the limited receptive field) while largely ignoring NSS, which limits their performance in many image restoration applications.

Methods | Non-local | Shallow Model | Deep Model | Internal Prior | External Prior
TV [10]
GSR [11]
GMM [24]
WNNM [6]
ReconNet [7]
ISTA-Net [3]
NLRN [25]
SCSNet [8]
Proposed
Table 1: Comparison of the key attributes between the proposed H-PnP and other representative methods for image restoration.

Bearing the above concerns in mind, we propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors, namely external and internal, deep and shallow, and local and non-local priors. Based on the LRD model, we propose a novel hybrid plug-and-play (H-PnP) framework for highly effective image CS. To the best of our knowledge, this is the first work to jointly exploit both NSS and deep priors under a unified framework for image restoration. Table 1 compares the key attributes of the proposed LRD-based H-PnP scheme and representative existing reconstruction algorithms.

The main contributions of this paper are summarized as follows:

  • The proposed LRD model jointly exploits the triply complementary low-rank and deep denoiser priors to take advantage of NSS, scalable model richness, and good generalizability. In practice, the LRD-based method significantly improves the visual quality of CS-reconstructed images (see an example in Fig. 1 (f)).

  • We propose the H-PnP framework for image CS based on the novel LRD model. To make the optimization tractable, we develop a simple yet effective algorithm based on alternating minimization.

  • Extensive experimental results demonstrate that the proposed H-PnP based image CS algorithm achieves superior performance compared to several popular and state-of-the-art image CS methods.

2 Related Works

In this section, we give a brief introduction to CS, the deep PnP model, and low-rank image modeling.

Compressive Sensing: The goal of image CS is to reconstruct a high-quality image from its undersampled (below the Nyquist sampling rate [2]) measurements obtained by

$\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{n}$,  (1)

where $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$ ($M \ll N$) denotes the random projection matrix, $\mathbf{x} \in \mathbb{R}^{N}$ is the underlying image, and $\mathbf{n}$ is the additive noise or measurement error. Image CS is an ill-posed inverse problem, as the measurement $\mathbf{y}$ has a much lower dimension than the underlying image. Therefore, an effective prior is key to a successful image CS algorithm [10, 8, 11].
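To make the measurement model in Eq. (1) concrete, the following sketch simulates block-based CS measurements with a Gaussian random projection, mirroring the experimental setup in Section 5; the function names, the row orthonormalization, and the per-block processing are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def gaussian_sensing_matrix(block_size, ratio, seed=0):
    """Random Gaussian projection with M = round(ratio * N) orthonormalized rows."""
    n = block_size * block_size
    m = max(1, int(round(ratio * n)))
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # q: (n, m), orthonormal columns
    return q.T                                        # Phi: (m, n), orthonormal rows

def simulate_block_cs(image, phi, block_size, noise_std=0.0, seed=0):
    """Apply y_b = Phi x_b + n (Eq. (1)) to every non-overlapping block of a grayscale image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    measurements = []
    for i in range(0, h - block_size + 1, block_size):
        for j in range(0, w - block_size + 1, block_size):
            x_b = image[i:i + block_size, j:j + block_size].reshape(-1)
            measurements.append(phi @ x_b + noise_std * rng.standard_normal(phi.shape[0]))
    return np.stack(measurements)  # one measurement vector per block
```

In block-based CS, the same small sensing matrix is typically reused for every block, which keeps the storage and encoding cost modest.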

Deep Plug-and-Play Methods: Recent works on the plug-and-play (PnP) framework [26, 27, 21] allow an effective image denoiser to be applied to general inverse problems, such as image deblurring [21], image inpainting [28], and computational imaging [29]. By decoupling the problem-specific sensing modality (i.e., $\boldsymbol{\Phi}$ for image CS) from the general image prior, PnP provides a flexible approach to generalize denoising algorithms to more sophisticated applications. Very recent works [21, 29, 28, 27] applied state-of-the-art deep denoisers in PnP by solving the following maximum a posteriori (MAP) problem:

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} f(\mathbf{x}) + \lambda\, g(\mathbf{x})$,  (2)

where $f(\mathbf{x})$ denotes the fidelity term of the inverse problem (i.e., $\frac{1}{2}\|\mathbf{y}-\boldsymbol{\Phi}\mathbf{x}\|_2^2$ for image CS), $g(\mathbf{x})$ denotes the prior based on a certain deep denoiser [21, 22, 27], and $\lambda$ is a regularization parameter. By applying highly effective deep priors, deep PnP approaches have achieved superior results in many image processing applications [21, 29, 28].

Low-Rank Image Modeling: Besides deep image priors, other image properties such as NSS [19], i.e., the tendency of image patches to be similar to other non-local structures within the same image, have been widely utilized for image restoration [11, 19, 6, 20]. Popular NSS-based methods group similar patches and exploit the patch correlation within each group. Specifically, overlapping patches $\{\mathbf{x}_i\}$ are extracted from the image $\mathbf{x}$. Taking each $\mathbf{x}_i$ as the reference patch, its $m$ most similar patches are selected, vectorized, and stacked as the columns of a data matrix $\mathbf{X}_i$.

There are different ways to process the constructed data matrices, among which low-rank (LR) modeling has demonstrated superior performance in many image restoration applications [6, 11, 14, 20]. Compared to PnP approaches based on deep priors [21, 29, 28, 27], LR-based methods are unsupervised but typically limited in model flexibility. Moreover, such methods inevitably produce ringing artifacts due to the aggregation of overlapping patches.
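To illustrate the patch-grouping step, the sketch below builds one data matrix by a k-nearest-neighbor search inside a local window; the interface is illustrative, while the patch size, group size, and window radius follow the settings reported in Section 5.1.

```python
import numpy as np

def group_similar_patches(image, ref_pos, patch_size=7, num_similar=60, search_radius=10):
    """Build a group matrix X_i whose columns are the patches most similar
    (in Euclidean distance) to the reference patch at ref_pos, searched
    within a local window around the reference position."""
    h, w = image.shape
    ri, rj = ref_pos
    ref = image[ri:ri + patch_size, rj:rj + patch_size].reshape(-1)
    candidates, positions = [], []
    for i in range(max(0, ri - search_radius), min(h - patch_size, ri + search_radius) + 1):
        for j in range(max(0, rj - search_radius), min(w - patch_size, rj + search_radius) + 1):
            candidates.append(image[i:i + patch_size, j:j + patch_size].reshape(-1))
            positions.append((i, j))
    candidates = np.stack(candidates)                # (num_candidates, d*d)
    dists = np.sum((candidates - ref) ** 2, axis=1)  # squared distances to the reference
    order = np.argsort(dists)[:num_similar]          # k nearest neighbors
    X_i = candidates[order].T                        # columns are vectorized patches
    return X_i, [positions[k] for k in order]
```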

3 Proposed Method

As mentioned above, NSS-based methods and deep learning based methods have their respective merits and drawbacks. In this section, we propose a general hybrid PnP (H-PnP) framework built on a pair of triply complementary priors, dubbed the joint low-rank and deep (LRD) prior.

3.1 H-PnP Framework

We now propose a novel H-PnP framework for image CS that incorporates the LRD prior by solving the following optimization problem:

$(\hat{\mathbf{x}}, \{\hat{\mathbf{L}}_i\}) = \arg\min_{\mathbf{x}, \{\mathbf{L}_i\}} \frac{1}{2}\|\mathbf{y}-\boldsymbol{\Phi}\mathbf{x}\|_2^2 + \lambda\, g(\mathbf{x}) + \sum_i \Big( \frac{1}{2}\|\mathbf{X}_i - \mathbf{L}_i\|_F^2 + \tau\, \phi(\mathbf{L}_i) \Big)$,  (3)

Similar to the deep PnP problem in Eq. (2), the deep prior $g(\mathbf{x})$ is applied in the proposed H-PnP scheme (i.e., Eq. (3)). $\|\cdot\|_F$ denotes the Frobenius norm, and $\phi(\cdot)$ is the low-rank regularizer with a non-negative weight $\tau$. The rank penalties are applied to exploit the image self-similarity. $\mathbf{X}_i$ denotes the matrix formed by the set of similar patches for each reference patch $\mathbf{x}_i$: the selected patches are vectorized and form the columns of $\mathbf{X}_i$, which is approximated by a corresponding low-rank matrix $\mathbf{L}_i$.

The proposed H-PnP formulation incorporates the LRD prior, which assumes that the underlying image $\mathbf{x}$ satisfies both the LR and the deep prior. Compared to the traditional deep PnP problem of Eq. (2), the proposed H-PnP scheme integrates a pair of triply complementary priors in one unified optimization problem.
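Since the deep prior $g(\mathbf{x})$ has no closed form (it is defined implicitly through a denoiser), a practical way to monitor Eq. (3) is to track only its explicit terms. The sketch below does that, using a weighted nuclear norm as the low-rank regularizer $\phi(\cdot)$ as chosen in Section 4.1; `phi_op` and `weights_fn` are assumed interfaces, not part of the released code.

```python
import numpy as np

def explicit_hpnp_cost(y, phi_op, x, groups, lowrank_mats, tau, weights_fn):
    """Evaluate the explicit terms of Eq. (3): data fidelity, Frobenius coupling,
    and weighted nuclear norm of each L_i. The deep prior g(x) is omitted since
    it has no closed form. phi_op(x) applies the sensing operator; weights_fn(s)
    returns per-singular-value weights."""
    fidelity = 0.5 * np.sum((y - phi_op(x)) ** 2)
    coupling, lowrank = 0.0, 0.0
    for X_i, L_i in zip(groups, lowrank_mats):
        coupling += 0.5 * np.sum((X_i - L_i) ** 2)
        s = np.linalg.svd(L_i, compute_uv=False)
        lowrank += tau * np.sum(weights_fn(s) * s)   # weighted nuclear norm of L_i
    return fidelity + coupling + lowrank
```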

4 Optimization for the Proposed Model

In this section, we present a highly effective image CS algorithm based on the proposed H-PnP model using alternating minimization. Eq. (3) is a large-scale non-convex optimization problem. To make the optimization tractable, we adopt a simple alternating minimization strategy, i.e., the algorithm alternates between solving Eq. (3) for $\{\mathbf{L}_i\}$ and for $\mathbf{x}$, which correspond to the low-rank approximation and the image update sub-problems, respectively.

4.1 Low-Rank Approximation

For fixed $\mathbf{x}$, we solve Eq. (3) for each $\mathbf{L}_i$ by minimizing

$\hat{\mathbf{L}}_i = \arg\min_{\mathbf{L}_i} \frac{1}{2}\|\mathbf{X}_i - \mathbf{L}_i\|_F^2 + \tau\, \phi(\mathbf{L}_i)$.  (4)

In this work, we set the rank penalty $\phi(\cdot)$ to be the weighted nuclear norm, as the corresponding WNNM method [6] has demonstrated superior performance in image restoration among classical methods. Eq. (4) then admits a closed-form solution obtained by applying the singular value decomposition (SVD) of $\mathbf{X}_i$ and shrinking its singular values with adaptive weights; detailed steps are given in [6].
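For reference, a minimal sketch of the weighted singular-value thresholding that such a WNNM-style update reduces to is given below; the simple weight rule (weights inversely proportional to the singular values) is a common simplification of [6], not the authors' exact procedure.

```python
import numpy as np

def wnnm_lowrank_approx(X_i, tau, eps=1e-8):
    """Approximate solution of Eq. (4) with a weighted nuclear norm penalty:
    SVD of the group matrix, then soft-thresholding of the singular values with
    weights inversely proportional to their magnitude."""
    U, s, Vt = np.linalg.svd(X_i, full_matrices=False)
    weights = tau / (s + eps)                 # simplified weight rule, cf. WNNM [6]
    s_shrunk = np.maximum(s - weights, 0.0)   # larger singular values are shrunk less
    return (U * s_shrunk) @ Vt
```

Because large singular values receive small weights, the dominant structures of each group are preserved while noise-dominated components are suppressed.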

4.2 Image Update

For fixed low-rank matrices $\{\hat{\mathbf{L}}_i\}$, the image $\mathbf{x}$ can be updated by solving the following problem,

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \frac{1}{2}\|\mathbf{y}-\boldsymbol{\Phi}\mathbf{x}\|_2^2 + \lambda\, g(\mathbf{x}) + \frac{1}{2}\sum_i \|\mathbf{X}_i - \hat{\mathbf{L}}_i\|_F^2$.  (5)

One can observe that it is quite difficult to solve Eq. (5) directly. In order to facilitate the optimization, we adopt the alternating direction method of multipliers (ADMM) algorithm [30] to solve for $\mathbf{x}$. Specifically, we introduce an auxiliary variable $\mathbf{z}$ with the constraint $\mathbf{z} = \mathbf{x}$; then Eq. (5) can be rewritten as the following constrained problem,

$\min_{\mathbf{x}, \mathbf{z}} \frac{1}{2}\|\mathbf{y}-\boldsymbol{\Phi}\mathbf{x}\|_2^2 + \frac{1}{2}\sum_i \|\mathbf{X}_i - \hat{\mathbf{L}}_i\|_F^2 + \lambda\, g(\mathbf{z}) \quad \text{s.t.} \quad \mathbf{z} = \mathbf{x}$.  (6)

We then invoke the ADMM algorithm by iterating the following variable updates, Eqs. (7) to (9),

$\mathbf{x}^{t+1} = \arg\min_{\mathbf{x}} \frac{1}{2}\|\mathbf{y}-\boldsymbol{\Phi}\mathbf{x}\|_2^2 + \frac{1}{2}\sum_i \|\mathbf{X}_i - \hat{\mathbf{L}}_i\|_F^2 + \frac{\mu}{2}\|\mathbf{x} - \mathbf{z}^{t} + \mathbf{u}^{t}\|_2^2$,  (7)

$\mathbf{z}^{t+1} = \arg\min_{\mathbf{z}} \lambda\, g(\mathbf{z}) + \frac{\mu}{2}\|\mathbf{x}^{t+1} - \mathbf{z} + \mathbf{u}^{t}\|_2^2$,  (8)

$\mathbf{u}^{t+1} = \mathbf{u}^{t} + (\mathbf{x}^{t+1} - \mathbf{z}^{t+1})$,  (9)

where $\mu$ is a balancing factor and $\mathbf{u}$ denotes the scaled Lagrange multiplier. One can observe that the sub-problems Eq. (7) and Eq. (8), for updating $\mathbf{x}$ and $\mathbf{z}$ respectively, have efficient solutions. We introduce the corresponding details below.

4.2.1 x Sub-problem

Given the obtained $\{\hat{\mathbf{L}}_i\}$, $\mathbf{z}^{t}$ and $\mathbf{u}^{t}$, the $\mathbf{x}$ sub-problem in Eq. (7) is essentially the minimization of a strictly convex quadratic function. However, in image CS $\boldsymbol{\Phi}$ is a random projection matrix without a specific structure, so directly computing the required matrix inversion is expensive. To avoid this issue, a gradient descent step [31] is applied, i.e.,

$\mathbf{x}^{t+1} = \mathbf{x}^{t} - \eta\, \mathbf{q}$,  (10)

where $\eta$ represents the step size and $\mathbf{q}$ is the gradient of the objective in Eq. (7), which can be calculated as

$\mathbf{q} = \boldsymbol{\Phi}^{\top}\boldsymbol{\Phi}\mathbf{x}^{t} - \boldsymbol{\Phi}^{\top}\mathbf{y} + \sum_i \tilde{\mathbf{R}}_i^{\top}\big(\tilde{\mathbf{R}}_i\mathbf{x}^{t} - \hat{\mathbf{L}}_i\big) + \mu\big(\mathbf{x}^{t} - \mathbf{z}^{t} + \mathbf{u}^{t}\big)$,  (11)

where $\tilde{\mathbf{R}}_i$ denotes the operator extracting the patch group $\mathbf{X}_i = \tilde{\mathbf{R}}_i\mathbf{x}$, and both $\boldsymbol{\Phi}^{\top}\boldsymbol{\Phi}$ and $\boldsymbol{\Phi}^{\top}\mathbf{y}$ are pre-computed and fixed during the iterations.
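The resulting gradient step might look like the following sketch, in which the image is treated as a flattened vector, $\boldsymbol{\Phi}^{\top}\boldsymbol{\Phi}$ and $\boldsymbol{\Phi}^{\top}\mathbf{y}$ are pre-computed, and the aggregated low-rank residual is assumed to be supplied by the patch machinery.

```python
import numpy as np

def x_gradient_step(x, PhiTPhi, PhiTy, groups_residual, z, u, mu, eta):
    """One gradient-descent step for the x sub-problem, Eqs. (10)-(11).
    groups_residual is sum_i R_i^T (R_i x - L_i), already aggregated back
    into the (vectorized) image domain."""
    q = PhiTPhi @ x - PhiTy          # gradient of the data-fidelity term
    q = q + groups_residual          # gradient of the low-rank coupling term
    q = q + mu * (x - z + u)         # gradient of the ADMM penalty term
    return x - eta * q               # Eq. (10)
```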

0:  Input: measurement $\mathbf{y}$ and the sensing matrix $\boldsymbol{\Phi}$.
1:  Set the parameters: patch size $d$, number of similar patches $m$, search-window size, regularization weights $\lambda$ and $\tau$, balancing factor $\mu$, step size $\eta$, maximum iteration number $T$, and stopping tolerance $\epsilon$.
2:  Initialization: estimate an initial image $\hat{\mathbf{x}}^{0}$ using a standard CS method (e.g., the DCT/MH [32] based reconstruction method).
3:  for $t = 0$ to $T-1$ do
4:     Divide $\hat{\mathbf{x}}^{t}$ into a set of overlapping patches of size $d \times d$;
5:     for each reference patch $\mathbf{x}_i$ in $\hat{\mathbf{x}}^{t}$ do
6:        Find its $m$ most similar patches to form a group matrix $\mathbf{X}_i$;
7:        Compute the singular value decomposition (SVD) of $\mathbf{X}_i$;
8:        Update $\hat{\mathbf{L}}_i$ by invoking WNNM [6];
9:     end for
10:     ADMM:
11:     Initialization: $\mathbf{z}^{t} = \hat{\mathbf{x}}^{t}$; $\mathbf{u}^{t} = \mathbf{0}$;
12:     Update $\hat{\mathbf{x}}^{t+1}$ by computing Eqs. (10)-(11);
13:     Update $\mathbf{z}^{t+1}$ by computing Eq. (13);
14:     Update $\mathbf{u}^{t+1}$ by computing Eq. (9);
15:  end for
16:  Output: the final restored image $\hat{\mathbf{x}} = \hat{\mathbf{x}}^{T}$.
Algorithm 1: The proposed H-PnP algorithm for image CS.

4.2.2 z Sub-problem

Given $\mathbf{x}^{t+1}$, the $\mathbf{z}$ sub-problem in Eq. (8) can be rewritten as

$\mathbf{z}^{t+1} = \arg\min_{\mathbf{z}} \frac{1}{2\sigma^2}\|\mathbf{r} - \mathbf{z}\|_2^2 + g(\mathbf{z})$,  (12)

where $\mathbf{r} = \mathbf{x}^{t+1} + \mathbf{u}^{t}$. From a Bayesian perspective, Eq. (12) is a Gaussian denoising problem for $\mathbf{z}$ solved as a MAP problem, with the corresponding noise standard deviation $\sigma = \sqrt{\lambda/\mu}$ [21]. Accordingly, we denote this denoising step as

$\mathbf{z}^{t+1} = \mathcal{D}_{\sigma}(\mathbf{r})$,  (13)

where $\mathcal{D}_{\sigma}(\cdot)$ denotes a Gaussian denoiser based on the specific deep image prior $g(\cdot)$. In general, any deep Gaussian image denoiser can be used as $\mathcal{D}_{\sigma}(\cdot)$ in Eq. (13). In this paper, we apply the fast and flexible denoising CNN (FFDNet) [22] for Eq. (13), which is an efficient yet effective CNN-based denoiser. Furthermore, FFDNet is capable of dealing with different noise standard deviations via its tunable noise-level input.
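In code, the z-update is a single denoiser call at noise level $\sqrt{\lambda/\mu}$; the sketch below assumes a generic `denoiser(image, sigma)` callable (e.g., a wrapper around a pretrained FFDNet model, whose loading details are omitted here).

```python
import numpy as np

def z_update(x_new, u, lam, mu, denoiser):
    """Eqs. (12)-(13): denoise r = x + u at noise level sigma = sqrt(lam / mu).
    Any Gaussian denoiser taking (image, sigma) can play the role of the deep
    prior in the plug-and-play sense."""
    r = x_new + u
    sigma = np.sqrt(lam / mu)
    return denoiser(r, sigma)
```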

With efficient solutions to each sub-problem in hand, the whole algorithm is both efficient and effective. The complete description of the proposed H-PnP algorithm for image CS is summarized in Algorithm 1. The iterative algorithm is terminated when the relative change between successive estimates $\hat{\mathbf{x}}^{t}$ and $\hat{\mathbf{x}}^{t+1}$ falls below a small constant $\epsilon$.
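Putting the pieces together, one outer iteration of Algorithm 1 can be sketched as follows; the helper functions are the illustrative interfaces introduced above, and the image is again treated as a flattened vector.

```python
import numpy as np

def hpnp_cs(y, PhiTPhi, PhiTy, x_init, denoiser, build_groups, wnnm_step,
            aggregate_residual, lam, tau, mu, eta, max_iters=60, tol=1e-4):
    """Sketch of Algorithm 1: alternate WNNM low-rank approximation on patch
    groups with an ADMM-based image update that plugs in a deep denoiser."""
    x = x_init.copy()
    for _ in range(max_iters):
        x_prev = x.copy()
        # Low-rank approximation (Section 4.1): group patches, shrink each group.
        groups, positions = build_groups(x)
        lowrank = [wnnm_step(X_i, tau) for X_i in groups]        # Eq. (4)
        # Image update (Section 4.2) via one ADMM pass, Eqs. (7)-(9).
        z, u = x.copy(), np.zeros_like(x)
        res = aggregate_residual(x, lowrank, positions)          # sum_i R_i^T (R_i x - L_i)
        q = PhiTPhi @ x - PhiTy + res + mu * (x - z + u)         # Eq. (11)
        x = x - eta * q                                          # Eq. (10)
        z = denoiser(x + u, np.sqrt(lam / mu))                   # Eq. (13)
        u = u + (x - z)                                          # Eq. (9)
        # Stop when successive estimates barely change.
        if np.sum((x - x_prev) ** 2) / max(np.sum(x_prev ** 2), 1e-12) < tol:
            break
    return x
```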

5 Experimental Results

We conduct extensive experiments to evaluate the proposed H-PnP algorithm for image CS. Following the setups in previous works [8, 11, 10, 32], we simulate the image CS measurements at the block level using a Gaussian random projection matrix for each test image. The image CS algorithms are then applied to reconstruct each image from its simulated CS measurements. We use the peak signal-to-noise ratio (PSNR) to evaluate the reconstructed images.
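PSNR is computed in the standard way; the short helper below assumes 8-bit images (peak value 255).

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and its reconstruction."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```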

5.1 Implementation and Parameters

We apply the pre-trained FFDNet denoiser [22] as the deep prior in the proposed H-PnP based image CS algorithm. The main parameters of the proposed method are set as follows. The size of each patch is set to 7 × 7, the number of patches grouped by the KNN operator is $m$ = 60, and the size of the KNN search window is set to 20 × 20. We set the maximum number of iterations to $T$ = 60. The remaining coefficients are tuned for each sampling ratio in image CS (see the tuning procedure as well as the values for each sampling ratio in our shared code). The source code of the proposed H-PnP method for image CS is available at: https://drive.google.com/open?id=1HcgKtj0r5SVpp6yKmRVyI9b8TdXjcT83.
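For convenience, the fixed settings above could be gathered into a single configuration; the dictionary keys are our own naming, and the ratio-dependent coefficients are left unset because their tuned values are provided only in the shared code.

```python
# Fixed settings reported in the paper; ratio-dependent coefficients (e.g. the
# regularization weights and step size) are tuned per sampling ratio and
# therefore left unset in this sketch.
HPNP_CONFIG = {
    "patch_size": 7,          # 7x7 patches
    "num_similar": 60,        # patches per group (KNN)
    "search_window": 20,      # 20x20 KNN search window
    "max_iters": 60,          # maximum outer iterations T
    "ratio_dependent": {},    # e.g. {0.1: {...}, 0.2: {...}} taken from the shared code
}
```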

Methods 0.1 0.2 0.3 0.4 0.5 Average
TV [10] 24.93 27.31 29.15 30.88 32.58 28.97
Rcos [9] 26.03 28.68 30.62 32.33 34.03 30.34
GSR [11] 25.83 29.28 31.82 34.02 36.11 31.41
JASR [13] 26.19 29.46 31.63 33.52 35.33 31.22
TNNM [14] 26.52 29.93 32.31 34.36 36.31 31.88
WNNM [6] 26.60 29.84 32.28 34.43 36.54 31.94
Proposed 27.41 30.49 32.79 34.86 36.82 32.47
Table 2: Average PSNR (dB) comparison of TV [10], Rcos [9], GSR [11], JASR [13], TNNM [14], WNNM [6] and the proposed H-PnP on BSD68 dataset [33].
Figure 2: Visual quality comparisons of image CS on image 253027 from the BSD68 dataset [33] in the case of sampling ratio = 0.1.

5.2 Comparison with Classical Image CS Methods

We first compare the proposed H-PnP image CS algorithm to popular and state-of-the-art classic methods, i.e., TV [10], Rcos [9], GSR [11], JASR [13], TNNM [14] and WNNM [6]. Amongst them, WNNM is a well-known non-local method that provides state-of-the-art results for image denoising. We extend the WNNM denoising algorithm [6] to image CS by applying ADMM [30], similar to what we described in Section 4.2. The extended WNNM for image CS achieves the best results among all classic competing methods. We use the publicly available code of the other competing methods from their official websites with the default parameter settings for all experiments.

We simulate the image CS measurements for all test images using five different sampling ratios, i.e., 0.1, 0.2, 0.3, 0.4 and 0.5. Table 2 lists the average PSNRs over all test images from BSD68 (68 images) [33] obtained by our proposed H-PnP based image CS algorithm as well as the six classic competing methods. It is clear that our proposed H-PnP consistently outperforms all competing methods across all sampling ratios. On average, our proposed H-PnP achieves a PSNR gain of 3.50dB over TV, 2.14dB over Rcos, 1.06dB over GSR, 1.25dB over JASR, 0.59dB over TNNM and 0.54dB over WNNM. The visual quality comparisons on image 253027 in the case of sampling ratio 0.1 are shown in Fig. 2. We have magnified a sub-region of each image to compare the visual results of the competing methods. It can be seen that TV cannot produce a visually pleasant result. The Rcos, GSR, JASR, TNNM and WNNM methods are all prone to producing undesirable ringing artifacts. By contrast, the proposed H-PnP algorithm not only preserves fine image details, but also significantly suppresses visual artifacts.

5.3 Comparison with Deep Image CS Methods

We now compare our proposed H-PnP with deep learning based methods, including SDA [16], ReconNet [7], ISTA-Net [3], ISTA-Net+ [3], CSNet [17] and SCSNet [8]. Note that SCSNet exploits a scalable CNN that delivers state-of-the-art image CS performance. We follow [3] and use the images of BSD68 [33] as the test images. The average PSNR results of our proposed H-PnP as well as the different deep learning based methods on four sampling ratios are shown in Table 3, where the results of SDA, ReconNet, ISTA-Net and ISTA-Net+ are taken from [3]. One can observe that our proposed H-PnP significantly outperforms all deep learning based methods. In particular, the proposed H-PnP achieves a 0.76dB gain in average PSNR over the SCSNet method. The visual comparisons on image 119082 of the BSD68 dataset are shown in Fig. 3. It can be observed that some visual artifacts are still visible in the results of all competing deep learning based methods. By contrast, our proposed H-PnP not only removes undesirable artifacts across the whole image, but also preserves large-scale sharp edges and small-scale fine details.

Methods 0.1 0.3 0.4 0.5 Average
SDA [16] 23.12 26.38 27.41 28.35 26.32
ReconNet [7] 24.15 27.53 29.08 29.86 27.66
ISTA-Net [3] 25.02 29.93 31.85 33.60 30.10
ISTA-Net+ [3] 25.33 30.34 32.21 34.01 30.47
CSNet [17] 27.10 31.45 33.46 34.90 31.73
SCSNet [8] 27.28 31.88 33.87 35.79 32.21
Proposed 27.41 32.79 34.86 36.82 32.97
Table 3: Average PSNR (dB) comparison of different deep learning based image CS reconstruction methods on BSD68 dataset [33].
Figure 3: Visual quality comparisons of image CS on image 119082 from BSD68 [33] in the case of sampling ratio = 0.1.

6 Conclusion

This paper proposed a joint low-rank and deep (LRD) image model, which comprises a pair of triply complementary priors, namely external and internal, deep and shallow, and local and non-local priors. We then proposed an H-PnP framework based on the LRD model to solve the image CS problem, along with an alternating minimization algorithm. Experimental results have demonstrated that the proposed H-PnP based image CS algorithm significantly outperforms many state-of-the-art image CS methods. Future work includes the theoretical analysis of the proposed model and applying it to other image restoration tasks.

References

  • [1] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on information theory, vol. 52, no. 2, pp. 489–509, 2006.
  • [2] D. L. Donoho, “Compressed sensing,” IEEE Transactions on information theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [3] J. Zhang and B. Ghanem, “ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1828–1837.
  • [4] B. Wen, S. Ravishankar, L. Pfister, and Y. Bresler, “Transform learning for magnetic resonance image reconstruction: From model-based learning to building neural networks,” IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 41–53, 2020.
  • [5] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE signal processing magazine, vol. 25, no. 2, pp. 83–91, 2008.
  • [6] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, “Weighted nuclear norm minimization and its applications to low level vision,” International journal of computer vision, vol. 121, no. 2, pp. 183–208, 2017.
  • [7] K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok, “Reconnet: Non-iterative reconstruction of images from compressively sensed measurements,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 449–458.
  • [8] W. Shi, F. Jiang, S. Liu, and D. Zhao, “Scalable convolutional neural network for image compressed sensing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 12290–12299.
  • [9] J. Zhang, D. Zhao, C. Zhao, R. Xiong, S. Ma, and W. Gao, “Image compressive sensing recovery via collaborative sparsity,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 2, no. 3, pp. 380–391, 2012.
  • [10] C. Li, W. Yin, and Y. Zhang, “User's guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms,” CAAM Report, vol. 20, no. 46-47, p. 4, 2009.
  • [11] J. Zhang, D. Zhao, and W. Gao, “Group-based sparse representation for image restoration,” IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3336–3351, 2014.
  • [12] Z. Zha, X. Yuan, B. Wen, J. Zhou, J. Zhang, and C. Zhu, “A benchmark for sparse coding: When group sparsity meets rank minimization,” IEEE Transactions on Image Processing, vol. 29, pp. 5094–5109, 2020.
  • [13] N. Eslahi and A. Aghagolzadeh, “Compressive sensing image restoration using adaptive curvelet thresholding and nonlocal sparse regularization,” IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3126–3140, 2016.
  • [14] T. Geng, G. Sun, Y. Xu, and J. He, “Truncated nuclear norm minimization based group sparse representation for image restoration,” SIAM Journal on Imaging Sciences, vol. 11, no. 3, pp. 1878–1897, 2018.
  • [15] X. Yuan and Y. Pu, “Parallel lensless compressive imaging via deep convolutional neural networks,” Optics Express, vol. 26, no. 2, pp. 1962–1977, 2018.
  • [16] A. Mousavi, A. B. Patel, and R. G. Baraniuk, “A deep learning approach to structured signal recovery,” in 2015 53rd annual allerton conference on communication, control, and computing (Allerton). IEEE, 2015, pp. 1336–1343.
  • [17] W. Shi, F. Jiang, S. Liu, and D. Zhao, “Image compressed sensing using convolutional neural network,” IEEE Transactions on Image Processing, vol. 29, pp. 375–388, 2020.
  • [18] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image processing, vol. 15, no. 12, pp. 3736–3745, 2006.
  • [19] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, “Non-local sparse models for image restoration,” in 2009 IEEE 12th international conference on computer vision. IEEE, 2009, pp. 2272–2279.
  • [20] Z. Zha, X. Yuan, B. Wen, J. Zhou, J. Zhang, and C. Zhu, “From rank estimation to rank approximation: Rank residual constraint for image restoration,” IEEE Transactions on Image Processing, vol. 29, pp. 3254–3269, 2020.
  • [21] K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep cnn denoiser prior for image restoration,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 3929–3938.
  • [22] K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn-based image denoising,” IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4608–4622, 2018.
  • [23] J. Zhang, J. Pan, J. Ren, Y. Song, L. Bao, R. W. Lau, and M. H. Yang, “Dynamic scene deblurring using spatially variant recurrent neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2521–2529.
  • [24] J. Yang, X. Liao, X. Yuan, P. Llull, D. J. Brady, G. Sapiro, and L. Carin, “Compressive sensing by learning a Gaussian mixture model from measurements,” IEEE Transactions on Image Processing, vol. 24, no. 1, pp. 106–119, 2014.
  • [25] D. Liu, B. Wen, Y. Fan, C. Loy, and T. S. Huang, “Non-local recurrent network for image restoration,” in Advances in Neural Information Processing Systems, 2018, pp. 1673–1682.
  • [26] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in 2013 IEEE Global Conference on Signal and Information Processing. IEEE, 2013, pp. 945–948.
  • [27] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9446–9454.
  • [28] T. Tirer and R. Giryes, “Image restoration by iterative denoising and backward projections,” IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1220–1234, March 2019.
  • [29] A. M. Teodoro, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “A convergent image fusion algorithm using scene-adapted gaussian-mixture-based denoising,” IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 451–463, Jan 2019.
  • [30] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
  • [31] P. Deift and X. Zhou, “A steepest descent method for oscillatory riemann–hilbert problems. asymptotics for the mkdv equation,” Annals of Mathematics, vol. 137, no. 2, pp. 295–368, 1993.
  • [32] C. Chen, E. W. Tramel, and J. E. Fowler, “Compressed-sensing recovery of images and video using multihypothesis predictions,” in 2011 conference record of the forty fifth asilomar conference on signals, systems and computers (ASILOMAR). IEEE, 2011, pp. 1193–1198.
  • [33] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 33, no. 5, pp. 898–916, 2010.