1 Introduction
The developments in deep learning goodfellow2016deep in recent years have led to significant improvements in learning generative models kingma2013auto ; goodfellow2014generative ; radford2015unsupervised ; van2016conditional ; bojanowski2018optimizing ; arjovsky2017wasserstein ; berthelot2017began ; karras2017progressive . Methods like variational autoencoders (VAEs) kingma2013auto , generative adversarial networks (GANs) goodfellow2014generative and generative latent optimization (GLO) bojanowski2018optimizing have found success at modeling data distributions. However, for complex classes of images, such as human faces, while these methods can generate nice examples, their representation capabilities do not capture the full distribution. This phenomenon is sometimes referred to in the literature, especially in the context of GANs, as mode collapse arjovsky2017wasserstein ; karras2017progressive . Yet, as demonstrated in richardson2018gans , it is common to other recent learning approaches as well.

Another line of works that has gained a lot from the developments in deep learning is imaging inverse problems, where the goal is to recover an image $x$ from its degraded or compressed observations $y$ bertero1998introduction .
Most of these works have focused on training a convolutional neural network (CNN) to learn the inverse mapping from $y$ to $x$ for a specific observation model (e.g. super-resolution with a certain scale factor and bicubic anti-aliasing kernel dong2014learning ). Yet, recent works have suggested using neural networks to handle only the image prior, in a way that does not require exhaustive offline training for each different observation model. This can be done by plugging CNN denoisers zhang2017learning ; meinhardt2017learning ; rick2017one into iterative optimization schemes venkatakrishnan2013plug ; tirer2019image , by training a neural network from scratch for the imaging task directly on the test image (based on internal recurrence of information inside a single image) ZSSR ; ulyanov2017deep , or by using generative models bora2017compressed ; yeh2017semantic ; hand2018phase .

Methods that use generative models as priors can only handle images that belong to the class or classes on which the model was trained. However, the generative learning equips them with valuable semantic information that other strategies lack. For example, a method that is not based on a generative model cannot produce a perceptually pleasing image of a human face if the eyes are completely missing in an inpainting task yeh2017semantic . The main drawback in restoring complex images using generative models is the limited representation capabilities of the generators. Even when one searches over the range of a pre-trained generator for the image closest to the original $x$, a significant mismatch is to be expected bora2017compressed .
In this work, we propose a strategy to mitigate the limited representation capabilities of generators when solving inverse problems. The strategy has two components. The first is an internal learning phase at test time that makes the generator image-adaptive, and the second is an explicit back-projection step that enforces compliance of the restoration with the observations $y$. We empirically demonstrate the advantages of our proposed approach for image super-resolution and compressed sensing.
2 Related work
Our work is mostly related to the recent work bora2017compressed , which has suggested using pre-trained generative models for the compressive sensing (CS) task donoho2006compressed ; candes2004robust : reconstructing an unknown signal $x \in \mathbb{R}^n$ from observations $y \in \mathbb{R}^m$ of the form
$$y = Ax + e, \qquad (1)$$
where $A \in \mathbb{R}^{m \times n}$ is the measurement matrix, $e \in \mathbb{R}^m$ represents the noise, and the number of measurements is much smaller than the ambient dimension of the signal, i.e. $m \ll n$. Following the fact that in highly popular generative models, such as GANs, VAEs and GLOs, a generator $G(\cdot)$ learns a mapping from a low-dimensional space $\mathbb{R}^d$ to the signal space $\mathbb{R}^n$, Bora et al. bora2017compressed have proposed a method, termed CSGM, that estimates the signal as
$\hat{x} = G(\hat{z})$, where $\hat{z}$ is obtained by minimizing the non-convex cost function (the function is non-convex due to the non-convexity of $G(\cdot)$)

$$f(z) = \left\| y - A G(z) \right\|_2^2 \qquad (2)$$
using backpropagation and standard gradient-based optimizers.
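As a concrete illustration of this minimization, the following numpy sketch replaces the deep generator with a linear toy $G(z) = Wz$ (a hypothetical stand-in; all sizes are illustrative) and minimizes $f(z)$ by plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 8, 64, 32                            # toy sizes (hypothetical)

# Linear stand-in "generator" G(z) = W z; in CSGM, G is a deep network and
# the same minimization is carried out by backpropagation through G.
W = rng.standard_normal((n, d)) / np.sqrt(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix

x_true = W @ rng.standard_normal(d)            # a signal in the generator's range
y = A @ x_true                                 # noiseless observations, model (1)

def csgm(y, A, W, steps=2000):
    """Minimize f(z) = ||y - A G(z)||^2 over the latent z by gradient descent."""
    M = A @ W
    lr = 1.0 / np.linalg.norm(M, 2) ** 2       # safe step size (inverse Lipschitz)
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        z += lr * (M.T @ (y - M @ z))          # step along -grad f(z) / 2
    return z

z_hat = csgm(y, A, W)
x_hat = W @ z_hat                              # the CSGM estimate G(z_hat)
```

Since this toy signal lies inside the generator's range, the recovery is essentially exact; the point of the paper is that for real generators and real images it is not.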
For specific classes of images, such as handwritten digits and human faces, experiments have shown that this approach can reconstruct nice looking images with much fewer measurements than the classical sparsity-based Lasso method tibshirani1996regression . However, unlike Lasso, it has also been shown that CSGM cannot provide accurate recovery even when there is no noise and the number of observations is very large. This shortcoming is mainly due to the limited representation capabilities of the generative models (see Section 6.3 in bora2017compressed ), and is common to related recent works hand2018phase ; bora2018ambientgan ; shah2018solving .
Note that using specific structures of $A$, the model (1) covers different imaging inverse problems, making the CSGM method applicable to these problems as well. For example, it can be used for the denoising task when $A$ is the identity matrix $I_n$, the inpainting task when $A$ is a sampling matrix (i.e. a selection of $m$ rows of $I_n$), the deblurring task when $A$ is a blurring operator, and the super-resolution task when $A$ is a composite operator of blurring (i.e. anti-aliasing filtering) and downsampling.
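For instance, the inpainting operator can be built explicitly as a row-selection of the identity; the sketch below (with hypothetical toy sizes) applies the observation model (1) with such an $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 5                       # toy sizes (hypothetical)

x = rng.standard_normal(n)         # unknown signal
e = 0.01 * rng.standard_normal(m)  # noise

# Inpainting: A keeps m rows of the identity I_n (the observed pixels).
kept = np.sort(rng.choice(n, size=m, replace=False))
A = np.eye(n)[kept]

y = A @ x + e                      # the observation model (1)
```

Applying this $A$ is equivalent to indexing the kept entries, so in practice one never forms the matrix explicitly.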
Our image-adaptive approach is inspired by tirer2018super , which is itself influenced by ZSSR ; ulyanov2017deep . These works follow the idea of internal recurrence of information inside a single image, within and across scales glasner2009super ; huang2015single . However, while the two methods ZSSR ; ulyanov2017deep completely avoid an offline training phase and optimize the weights of a deep neural network only at test time, the work in tirer2018super incorporates external and internal learning by taking offline-trained CNN denoisers, fine-tuning them at test time, and then plugging them into a model-based optimization scheme tirer2019image . Note, though, that the internal learning phase in tirer2018super uses patches from $y$ as the ground truth for a denoising loss function, building on the assumption that $y$ directly includes patterns that recur also in $x$. Therefore, this method requires that $y$ is not very degraded, which makes it suitable perhaps only for the super-resolution task, similarly to ZSSR , which is also restricted to this problem. Note that the approach in ulyanov2017deep can be applied to different observation models, but each one requires a modified network architecture and training process. In contrast, in our work the test-time optimization is done on the same generator, using a general fidelity term (2) that is suitable for many observation models. Moreover, our generative prior has semantic knowledge that other priors lack, and our internal learning phase yields a significantly better restoration by mitigating the inability of $G(\cdot)$ to represent $x$.

Using back-projections (BP) in applications such as tomographic reconstruction and super-resolution is a rather old strategy hayner1984missing ; irani1991improving . In many recent sophisticated super-resolution algorithms it is often used as a post-processing step that moderately improves the results glasner2009super ; huang2015single ; ZSSR ; yang2010image . However, we have not encountered its usage in reconstructions based on generative priors. In this paper we show that this simple technique is highly effective in mitigating the limited representation capabilities of generators.
3 The proposed methods
In this work, our goal is to make the solutions of inverse problems using generative models more faithful to the observations $y$ and more accurate, despite the limited representation capabilities of the pre-trained generators. To this end, we examine two strategies.
3.1 "Hard" compliance to observations via back-projection
Denote by $\hat{x}$ an estimate of $x$, e.g. one obtained by the CSGM method bora2017compressed . Assuming that there is no noise, i.e. $e = 0$, a simple way to strictly enforce compliance of the restoration with the observations is back-projecting (BP) the estimate onto the affine subspace $\{ \tilde{x} : y = A\tilde{x} \}$:
$$\hat{x}_{BP} = \underset{\tilde{x}}{\arg\min} \; \left\| \tilde{x} - \hat{x} \right\|_2^2 \;\; \mathrm{s.t.} \;\; y = A\tilde{x}. \qquad (3)$$
Note that this problem has a closed-form solution
$$\hat{x}_{BP} = A^\dagger y + (I_n - A^\dagger A)\hat{x}, \qquad (4)$$
where $A^\dagger = A^T (A A^T)^{-1}$ is the pseudo-inverse of $A$ (assuming that $\mathrm{rank}(A) = m$, which is the common case, e.g. in super-resolution and compressed sensing tasks). In practical cases, where the problem dimensions are high, the matrix inversion in $A^\dagger$ can be avoided by using the conjugate gradient method, which only requires applying the operators $A$ and $A^T$ hestenes1952methods .
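A minimal numpy sketch of this BP step, solving $(AA^T)u = y - A\hat{x}$ by conjugate gradients and returning $\hat{x} + A^T u$, using only the operators $A$ and $A^T$ (the demo sizes and estimate are hypothetical):

```python
import numpy as np

def back_project(x_hat, y, A_op, At_op, m, iters=200, tol=1e-10):
    """BP step x_bp = x_hat + A^+ (y - A x_hat), with A^+ = A^T (A A^T)^{-1},
    computed by conjugate gradients on (A A^T) u = y - A x_hat so that only
    the operators A and A^T are ever applied (no explicit inversion)."""
    b = y - A_op(x_hat)
    u = np.zeros(m)
    r = b.copy()                     # residual of the m x m system (u = 0)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A_op(At_op(p))          # apply A A^T
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x_hat + At_op(u)

# Demo with a random full-row-rank A (hypothetical sizes):
rng = np.random.default_rng(1)
m, n = 20, 50
A = rng.standard_normal((m, n))
x_hat = rng.standard_normal(n)       # e.g. a generator-based estimate
y = rng.standard_normal(m)           # observations to comply with
x_bp = back_project(x_hat, y, lambda v: A @ v, lambda v: A.T @ v, m)
```

After the step, $A x_{BP} = y$ holds exactly (up to the CG tolerance), while the null-space component of the estimate is untouched.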
Let $P_A := A^\dagger A$ denote the orthogonal projection onto the row space of $A$, and $Q_A := I_n - A^\dagger A$ denote the projection onto its orthogonal complement. Substituting (1) into (4) gives
$$\hat{x}_{BP} = P_A x + A^\dagger e + Q_A \hat{x}, \qquad (5)$$
which shows that, up to the noise term, the reconstruction is consistent with $x$ on the row space of $A$, and takes from $\hat{x}$ only its projection onto the null space of $A$. Thus, for an estimate $\hat{x}$ obtained using a generative model, the BP technique essentially eliminates the component of the generator’s representation error that resides in the subspace spanned by the rows of $A$, but does not change the component in the null space of $A$. Still, from the (Euclidean) accuracy point of view, this strategy is very effective, as demonstrated in the experiments section.
As can be seen in (5), in the case of noisy observations, i.e. $e \neq 0$, a final denoising step might be required. In our experiments with white Gaussian noise $e$, off-the-shelf Gaussian denoisers sufficed, despite the fact that the noise is colored by $A^\dagger$.
3.2 "Soft" compliance to observations via image-adaptation
Our second strategy for handling the limited representation capabilities of the generators is making them image-adaptive (IA) by internal learning at test time. In detail, instead of recovering the latent signal as $\hat{x} = G(\hat{z})$, where $G(\cdot)$ is a pre-trained generator and $\hat{z}$ is a minimizer of (2), we propose to simultaneously optimize $z$ and the parameters of the generator, denoted by $\theta$, by minimizing the cost function
$$\underset{z, \theta}{\min} \; \left\| y - A G_\theta(z) \right\|_2^2. \qquad (6)$$
The optimization is done using backpropagation and standard gradient-based optimizers. The initial value of $\theta$ is the pre-trained weights, and the initial value of $z$ is $\hat{z}$, obtained by minimization with respect to $z$ alone, as done in CSGM. Then, we perform the joint minimization to obtain $\hat{z}_{IA}$ and $\hat{\theta}_{IA}$, and recover the signal as $\hat{x}_{IA} = G_{\hat{\theta}_{IA}}(\hat{z}_{IA})$.
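The two-stage procedure can be sketched with a linear toy generator standing in for a deep network (all sizes and step-size heuristics are hypothetical illustrations, not the settings used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 4, 20, 10                            # toy sizes (hypothetical)

W = rng.standard_normal((n, d)) / np.sqrt(n)   # "pretrained" toy generator G_W(z) = W z
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
x_true = rng.standard_normal(n)                # deliberately NOT in the generator's range
y = A @ x_true

def loss(W, z):
    r = y - A @ (W @ z)
    return float(r @ r)

# Stage 1 (CSGM): optimize z alone, generator frozen.
M = A @ W
z = 0.1 * rng.standard_normal(d)
lr_z = 1.0 / np.linalg.norm(M, 2) ** 2
for _ in range(1000):
    z += lr_z * (M.T @ (y - M @ z))
csgm_loss = loss(W, z)                         # stuck: x_true is outside range(W)

# Stage 2 (image adaptation): gently fine-tune W jointly with z.
sA = np.linalg.norm(A, 2) ** 2
for _ in range(10000):
    r = y - A @ (W @ z)
    g = A.T @ r                                      # shared factor of both gradients
    lr = 0.2 / (sA * (z @ z + np.sum(W * W) + 1.0))  # conservative step size
    W += lr * np.outer(g, z)                         # step along -d(loss)/dW (up to x2)
    z += lr * (W.T @ g)                              # step along -d(loss)/dz (up to x2)
ia_loss = loss(W, z)                           # adapted generator now fits y
```

Because $x$ lies outside the range of the pretrained toy generator, stage 1 plateaus at a nonzero fidelity loss, while the joint adaptation of stage 2 drives the fidelity term far lower, mirroring the behavior we observe with real GANs.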
The reasoning behind our approach is that while the current leading learning strategies cannot train a generator whose representation range covers every sample of a complex distribution, they do allow modifying it to better represent a single specific sample that agrees with the observations $y$. Yet, contrary to prior works that optimize the weights of neural networks only by internal learning ZSSR ; ulyanov2017deep , here we combine the information captured at test time with the valuable semantic knowledge obtained by the offline generative learning. To make sure that the information captured at test time does not come at the expense of offline information that is useful for the test image at hand, we start by optimizing $z$ alone, as mentioned above, and for the joint minimization we use small learning rates and early stopping, as described in the experiments section below.
Finally, we note that this strategy enforces only a "soft" compliance of the restoration with the observations $y$, because our gentle joint optimization, which prevents overriding the offline semantic information, may not completely eliminate the component of the generator’s representation error that resides in the subspace spanned by the rows of $A$, as BP does. On the other hand, intuitively, the strong prior (imposed by the offline training and by the generator’s structure) is expected to improve the restoration also in the null space of $A$ (unlike BP). Indeed, as shown below, by combining the two approaches, i.e. applying the IA phase and then BP on $\hat{x}_{IA}$, we obtain better results than by applying BP on $\hat{x}$ alone. This implies a decrease in the component of the reconstruction error that lies in the null space of $A$.
4 Experiments
In our experiments we use two recently proposed GAN models, which are known to generate very high quality samples of human faces. The first model is BEGAN berthelot2017began , trained on the CelebA dataset liu2015deep , which generates a $128 \times 128$ image from a uniform random vector $z \in [-1, 1]^{64}$. The second model is PGGAN karras2017progressive , trained on the CelebA-HQ dataset karras2017progressive , which generates a $1024 \times 1024$ image from a Gaussian random vector $z \in \mathbb{R}^{512}$. We use the official pre-trained models, and for details on the architectures and training procedures of these models we refer the reader to their original publications berthelot2017began ; karras2017progressive . Note that previous works, which use generative models for solving inverse problems, have considered much simpler datasets, such as MNIST lecun1998gradient or a small version of CelebA (downscaled to size $64 \times 64$), which perhaps do not demonstrate how severe the effect of mode collapse is.

The test-time procedure is done as follows, and is almost the same for the two models. For CSGM we follow bora2017compressed and optimize (2) using the ADAM optimizer kingma2014adam with a learning rate of 0.1. We use 1600 iterations for BEGAN and 1800 iterations for PGGAN. The final $z$, i.e. $\hat{z}$, is chosen to be the one with the minimal objective value along the iterations, and the CSGM recovery is $G(\hat{z})$. Performing a post-processing BP step (4) gives us also a reconstruction that we denote by CSGM-BP.
In the reconstruction based on image-adaptive GANs, which we denote by IAGAN, we initialize $z$ with $\hat{z}$, and then optimize (6) jointly for $z$ and $\theta$ (the generator parameters), using ADAM with small learning rates for $z$ and $\theta$, tuned separately for each model and task. We run the algorithm for 200 iterations in both compressed sensing and super-resolution when using BEGAN, whereas for PGGAN we run 300 and 150 iterations in compressed sensing and super-resolution, respectively. The final minimizers $\hat{z}_{IA}$ and $\hat{\theta}_{IA}$ are chosen according to the minimal objective value, and the IAGAN result is obtained by $\hat{x}_{IA} = G_{\hat{\theta}_{IA}}(\hat{z}_{IA})$. Another recovery, which uses both strategies from Section 3, is obtained by performing a post-processing BP step (4) on $\hat{x}_{IA}$. We denote this reconstruction by IAGAN-BP.
4.1 Compressed sensing
The first experiment is done for compressed sensing using a Gaussian measurement matrix with i.i.d. entries drawn from $\mathcal{N}(0, 1/m)$, similar to the experiments in bora2017compressed . In this case, there is no efficient way to apply the operators $A$ and $A^T$. Therefore, we consider only BEGAN, whose generated $128 \times 128$ images are much smaller than those generated by PGGAN.
Figure 1 shows the reconstruction mean squared error (MSE) of the different methods as we change the number of measurements $m$ (i.e. we change the compression ratio $m/n$). The results are averaged over 20 images from the CelebA dataset. It is clearly seen that IAGAN outperforms CSGM for all values of $m$. Note that due to the limited representation capabilities of BEGAN (or, equivalently, its mode collapse), CSGM performance reaches a plateau at a quite small value of $m$, contrary to the IAGAN error, which continues to decrease. The back-projection strategy is shown to be very effective, as it rescues CSGM-BP from the plateau of CSGM. The fact that IAGAN still has a small error when the compression ratio is almost 1 follows from our small learning rates and early stopping, which have been found necessary for small values of $m$, where the null space of $A$ is very large and it is important to avoid overriding the offline semantic information. However, this small error is often barely visible, as demonstrated by the visual results in Figure 2, and it further decreases with the BP step of IAGAN-BP.
In order to examine our proposed strategies for the larger model PGGAN as well, we turn to a different measurement matrix $A$ that can be applied efficiently: the subsampled Fourier transform. This acquisition model is also more common in practice, e.g. in compressed sensing MRI lustig2007sparse . Furthermore, this choice of $A$ simplifies the BP step, because $A^\dagger$ is simply the Hermitian transpose of $A$ (this property follows from the fact that the subsampled Fourier transform is a tight frame kovavcevic2008introduction ). We check compression scenarios with ratios 0.3 and 0.5. The PSNR results (averaged over 100 images from each dataset) are given in Table 1, and several visual examples are shown in Figures 3 and 4. In Figure 3 we also present the binary masks used for 30% and 50% Fourier domain sampling of images in CelebA. The binary masks that have been used for CelebA-HQ have a similar form.

The unsatisfactory results obtained by CSGM clearly demonstrate the limited capabilities of both BEGAN and PGGAN for reconstruction, despite the fact that both of them can generate very nice samples berthelot2017began ; karras2017progressive . In all the scenarios our BP and IA strategies improve the results, with the latter appearing more effective. Yet, recall that the BP step improves the reconstruction only in the row space of $A$. Therefore, while applying it without taking care of the reconstruction error in the null space of $A$ does improve the PSNR, it also leads to visual artifacts (see the results of CSGM-BP). Performing BP on the recovery of IAGAN to obtain IAGAN-BP appears to be the best option, as it enjoys the IA strategy that improves the results also in the null space of $A$.
BEGAN  naive IFFT  CSGM  CSGM-BP  IAGAN  IAGAN-BP
CS ratio 0.3  19.52  20.15  22.08  26.15  26.45
CS ratio 0.5  21.10  20.30  23.24  28.36  29.00

PGGAN  naive IFFT  CSGM  CSGM-BP  IAGAN  IAGAN-BP
CS ratio 0.3  19.92  20.68  21.91  24.53  25.02
CS ratio 0.5  20.88  21.24  23.31  26.26  27.38

Table 1: PSNR [dB] for compressed sensing with subsampled Fourier measurements.
Compressed sensing with 30% (top) and 50% (bottom) subsampled Fourier measurements using BEGAN. From left to right: original image, binary mask for Fourier domain sampling, naive reconstruction (zero padding and IFFT), CSGM, CSGM-BP, IAGAN, and IAGAN-BP.
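The tight-frame property behind the simplified BP step can be verified directly. The sketch below (a 1-D toy signal with hypothetical sizes) implements the subsampled unitary DFT and its Hermitian transpose, and performs the BP step (4) with no matrix inversion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # toy 1-D signal length (hypothetical)
keep = np.sort(rng.choice(n, size=n // 2, replace=False))  # 50% frequency mask

def A(x):
    """Subsampled unitary DFT: keep only the selected frequencies."""
    return np.fft.fft(x, norm="ortho")[keep]

def A_H(v):
    """Hermitian transpose of A: zero-fill the missing frequencies and apply
    the inverse unitary DFT. Since A is a tight frame (A A^H = I), this is
    exactly the pseudo-inverse A^+."""
    full = np.zeros(n, dtype=complex)
    full[keep] = v
    return np.fft.ifft(full, norm="ortho")

x_hat = rng.standard_normal(n)           # e.g. a generator-based estimate
y = A(rng.standard_normal(n))            # measurements of the true signal

x_bp = x_hat + A_H(y - A(x_hat))         # BP step (4): now A(x_bp) == y
```

For real-valued images one would use a conjugate-symmetric sampling mask so that the back-projected result is real; the toy above keeps the complex output for simplicity.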
4.2 Superresolution
We turn to examine the super-resolution task, for which $A$ is a composite operator of blurring with a bicubic anti-aliasing kernel followed by downsampling. For BEGAN we use a super-resolution scale factor of 4, and for PGGAN we use scale factors of 8 and 16. Similarly to the previous compressed sensing experiment, the BP step can again be computed efficiently, because $A^T$ can be implemented by bicubic upsampling. The PSNR results (averaged over 100 images from each dataset) of the different methods are given in Table 2, and several visual examples are shown in Figures 5 and 6.
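The following numpy sketch illustrates the BP step for super-resolution, with a box filter standing in for the bicubic anti-aliasing kernel (for the box filter, $A^\dagger = s\,A^T$ holds exactly; the toy sizes are hypothetical):

```python
import numpy as np

s = 4  # super-resolution scale factor

def downsample(x):
    """Toy A: average each block of s samples (a box anti-aliasing filter
    followed by s-fold decimation; a stand-in for the bicubic kernel)."""
    return x.reshape(-1, s).mean(axis=1)

def upsample(y):
    """Pseudo-inverse of the box downsampler: A^+ = s * A^T, i.e. nearest-
    neighbor replication (bicubic upsampling plays this role in the paper)."""
    return np.repeat(y, s)

def bp_step(x_hat, y):
    """BP step (4): keep x_hat in the null space of A, but force exact
    consistency with the low-resolution observations y."""
    return x_hat + upsample(y - downsample(x_hat))

rng = np.random.default_rng(0)
y = downsample(rng.standard_normal(64))   # low-resolution observations
x_hat = rng.standard_normal(64)           # e.g. a generator-based estimate
x_bp = bp_step(x_hat, y)                  # now downsample(x_bp) equals y
```

After the step, downsampling the result reproduces the observations exactly, while the high-frequency content of the estimate is preserved.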
Once again, the results of the plain CSGM are not satisfying, due to the limited representation capabilities of BEGAN and PGGAN. This time the simple BP post-processing is very effective in reducing the representation error. The results of CSGM-BP have fewer artifacts than in the compressed sensing experiments, and they are often even better than IAGAN. Yet, IAGAN-BP, which combines the two strategies, obtains the best perceptual quality in all experiments and, except for the experiment with scale factor 16, also the best PSNR. It may seem somewhat surprising that the simple bicubic upsampling has accuracy (i.e. PSNR) competitive with our more sophisticated methods IAGAN and IAGAN-BP. However, recall that this interpolation technique uses the observed information as is, and does not hallucinate any details that may improve its perceptual quality (indeed, the bicubic recoveries in Figures 5 and 6 are very blurry). In contrast, CSGM and IAGAN build on a generative prior that encourages good perceptual quality (e.g. during adversarial training, the generator tries to fool the discriminator with natural looking images and does not have any distortion considerations). Therefore, as discussed in blau2018perception , we pay some price for the inherent tradeoff between distortion and perception.

BEGAN  Bicubic  CSGM  CSGM-BP  IAGAN  IAGAN-BP
SR x4  26.20  20.40  26.22  25.75  26.33

PGGAN  Bicubic  CSGM  CSGM-BP  IAGAN  IAGAN-BP
SR x8  28.91  21.80  28.41  27.14  29.00
SR x16  26.82  21.56  26.07  25.32  26.10

Table 2: PSNR [dB] for super-resolution.
The next experiment is an extreme demonstration of mode collapse. In this scenario we use the BEGAN model to recover slightly misaligned images: the images are vertically translated by a few pixels. The results are shown in Figure 7. CSGM is highly susceptible to the poor representation capabilities of BEGAN in this case, while our IAGAN strategy is quite robust.
The final experiment considers super-resolution of noisy images. The goal is to demonstrate that even though the BP strategy is justified in Section 3.1 for the noiseless case, it is still effective for moderate noise levels. However, since $A^\dagger$ may amplify the noise, a final denoising step should be performed. We note that for very high noise levels, a simple and perhaps better alternative is to denoise the input low-resolution image (i.e. as a pre-processing) instead of the output of the BP stage. Yet, doing so for moderate noise levels will cost in the loss of fine details that can be exploited by the super-resolution method. The experiment is performed with the BEGAN model, and $e$ is white Gaussian noise with a standard deviation of 0.05 (i.e. 12.75/255). We use the CBM3D denoiser dabov2007color as a final phase for CSGM-BP and IAGAN-BP. Visual results are demonstrated in Figure 8. It is clear that the BP procedure, complemented by denoising, significantly improves the performance of CSGM. For our IAGAN approach, the improvement is more moderate, since the IAGAN reconstruction is already pleasant. Both methods yield sharper recoveries than the plain bicubic upsampling.

5 Conclusion
In this work we considered the usage of generative models for solving imaging inverse problems. The main deficiency in such applications is the limited representation capabilities of the generators, which unfortunately do not capture the full distribution of complex classes of images. We suggested two strategies for mitigating this problem. The first is a final back-projection (BP) step that eliminates the component of the generator’s representation error that resides in the row space of the measurement matrix. The second is an image-adaptive approach, termed IAGAN, that improves the generator’s capability to represent the specific test image, and can also improve the restoration in the null space of the measurement matrix. The two strategies can also be used together. Experiments on compressed sensing and super-resolution tasks demonstrated that our strategies yield significantly improved reconstructions, which are both more accurate and more perceptually pleasing than the alternatives, especially when the image-adaptive approach is used.
Acknowledgments
This work was supported by the European Research Council (ERC StG 757497 PI Giryes).
References
 (1) I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. 2016.
 (2) D. P. Kingma and M. Welling, “Autoencoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.
 (3) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
 (4) A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
 (5) A. Van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al., “Conditional image generation with pixelcnn decoders,” in Advances in neural information processing systems, pp. 4790–4798, 2016.
 (6) P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam, “Optimizing the latent space of generative networks,” in International Conference on Machine Learning, pp. 599–608, 2018.
 (7) M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in International Conference on Machine Learning, pp. 214–223, 2017.
 (8) D. Berthelot, T. Schumm, and L. Metz, “BEGAN: Boundary equilibrium generative adversarial networks,” arXiv preprint arXiv:1703.10717, 2017.
 (9) T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of GANs for improved quality, stability, and variation,” arXiv preprint arXiv:1710.10196, 2017.
 (10) E. Richardson and Y. Weiss, “On GANs and GMMs,” in Advances in Neural Information Processing Systems, pp. 5852–5863, 2018.
 (11) M. Bertero and P. Boccacci, Introduction to inverse problems in imaging. CRC press, 1998.
 (12) C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision, pp. 184–199, Springer, 2014.
 (13) K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep CNN denoiser prior for image restoration,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 3929–3938, 2017.
 (14) T. Meinhardt, M. Moller, C. Hazirbas, and D. Cremers, “Learning proximal operators: Using denoising networks for regularizing inverse imaging problems,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1781–1790, 2017.
 (15) J. Rick Chang, C.-L. Li, B. Poczos, B. Vijaya Kumar, and A. C. Sankaranarayanan, “One network to solve them all – solving linear inverse problems using deep projection models,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 5888–5897, 2017.
 (16) S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plugandplay priors for model based reconstruction,” in Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE, pp. 945–948, IEEE, 2013.
 (17) T. Tirer and R. Giryes, “Image restoration by iterative denoising and backward projections,” IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1220–1234, 2019.
 (18) A. Shocher, N. Cohen, and M. Irani, “"Zero-shot" super-resolution using deep internal learning,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 (19) D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in CVPR, 2018.
 (20) A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in International Conference on Machine Learning, pp. 537–546, 2017.
 (21) R. A. Yeh, C. Chen, T. Yian Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, “Semantic image inpainting with deep generative models,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5485–5493, 2017.
 (22) P. Hand, O. Leong, and V. Voroninski, “Phase retrieval under a generative prior,” in Advances in Neural Information Processing Systems, pp. 9154–9164, 2018.
 (23) D. Donoho, “Compressed sensing,” IEEE Transactions on information theory, vol. 52, no. 4, pp. 1289–1306, 2006.
 (24) E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on information theory, vol. 52, no. 2, pp. 489–509, 2006.
 (25) R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
 (26) A. Bora, E. Price, and A. G. Dimakis, “AmbientGAN: Generative models from lossy measurements,” in International Conference on Learning Representations (ICLR), 2018.
 (27) V. Shah and C. Hegde, “Solving linear inverse problems using gan priors: An algorithm with provable guarantees,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4609–4613, IEEE, 2018.
 (28) T. Tirer and R. Giryes, “Super-resolution via image-adapted denoising CNNs: Incorporating external and internal learning,” IEEE Signal Processing Letters, 2019.
 (29) D. Glasner, S. Bagon, and M. Irani, “Superresolution from a single image,” in Computer Vision, 2009 IEEE 12th International Conference on, pp. 349–356, IEEE, 2009.
 (30) J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5197–5206, 2015.
 (31) D. Hayner and W. Jenkins, “The missing cone problem in computer tomography,” Advances in Computer Vision and Image Processing, vol. 1, pp. 83–144, 1984.
 (32) M. Irani and S. Peleg, “Improving resolution by image registration,” CVGIP: Graphical models and image processing, vol. 53, no. 3, pp. 231–239, 1991.
 (33) J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
 (34) M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, vol. 49. 1952.
 (35) Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of the IEEE international conference on computer vision, pp. 3730–3738, 2015.
 (36) Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al., “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
 (37) D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
 (38) M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid mr imaging,” Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
 (39) J. Kovacevic, A. Chebira, et al., “An introduction to frames,” Foundations and Trends® in Signal Processing, vol. 2, no. 1, pp. 1–94, 2008.
 (40) Y. Blau and T. Michaeli, “The perceptiondistortion tradeoff,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6228–6237, 2018.
 (41) K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space,” in 2007 IEEE International Conference on Image Processing, vol. 1, pp. I–313, IEEE, 2007.