1 Introduction
Over the past decades, inverse problems have been widely studied in image and signal processing and computer vision, e.g., denoising
[15], deconvolution [2], super-resolution [48] and compressive sensing [18]. An inverse problem arises from a forward model that maps an unknown signal, i.e., the groundtruth, to acquired/observed information about it, which we call the data or measurements. The forward problem generally relies on a physical theory that reveals the link between the groundtruth and the measurements. Solving an inverse problem amounts to learning the inverse mapping from the measurements back to the groundtruth; specifically, it recovers a signal from one or a small number of degraded or noisy measurements, which is usually ill-posed [42]. Mathematically, the goal is to reconstruct a high-dimensional groundtruth $x \in \mathbb{R}^n$ from a low-dimensional measurement $y \in \mathbb{R}^m$, obtained from $x$ through a forward model such that $y = Ax$. This forward model ties the observed data $y$ to the unknown signal $x$ through the operator $A$. For example, in compressive sensing, $y$ is a compressive measurement with randomly sampled regions and $A$ is the measurement matrix, e.g., a random Gaussian matrix; in super-resolution, $y$ is a low-resolution image and the operator $A$ downsamples high-resolution images. The main difficulty of these underdetermined systems comes from the operator $A$, which has a nontrivial null space leading to an infinite number of feasible solutions. Though most inverse problems are formulated directly as optimization problems associated with the forward model [41], a number of learning-based algorithms have been proposed to solve inverse problems by learning a mapping from the measurement domain of $y$ to the signal space of $x$, with the help of large datasets and neural nets [32, 14].
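As a concrete sketch (ours, not the paper's code) of the forward model just described, the compressive-sensing case $y = Ax$ with a random Gaussian measurement matrix can be simulated as follows; the dimensions and seed are arbitrary choices:

```python
import numpy as np

# Sketch of the generic linear forward model y = A x, instantiated for
# compressive sensing with a random Gaussian measurement matrix.
rng = np.random.default_rng(0)

n, m = 64, 16                                  # groundtruth dim n, measurement dim m (m < n)
x = rng.standard_normal(n)                     # unknown groundtruth signal
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix

y = A @ x                                      # observed measurement, lower-dimensional than x

# The system is underdetermined: A has a nontrivial null space, so
# infinitely many signals are mapped to the same measurement y.
null_dim = n - np.linalg.matrix_rank(A)
print(y.shape, null_dim)  # (16,) 48
```

The nonzero null-space dimension is exactly why recovering $x$ from $y$ needs a prior or a learned mapping.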
¹These algorithms refer to directly learning optimal mappings from the observed data to their high-resolution correspondents, and are different from learning, from training datasets, specific priors to be incorporated in regularized iterative algorithms [15, 47].

More recently, deep learning techniques have arisen as a promising framework and gained great popularity by providing state-of-the-art performance on applications including pattern analysis (unsupervised), classification (supervised), computer vision, and image processing [13]. Exploiting deep neural networks to solve inverse problems has been explored recently [14, 40, 1, 24]. In these works, inverse problems are viewed as a pattern mapping problem, and most existing learning-based methods propose to learn an end-to-end mapping from $y$ to $x$ [37, 40]. By leveraging the powerful approximation ability of deep neural networks, these deep-learning-based data-driven methods have achieved state-of-the-art performance in many challenging inverse problems such as super-resolution [5, 14, 40], image reconstruction [35], and automatic colorization [27]. More specifically, massive datasets currently enable learning end-to-end mappings from the measurement domain to the target image/signal/data domain, which helps tackle these challenging problems without solving the inverse problem by inference. A strong motivation to use neural networks stems from the universal approximation theorem [11], which states that a feedforward network with a single hidden layer containing a finite number of neurons can approximate any continuous function on compact subsets of $\mathbb{R}^n$, under mild assumptions on the activation function. In these recent works [5, 40, 27, 35], an end-to-end mapping from measurements to groundtruth was learned from the training data and then applied to the testing data. Thus, the complicated inference scheme needed in a conventional inverse problem solver is replaced by feeding a new measurement through the pre-trained network, which is much more efficient. However, despite their superior performance, these specifically trained solvers are designed for specific inverse problems and usually cannot be reused to solve other problems without retraining the mapping function, even when the problems are similar. To improve the scope and generality of deep neural network models, a splitting strategy was recently proposed in [7] to decompose an inverse problem into two optimization problems, where one subproblem, related to regularization, can be solved efficiently using trained deep neural networks, leading to an alternating direction method of multipliers (ADMM) framework [4, 31]. This method involved training a deep convolutional autoencoder network for low-level image modeling, which explicitly imposed a regularization spanning the subspace that the groundtruth images live in. For the subproblem that required inverting a big matrix, a conventional gradient descent algorithm was used, leading to an alternating update that iterates between feedforward propagation through a network and iterative gradient descent; thus, an inner loop for gradient descent is still necessary in this framework. A similar approach that learns to approximate ISTA (the Iterative Shrinkage-Thresholding Algorithm) with neural networks was illustrated in [21]. A more flexible splitting strategy, training two reusable neural networks for the two subproblems within the ADMM framework and leading to an inner-loop-free update rule, was proposed in [17]. The two pre-trained deep neural networks are flexible and reusable in the sense that the network trained for the proximity operator can be used as a plug-and-play prior to regularize other inverse problems sharing similar statistical characteristics, and the network trained for the inversion operator can be used to solve the same inverse problem on other datasets.

To leverage the advantages of both the end-to-end mapping and the splitting strategy, in this work we propose to learn an end-to-end mapping consisting of two networks, with one handling the inversion associated with the forward physical model and the other handling the denoising. A degraded signal, i.e., an observed data point $y$, is fed into the inversion network to output an intermediate update, which is then fed into the denoising network to refine the result. The intermediate update bridges the information gap between the input, i.e., the degraded signal, and the output, i.e., the groundtruth. The training of the proposed InverseNet is annealing in the sense that the input of the denoising network, e.g., the denoising autoencoder, referred to as the DAE (an example explained later), or equivalently the output of the inversion network, e.g., the U-Nets (an example explained later), progressively becomes better (closer to the groundtruth), followed by a refined (better) result output by the denoising network, as displayed in Fig. 1. This training leverages both the data term, through the U-Nets, and a generative prior, through the DAEs. More specifically (and heuristically), the inversion network tries to restore the information lost in the forward model, and the denoising network tries to refine the result with learned details, inspired by the two update steps of ADMM.


Contributions: We propose to learn an end-to-end mapping from the observed data to the groundtruth. Inspired by the splitting strategy widely used to solve inverse problems as discussed above, instead of learning one network to solve them all, we propose to decompose the end-to-end mapping into two parts, with one handling the model inversion and the other handling the denoising. There are several benefits to such a structure: i) any existing end-to-end learning algorithm can be used as the inversion part and combined with any existing denoising network; ii) the two networks can be trained jointly to learn the mapping to its best extent, avoiding any two-step suboptimal solution; iii) in the testing phase, only one feedforward propagation is necessary to solve the inverse problem, without any further iterations.
2 Inverse Problems
As explained above, the forward model connects the low-dimensional measurement $y \in \mathbb{R}^m$ to the high-dimensional groundtruth $x \in \mathbb{R}^n$ through a linear operator $A$ as $y = Ax$. The fact that $m < n$ makes the number of parameters to estimate larger than the number of available data points. Since $A$ is an underdetermined measurement matrix, finding a solution for a new observation $y$ is an ill-posed problem. For instance, the matrix $A$ is a strided Gaussian convolution, and is not invertible, in the super-resolution tasks of [37, 40]. To address this issue, computational solutions, including approximate inference based on Markov chain Monte Carlo (MCMC) and optimization based on variable splitting under the ADMM framework, have been proposed and applied with different kinds of priors, e.g., the empirical Gaussian prior [45, 48], the total variation prior [38], etc. The ADMM framework is popular due to its low computational complexity and its recent success in solving large-scale optimization problems. Mathematically, the optimization problem is formulated as

$$\min_{x,\, z} \ \tfrac{1}{2}\|y - Az\|_2^2 + \lambda R(x) \quad \text{s.t.} \quad z = x, \tag{1}$$
where the introduced auxiliary variable $z$ is forced to be equal to $x$, and $R(\cdot)$ models the structure promoted by the prior/regularization. To leverage the benefits of 'big data', the regularization can be imposed in an empirical Bayesian way, by designing an implicit data-dependent prior on $x$ for amortized inference [40]. The augmented Lagrangian for Eq. (1), obtained by replacing the hard constraint $z = x$ with multiplier and penalty terms, is

$$\mathcal{L}_\beta(x, z, u) = \tfrac{1}{2}\|y - Az\|_2^2 + \lambda R(x) + \langle u,\, x - z \rangle + \tfrac{\beta}{2}\|x - z\|_2^2, \tag{2}$$
where $u$ is the Lagrange multiplier and $\beta$ is the penalty parameter. The conventional augmented Lagrange multiplier method, which minimizes $\mathcal{L}_\beta$ w.r.t. $x$ and $z$ simultaneously, is difficult to carry out and does not exploit the fact that the objective function is separable. To tackle this problem, ADMM decomposes the minimization into two subproblems, namely minimizations w.r.t. $x$ and $z$, respectively. More specifically, the updates are as follows:
$$z^{k+1} = \arg\min_z \ \tfrac{1}{2}\|y - Az\|_2^2 + \tfrac{\beta}{2}\|x^k - z + u^k/\beta\|_2^2 = (A^\top A + \beta I)^{-1}\left(A^\top y + \beta x^k + u^k\right), \tag{3}$$

$$x^{k+1} = \arg\min_x \ \lambda R(x) + \tfrac{\beta}{2}\|x - z^{k+1} + u^k/\beta\|_2^2 = \operatorname{prox}_{(\lambda/\beta) R}\!\left(z^{k+1} - u^k/\beta\right), \tag{4}$$

$$u^{k+1} = u^k + \beta\,(x^{k+1} - z^{k+1}). \tag{5}$$
For priors of special forms, such as $\|x\|_1$, a closed-form solution to Eq. (4), i.e., a soft-thresholding solution, is easily obtained. On the contrary, for more sophisticated regularizations, e.g., a patch-based prior [15, 47], solving Eq. (4) is nontrivial and may require iterative methods. To solve Eq. (3), a matrix inversion is inevitable, for which the gradient descent (GD) method is usually applied to update $z$ [7]. Thus, solving Eqs. (4) and (3) is in general cumbersome, as inner loops are required to solve these two sub-minimization problems. More specifically, most of the computational complexity comes from the proximity operator, due to its intractability, and from the matrix inversion, which is not easily diagonalized.
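As a concrete illustration (our own sketch, not the paper's code), the updates (3)-(5) can be run directly for an $\ell_1$ prior, where Eq. (4) reduces to soft thresholding and Eq. (3) requires the matrix inversion discussed above; all dimensions and parameter values below are arbitrary choices:

```python
import numpy as np

# ADMM sketch for  min_{x,z} 0.5*||y - A z||^2 + lam*||x||_1  s.t.  x = z.
rng = np.random.default_rng(1)
m, n = 30, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # sparse groundtruth
y = A @ x_true

lam, beta = 0.05, 1.0
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

x = np.zeros(n)
u = np.zeros(n)
G = np.linalg.inv(A.T @ A + beta * np.eye(n))    # the matrix inversion needed by Eq. (3)
for _ in range(200):
    z = G @ (A.T @ y + beta * x + u)             # Eq. (3): data/inversion step
    x = soft(z - u / beta, lam / beta)           # Eq. (4): proximity/denoising step
    u = u + beta * (x - z)                       # Eq. (5): dual update
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the proposed InverseNet, these two inner computations, the explicit inversion of Eq. (3) and the proximity operator of Eq. (4), are replaced by the trained inversion and denoising networks, which removes the inner loops.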
Motivation to introduce ADMM: The proposed network structure is inspired by the two iterative steps of the ADMM updates for solving an inverse problem. More specifically, the inversion network imitates the process of solving Eq. (3), and the denoising network imitates an algorithm for solving Eq. (4). In this work, the inversion network exploits the U-Net structure [33] and the denoising network uses the DAE [43], both of which are elaborated in the following section.
3 Approach
The proposed method decomposes the end-to-end mapping from data to groundtruth into two mappings, one corresponding to inversion of the physical model and the other corresponding to regularized denoising. From a Bayesian perspective, the first mapping handles the likelihood associated with the data term and the second handles the prior term. As shown in Fig. 2, the observed data $y$ is fed into the first network to produce an approximation $\hat{x}$ of the groundtruth. Note that $\hat{x}$ is of the same size as $x$ and can be regarded as a noisy version of it. This noisy estimate is then fed into the second network and refined into the clean version $\tilde{x}$. This two-network structure also echoes the refining technique recently popularized in [36, 8]. More details about these two networks are elaborated in the following sections.
3.1 Inversion network: mapping $y$ to $\hat{x}$
The inversion network tries to learn the inverse of the degradation that maps the groundtruth $x$ to the observed data $y$. Note that while it may be straightforward to write down the closed-form solution for subproblem (3) w.r.t. $z$, explicitly computing this solution is nontrivial due to the infeasibility of inverting a big matrix. Similar to the strategy in [17, 40], we design a deep convolutional neural network to learn the inversion. More specifically, we use a recently developed network structure referred to as the U-Net [33], which was originally developed for medical image segmentation. The U-Net architecture allows low-level information to shortcut across different layers via filter concatenation, differing from the element-wise addition in ResNet [22]. Extensive experiments have demonstrated the effectiveness of U-Nets in learning complex mappings between high-dimensional data, such as images [9, 24]. In general, U-Nets require the input and output to have the same size, at least in height and width. However, in many inverse problems, for example super-resolution or compressive sensing, only a low-dimensional signal is available as the input. In such cases, we initially apply bicubic interpolation to obtain an input of the size required by the U-Net. As a summary, the architecture of our U-Net is shown in the left dashed box in Fig. 2.

3.2 Denoising network: mapping $\hat{x}$ to $\tilde{x}$
The denoising network plays the role of learning, from the dataset, a signal prior that can be applied to various inverse problems. In optimization algorithms for solving inverse problems, signal priors are usually cast in the form of a denoiser, more specifically a proximity operator [10]. From a geometrical perspective, the proximity operator projects a noisy data point onto the feasible set spanned by the signal prior. Thus, any existing denoising network can be exploited in the proposed framework. In our work, we propose a specially designed denoising autoencoder with the pixel-shuffling (or sub-pixel) trick [37]. Similar to the inversion net, the pixel-shuffling convolutional network is designed to deal with inputs and outputs of the same size by periodically reordering the pixels in each channel, mapping a high-resolution image to the same scale as the low-dimensional image. The resulting denoising autoencoder does not have a bottleneck-shaped structure, as each layer shares the same filter size. This allows us to transform $\hat{x}$ into a tensor that has the same size as $y$, and to concatenate this tensor with $y$ if desired for the regularization of amortized inference. The detailed architecture of the proposed DAE is shown in Fig. 2.

3.3 Practical Adversarial Training
From either the perspective of the model structure or of the ADMM update rules, $\hat{x}$ and $\tilde{x}$ are both approximations of the groundtruth $x$. Inspired by the idea of autoencoding beyond pixels using a learned similarity metric [26], augmenting the standard negative log-likelihood loss (equivalent to a reconstruction loss) with an additional generative adversarial loss can practically improve the quality of generated samples. In our work, we also force the outputs of the two designed networks to share one discriminator $D$, used as a binary classifier, inducing the standard minimax loss [20] in our model, i.e.,

$$\mathcal{L}_{\text{dis}} = -\mathbb{E}\left[\log D(x)\right] - \mathbb{E}\left[\log\left(1 - D(\hat{x})\right)\right] - \mathbb{E}\left[\log\left(1 - D(\tilde{x})\right)\right], \tag{6}$$

$$\mathcal{L}_{\text{gen}} = -\mathbb{E}\left[\log D(\,\cdot\,)\right], \tag{7}$$

where $\mathcal{L}_{\text{dis}}$ is the negative log-likelihood loss with respect to the classification problem, $\mathcal{L}_{\text{gen}}$ is the non-saturated generative loss, and '$\cdot$' in Eq. (7) should be substituted by $\hat{x}$ or $\tilde{x}$ to obtain the two different generative loss functions.
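The losses in Eqs. (6) and (7) can be sketched numerically as follows (our own illustration with made-up discriminator scores; `x_hat` and `x_tilde` denote the U-Net and DAE outputs):

```python
import numpy as np

# Sketch of the shared-discriminator loss (6) and the non-saturated
# generator loss (7); the logits below are arbitrary example values.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def dis_loss(d_real, d_fakes):
    # Eq. (6): negative log-likelihood of the real/fake classification,
    # summed over all fake batches fed to the single discriminator.
    loss = -np.mean(np.log(d_real))
    for d_f in d_fakes:
        loss += -np.mean(np.log(1.0 - d_f))
    return loss

def gen_loss(d_fake):
    # Eq. (7): non-saturated generative loss, -E[log D(fake)]
    return -np.mean(np.log(d_fake))

d_real    = sigmoid(np.array([2.0, 1.5, 3.0]))    # discriminator scores on real x
d_x_hat   = sigmoid(np.array([-1.0, -0.5, 0.2]))  # scores on the U-Net output
d_x_tilde = sigmoid(np.array([-0.2, 0.1, 0.4]))   # scores on the DAE output

L_dis = dis_loss(d_real, [d_x_hat, d_x_tilde])
L_gen = gen_loss(d_x_hat) + gen_loss(d_x_tilde)
print(L_dis > 0, L_gen > 0)  # True True
```

Note the non-saturated form: the generator maximizes $\log D(\cdot)$ rather than minimizing $\log(1 - D(\cdot))$, which keeps gradients useful when the discriminator easily rejects early fakes.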
Besides using a classifier as the discriminator, many works [34, 3] have argued that using a pre-trained regression network to match high-level features between real and fake images can be significantly better in practice than the discriminative loss, and is theoretically more robust from the viewpoint of distribution matching. Therefore, a regression network referred to as the comparator is shared by the outputs of the two proposed nets as well. Since transfer learning is well established in the image domain, e.g., style learning with feature matching [19] in VGG [39], we prefer to use the pre-trained VGG or AlexNet [25] as the comparator. In this case, to leverage both the adversarial training and the comparator, we do not modify the discriminator loss in Eq. (6), but add an extra feature-matching loss to Eq. (7):

$$\mathcal{L}_{\text{gen}}' = \mathcal{L}_{\text{gen}} + \mathbb{E}\left[\left\|C(x) - C(\,\cdot\,)\right\|_2^2\right], \tag{8}$$

where $C(\cdot)$ denotes the features extracted by the comparator.
Analogous to the traditional optimization of the generative adversarial loss, $\mathcal{L}_{\text{dis}}$ and $\mathcal{L}_{\text{gen}}'$ are iteratively minimized with respect to the parameters of the discriminator and of the U-Nets and DAEs, respectively.
3.4 Underlying Annealing Training
The proposed adversarial training leads to an underlying annealing training for the DAEs, whose input (the output of the U-Nets) is extremely noisy at the early training stage and then progressively approaches a cleaner version as training proceeds. This incremental tuning, natural in adversarial training, allows the training to imitate the inner loop of the ADMM updates, where the difference in Eq. (5) gradually becomes constant and negligible. This is the main reason why we call our model InverseNet. Unlike traditional DAEs, in which the input data is contaminated by predefined noise, e.g., Gaussian noise with fixed variance, the output of the U-Nets, i.e., the input of the DAEs, in the proposed network is contaminated by non-stationary and time-variant noise without an explicit analytic form. If we remove the DAE part from our model, the performance gradually worsens during training, which can be explained by the instability of adversarial training [3]. The existence of the DAEs makes the network able to purify the complicated noisy output of the inversion network, leading to much more stable and efficient training.

Generality: A critical point for learning-based methods is whether the method generalizes to other problems. More specifically, how does a method that is trained on a specific dataset perform when applied to another dataset? To what extent can we reuse the trained network without retraining? Compared with the reusable matrix-inversion neural networks learned with pure noise proposed in [40, 17], our inversion network is less flexible due to the joint training of the inversion and denoising networks. However, because the learning is based on the problem formulation, the well-trained neural networks, though not fully reusable, have better generalization ability than other end-to-end learning networks. For example, the networks well trained for image motion deblurring on the PASCAL VOC dataset [16] can be directly applied to the same task on ImageNet [12] with no need for fine-tuning, provided the degradation operator remains the same, since these two datasets both consist of natural images. However, if the dataset undergoes a significant domain change, such as to MRI, all network-reusing algorithms, including our approach, will fail and require retraining.

4 Experiments and Results
In this section, we provide experimental results and analysis of the proposed InverseNet for solving inverse problems. The datasets used in our experiments include Caltech-UCSD Birds-200-2011 (CUB) [44], Large-scale CelebFaces Attributes (CelebA) [29], PASCAL VOC, and ImageNet [12], and the learning tasks considered include motion deblurring, super-resolution, and joint super-resolution and colorization. Note that the degradation operator $A$ is unknown during training and is only used to generate the measurement/groundtruth paired data. Our code will be available on the repository https://github.com/.
4.1 Motion Deblurring
Table 1: PSNR (top) and SSIM (bottom) for motion deblurring on the held-out test sets.

PSNR  CUB  CelebA  ImageNet
Wiener filter (baseline)  22.42  20.92  20.44
Robust motion deblur [46]  25.03  25.06  23.37
Neural motion deblur [6]  25.73  25.76  24.74
Pix2Pix [23] w. comparator  23.67  23.59  22.05
Ours  28.39  34.02  28.87

SSIM  CUB  CelebA  ImageNet
Wiener filter (baseline)  0.6572  0.7020  0.6357
Robust motion deblur [46]  0.7459  0.8052  0.7283
Neural motion deblur [6]  0.8853  0.9649  0.9074
Pix2Pix [23] w. comparator  0.7554  0.8553  0.7335
Ours  0.9421  0.9738  0.9446






The goal of motion deblurring is to recover a sharp, non-blurred image from a single motion-blurred image, which has been a fundamental problem in computational imaging [46]. Motion deblurring is challenging because both the blurring kernel and the latent non-blurred image are unknown in most cases, leading to an ill-posed inverse problem. Note that the size of an image remains the same after motion deblurring; thus, any resizing of the input images, e.g., bicubic interpolation or pixel shuffling, for the initialization of our approach is unnecessary and can simply be removed from the pipeline. We trained our model on all datasets with fixed-size images and no other preprocessing.
Qualitative Results: The proposed InverseNet is compared visually in Fig. 3 with the baseline optimization method of Wiener filtering, the robust non-blind motion deblurring of [46], and another deep learning method, neural motion deblurring [6]. Note that the algorithm of [46] for single-image motion deblurring requires the blurring kernel to be provided, while the other methods, including ours, are all blind recoveries. As shown in Fig. 3, the traditional single-image processing algorithms, including Wiener filtering and robust motion deblurring, suffered heavily from ringing artifacts, while neural motion deblurring and the proposed InverseNet gave much better, high-quality recoveries. The ringing artifacts can be explained by the fact that the convolutional kernel chosen to blur the images has a larger variance and is thus more challenging.
Quantitative Results: The PSNR and SSIM were calculated on the same testing datasets for all compared algorithms and are summarized in Table 1. Clearly, the proposed InverseNet outperformed the other methods by a significant margin, consistent with the qualitative results in Fig. 3.
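For reference, the PSNR reported above can be computed as follows (a minimal sketch of the standard definition; the evaluation code actually used for the paper may differ in details such as the peak value or color handling):

```python
import numpy as np

# PSNR for 8-bit images: 10 * log10(peak^2 / MSE), higher is better.
def psnr(ref, est, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3))                 # synthetic "groundtruth"
noisy = np.clip(img + rng.normal(0, 8, img.shape), 0, 255)  # sigma ~= 8 noise
print(round(psnr(img, noisy), 1))  # roughly 30 dB for sigma ~= 8
```

SSIM is a structural metric with a more involved windowed formula and is best taken from a standard implementation rather than re-derived here.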
Transfer Training: We also checked to what extent the well-trained model can be adapted to other datasets. Specifically, we trained the InverseNet on PASCAL VOC and tested it on CUB, CelebA, and ImageNet, with the results shown in Fig. 5 and the quantitative results in Table 2. As we argued before, the similar image domains of ImageNet and VOC mean that the PSNR does not decrease much, i.e., from 28.87 to 28.52, compared with using the InverseNet trained on ImageNet itself. This demonstrates the generality of the learned InverseNet.
Table 2: PSNR and SSIM on CUB, CelebA, and ImageNet for the InverseNet trained on PASCAL VOC.

Metric  CUB  CelebA  ImageNet
PSNR  25.65  27.90  28.52
SSIM  0.9105  0.9373  0.9303
Annealing Training: We also validated the importance of the annealing training on these datasets (more extensive results are available in the supplementary materials). If we train the model with only the U-Nets, a discriminator (this recovers the pix2pix model [23]), and a comparator, training is usually very unstable and we have to cherry-pick results at every iteration. However, with the help of the refining DAE network, the performance is boosted significantly, as reflected by both the quantitative results summarized in Table 1 and the qualitative results displayed in Fig. 1. Additionally, we also visualized, in Fig. 4, the residual images between the output of the U-Nets and either the final output of the DAEs or the groundtruth. The pixel-level noise approximately followed a Gaussian distribution at each iteration, but with varying mean and variance, i.e., a non-stationary and time-variant noise. We observed that the mean and variance gradually became smaller and stabilized at the final training stage. This potentially enables the DAEs to suppress noise with variances of different scales added to the input, which differs from traditional DAEs and allows more robust training.
4.2 Super-Resolution
Super-resolution is another popular inverse problem in image processing; it aims at restoring the high frequencies that enrich details in an image, based on a set of prior examples of low-resolution (LR) and corresponding high-resolution (HR) images. The degradation operator can stem from quantization error, sensor limitations of the capturing camera, the presence of blurriness, or the use of downsampling operators that reduce the image resolution for storage purposes. It is well known that super-resolution is ill-posed, since for each LR image the space of corresponding HR images can be very large.
In this experiment, we first synthesized a challenging downsampling operator $A$, a channel-wise strided convolution that can produce ringing artifacts in the low-resolution images (see its detailed form in the supplementary materials), and trained the super-resolution model on two datasets, CUB and ImageNet. Our task is to super-resolve the low-resolution images back to their original size. In addition, we also compare with SRGAN [28] in the supplementary materials.
Ring Artifact Removal: The difficulty of super-resolution is highly affected by the shape and scale of the convolution kernel in the degradation model. Convolving an image with a wider, flatter kernel causes more aliasing of high-frequency information in the spectral domain, and thus leads to a more challenging problem; ringing artifacts typically result from this heavy spectral aliasing. On the contrary, a narrower, sharper kernel causes less information loss and is thus less challenging. In this task, we used a flat kernel, which can cause heavy ringing artifacts. Fig. 6 shows the super-resolution results on the held-out ImageNet testing set. In this case, bicubic interpolation failed to remove the ringing effects produced during downsampling, whereas our end-to-end trained InverseNet tackles this problem well by greatly suppressing them.
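A degradation of this kind can be sketched as a channel-wise strided convolution; the 4x4 flat (box) kernel and stride below are our own illustrative choices, not the exact operator from the paper's supplementary materials:

```python
import numpy as np

# Channel-wise strided-convolution degradation for super-resolution:
# blur each channel with a kernel, then subsample via the stride.
def degrade(hr, kernel, stride):
    kh, kw = kernel.shape
    H, W, C = hr.shape
    out_h, out_w = (H - kh) // stride + 1, (W - kw) // stride + 1
    lr = np.zeros((out_h, out_w, C))
    for c in range(C):
        for i in range(out_h):
            for j in range(out_w):
                patch = hr[i*stride:i*stride+kh, j*stride:j*stride+kw, c]
                lr[i, j, c] = np.sum(patch * kernel)
    return lr

k = np.full((4, 4), 1.0 / 16)   # flat box kernel: heavy spectral aliasing
hr = np.random.default_rng(0).random((64, 64, 3))
lr = degrade(hr, k, stride=4)   # 4x downsampling in each spatial dimension
print(lr.shape)  # (16, 16, 3)
```

A flat kernel like `k` is a near-ideal low-pass in neither sense: its sinc-shaped spectrum lets aliased high frequencies through, which is what produces the ringing discussed above.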
4.3 Joint Super-Resolution and Colorization
A more challenging task is to enhance both the spectral and spatial resolution of a single-band image. More specifically, the enhancement is a combination of super-resolution and colorization (hallucinating a plausible color version of a colorless image), which can be considered the composition of two degradation operators. Thus, our single-channel colorless low-resolution images were obtained by convolving with a blurring kernel and downsampling with a stride of 2 in both the horizontal and vertical directions. Note that this is different from, and more challenging than, conventional multi-band image fusion, e.g., fusing a multispectral image with a high-spatial-resolution panchromatic image (also referred to as pansharpening [30]), in the sense that only one low-spatial, low-spectral resolution image is available. Additionally, our inverse task also differs from traditional colorization in the CIE Lab color space. In this section, we tested the ability to perform joint super-resolution and colorization from one single colorless LR image on the CelebA and CUB datasets. CelebA mainly contains face images with little color variance, so it is an easier dataset than the CUB wild-bird images. The results are displayed in Fig. 7. Visually, the jointly super-resolved and colorized images are very similar to their groundtruth, with variations in some details. It is especially interesting to observe that the restored high-resolution images can look more natural than the groundtruth for some 'outlier' images. This results from the fact that the model was trained on massive image collections in which most images look normal. For example, the person in the third column has purple hair, which rarely appears in the dataset; feeding its degraded version, i.e., a blurred colorless image, to the trained InverseNet gives a face with the most probable, popular, and thus 'natural' brown hair, as shown in the second row. More results are available in the supplementary materials.
5 Conclusion
In this paper we proposed the InverseNet to solve inverse problems with an end-to-end mapping. To take advantage of both the efficiency (in the testing phase) of end-to-end learning and the flexibility brought by the splitting strategy, the mapping was decomposed into two neural networks, i.e., an inversion network and a denoising network. The former was designed to learn the inversion of the physical forward model associated with the data term, and the latter to learn a proximity operator, or a projection onto the groundtruth signal space, associated with the prior term. The two deep neural networks were trained jointly using prepared data pairs $(y, x)$, avoiding any two-step separate training. Experiments and analysis on various datasets demonstrated the efficiency and accuracy of the proposed method. In future work we hope to extend the proposed method to tackle the 'learn to learn' problem.
References
 [1] J. Adler and O. Öktem. Solving ill-posed inverse problems using iterative deep neural networks. arXiv preprint arXiv:1704.04058, 2017.
 [2] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo. Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process., 19(9):2345–2356, 2010.
 [3] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.

 [4] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
 [5] J. Bruna, P. Sprechmann, and Y. LeCun. Super-resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015.
 [6] A. Chakrabarti. A neural approach to blind motion deblurring. In European Conference on Computer Vision, pages 221–235. Springer, 2016.
 [7] J. Chang, C.-L. Li, B. Poczos, B. Kumar, and A. C. Sankaranarayanan. One network to solve them all—solving linear inverse problems using deep projection models. arXiv preprint arXiv:1703.09912, 2017.
 [8] Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. arXiv preprint arXiv:1707.09405, 2017.
 [9] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 424–432. Springer, 2016.
 [10] P. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In H. H. Bauschke, R. S. Burachik, P. L. Combettes, V. Elser, D. R. Luke, and H. Wolkowicz, editors, Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Optimization and Its Applications, pages 185–212. Springer New York, 2011.
 [11] B. C. Csáji. Approximation with artificial neural networks. Faculty of Sciences, Eötvös Loránd University, Hungary, 24:48, 2001.

 [12] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255. IEEE, 2009.
 [13] L. Deng, D. Yu, et al. Deep learning: methods and applications. Foundations and Trends in Signal Processing, 7(3–4):197–387, 2014.
 [14] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, Feb 2016.
 [15] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process., 15(12):3736–3745, 2006.
 [16] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
 [17] K. Fan, Q. Wei, L. Carin, and K. Heller. An innerloop free solution to inverse problems using deep neural networks. In Advances in Neural Information Processing Systems, Long Beach, CA, USA, Dec. 2017.
 [18] M. A. Figueiredo, R. D. Nowak, and S. J. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Topics Signal Process., 1(4):586–597, 2007.
 [19] L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
 [20] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 [21] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML10), pages 399–406, 2010.
 [22] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Int. Conf. Comp. Vision and Pattern Recognition (CVPR), June 2016.
 [23] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
 [24] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser. Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Process., 26(9):4509–4522, Sept 2017.
 [25] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
 [26] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, pages 1558–1566. JMLR.org, 2016.
 [27] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In Proc. European Conf. Comp. Vision (ECCV), pages 577–593. Springer, 2016.
 [28] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
 [29] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proc. IEEE Int. Conf. Comp. Vision (ICCV), pages 3730–3738, 2015.
 [30] L. Loncan, L. B. Almeida, J. M. Bioucas-Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Liao, G. Licciardi, M. Simoes, J.-Y. Tourneret, M. Veganzones, G. Vivone, Q. Wei, and N. Yokoya. Hyperspectral pansharpening: a review. IEEE Geosci. Remote Sens. Mag., 3(3):27–46, Sept. 2015.
 [31] S. Lu, M. Hong, and Z. Wang. A non-convex splitting method for symmetric nonnegative matrix factorization: Convergence analysis and optimality. IEEE Transactions on Signal Processing, 65(12):3120–3135, June 2017.
 [32] Y. Lu, M. Inamura, and M. del Carmen Valdes. Super-resolution of the undersampled and subpixel shifted image sequence by a neural network. International Journal of Imaging Systems and Technology, 14(1):8–15, 2004.
 [33] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
 [34] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
 [35] J. Schlemper, J. Caballero, J. V. Hajnal, A. Price, and D. Rueckert. A deep cascade of convolutional neural networks for MR image reconstruction. arXiv preprint arXiv:1703.00555, 2017.
 [36] S. Shankar, D. Robertson, Y. Ioannou, A. Criminisi, and R. Cipolla. Refining architectures of deep convolutional neural networks. In Proc. IEEE Int. Conf. Comp. Vision and Pattern Recognition (CVPR), pages 2212–2220, Las Vegas, NV, USA, 2016.
 [37] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. IEEE Int. Conf. Comp. Vision and Pattern Recognition (CVPR), pages 1874–1883, 2016.
 [38] M. Simoes, J. Bioucas-Dias, L. Almeida, and J. Chanussot. A convex formulation for hyperspectral image super-resolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens., 53(6):3373–3388, Jun. 2015.
 [39] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
 [40] C. K. Sønderby, J. Caballero, L. Theis, W. Shi, and F. Huszár. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.
 [41] A. Tarantola. Inverse problem theory and methods for model parameter estimation. SIAM, 2005.
 [42] A. Tikhonov and V. Arsenin. Solutions of ill-posed problems. Scripta series in mathematics. Winston, 1977.
 [43] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proc. Int. Conf. Machine Learning (ICML), pages 1096–1103. ACM, 2008.
 [44] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
 [45] Q. Wei, N. Dobigeon, and J.-Y. Tourneret. Bayesian fusion of multi-band images. IEEE J. Sel. Topics Signal Process., 9(6):1117–1127, Sept. 2015.
 [46] L. Xu and J. Jia. Two-phase kernel estimation for robust motion deblurring. In European Conference on Computer Vision, pages 157–170. Springer, 2010.
 [47] J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. IEEE Trans. Image Process., 19(11):2861–2873, 2010.
 [48] N. Zhao, Q. Wei, A. Basarab, N. Dobigeon, D. Kouamé, and J.-Y. Tourneret. Fast single image super-resolution using a new analytical solution for $\ell_2$–$\ell_2$ problems. IEEE Trans. Image Process., 25(8):3683–3697, Aug. 2016.
Appendix A Training details
A.1 Adversarial training algorithm
We define a soft GAN loss of the following form:

$\ell_{\mathrm{soft}}(\hat{\mathbf{x}}, s) = -s \log D(\hat{\mathbf{x}}) - (1-s) \log\big(1 - D(\hat{\mathbf{x}})\big),$  (9)

where $s \in [0, 1]$ is the soft label for data $\hat{\mathbf{x}}$. The explicit forms of the discriminator (D) loss and the generator (G) loss are

$\mathcal{L}_D = \ell_{\mathrm{soft}}(\mathbf{x}, s_{\mathrm{real}}) + \ell_{\mathrm{soft}}(\hat{\mathbf{x}}_U, s_{\mathrm{fake}}) + \ell_{\mathrm{soft}}(\hat{\mathbf{x}}_{\mathrm{DAE}}, s_{\mathrm{fake}}),$  (10)

$\mathcal{L}_G = \lambda_1 \|\mathbf{x} - \hat{\mathbf{x}}_{\mathrm{DAE}}\|_2^2 + \lambda_2\, \ell_{\mathrm{soft}}(\hat{\mathbf{x}}_{\mathrm{DAE}}, s_{\mathrm{real}}) + \lambda_3\, \mathcal{L}_{\mathrm{comp}},$  (11)

$\mathcal{L}_{\mathrm{comp}} = \|C(\mathbf{x}) - C(\hat{\mathbf{x}}_{\mathrm{DAE}})\|_2^2,$  (12)
where $\hat{\mathbf{x}}_U$ is the output of the U-Net, $\hat{\mathbf{x}}_{\mathrm{DAE}}$ is the output of the DAE, and $C(\cdot)$ is the comparator output; in our experiment we used the FC6 layer of AlexNet as the comparator. The relative weights of these loss terms are tunable hyperparameters in our experiment. Experimentally, this soft loss can alleviate the vulnerability of neural networks to adversarial examples.
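As a concrete illustration, the soft-label cross-entropy used in the GAN loss above can be sketched in a few lines of NumPy (the function name and the example values below are ours, for illustration only):

```python
import numpy as np

def soft_gan_loss(d_out, s):
    """Soft-label binary cross-entropy: s is a label in [0, 1]
    rather than a hard 0/1 target."""
    d_out = np.clip(d_out, 1e-7, 1 - 1e-7)  # guard against log(0)
    return -s * np.log(d_out) - (1 - s) * np.log(1 - d_out)

# With a hard label s = 1 the loss reduces to -log D(x); with a soft
# label such as s = 0.9 the loss is minimized at D(x) = 0.9 instead
# of pushing the discriminator output all the way to 1.
hard = soft_gan_loss(0.8, 1.0)
soft = soft_gan_loss(0.8, 0.9)
```

The soft labels keep the discriminator from becoming overconfident, which is one intuition behind the robustness claim in the text.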
Instead of pre-training the generator as in SRGAN, we jointly trained the whole model, i.e., the generator and the discriminator simultaneously. The detailed optimization updates for adversarial training are summarized in Algorithm 1. The same settings were used in all experiments, and the Adam optimizer was applied in the gradient descent for all parameter updates, with its learning rate and decay parameters fixed across experiments. In addition, the gradient of each net is clipped to norm 5. The batch size in each iteration is 36 for all experiments. For batch normalization, the momentum is 0.9.
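The per-network gradient clipping mentioned above can be sketched as clipping by global norm; a minimal NumPy version (the threshold 5 matches the text, the helper name and example gradients are ours):

```python
import numpy as np

def clip_by_norm(grads, max_norm=5.0):
    """Scale a list of gradient arrays so that their global L2 norm
    does not exceed max_norm; a no-op when the norm is already smaller."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]

# Two parameter tensors whose combined norm exceeds the threshold.
grads = [np.full((3, 3), 4.0), np.full((2,), 3.0)]
clipped = clip_by_norm(grads, max_norm=5.0)
```

Clipping by global norm preserves the direction of the update while bounding its magnitude, which helps stabilize adversarial training.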
A.2 Model structure
In Table 3, we describe the model structure of the U-Net. The structure of the DAE is summarized in Table 4. Apart from the pixel shuffling layers, all the other layers preserve the spatial size, i.e., the height and width of the input and output images are the same.
Layer Name | Dimension | Layer Operations
data | | Conv() 'SAME'
e1 | | Leaky ReLU → Conv() 'SAME' → Batch_Norm
e2 | | Leaky ReLU → Conv() 'SAME' → Batch_Norm
e3 | | Leaky ReLU → Conv() 'SAME' → Batch_Norm
e4 | | Leaky ReLU → Conv() 'SAME' → Batch_Norm
e5 | | Leaky ReLU → Conv() 'SAME' → Batch_Norm
e6 | | Leaky ReLU → Conv() 'SAME' → Batch_Norm
e7 | | Leaky ReLU → Conv() 'SAME' → Batch_Norm
e8 | | ReLU → Conv_Trans() 'SAME' → Batch_Norm → Concat(e7)
d1 | | ReLU → Conv_Trans() 'SAME' → Batch_Norm → Concat(e6)
d2 | | ReLU → Conv_Trans() 'SAME' → Batch_Norm → Concat(e5)
d3 | | ReLU → Conv_Trans() 'SAME' → Batch_Norm → Concat(e4)
d4 | | ReLU → Conv_Trans() 'SAME' → Batch_Norm → Concat(e3)
d5 | | ReLU → Conv_Trans() 'SAME' → Batch_Norm → Concat(e2)
d6 | | ReLU → Conv_Trans() 'SAME' → Batch_Norm → Concat(e1)
d7 | | ReLU → Conv_Trans() 'SAME' → Tanh
d8 | |
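The Concat(·) entries in Table 3 denote U-Net skip connections: each decoder activation is concatenated with the mirrored encoder activation along the channel axis before the next layer. A minimal NHWC sketch (shapes here are illustrative, not the ones from the table):

```python
import numpy as np

def concat_skip(decoder_act, encoder_act):
    """Concatenate a decoder activation with the mirrored encoder
    activation along the channel (last) axis, as in U-Net."""
    assert decoder_act.shape[:3] == encoder_act.shape[:3]
    return np.concatenate([decoder_act, encoder_act], axis=-1)

d = np.zeros((1, 16, 16, 64))   # decoder feature map (NHWC)
e = np.zeros((1, 16, 16, 64))   # matching encoder feature map
out = concat_skip(d, e)         # channel count doubles to 128
```

The skip connections let the decoder reuse high-resolution detail from the encoder, which is why the channel count doubles after every Concat row in the table.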
Input Dimension | Layer | Output Dimension
| periodic pixel shuffling |
| Conv() 'SAME' → Batch_Norm → ReLU |
| Conv() 'SAME' → Batch_Norm → ReLU |
| Conv() 'SAME' → Batch_Norm → ReLU |
| Concatenate in channel |
| Conv() 'SAME' → Batch_Norm → ReLU |
| Conv() 'SAME' → Batch_Norm → ReLU |
| Conv() 'SAME' → Batch_Norm → ReLU |
| periodic pixel shuffling |
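The periodic pixel shuffling layers in Table 4 rearrange pixels between the spatial and channel dimensions, in the spirit of the sub-pixel convolution of [37]. A NumPy sketch of the two directions for an NHWC tensor and shuffling factor r (our own minimal implementation, not the authors' code):

```python
import numpy as np

def space_to_depth(x, r):
    """(N, H, W, C) -> (N, H//r, W//r, C*r*r): gather each r x r
    spatial block of pixels into the channel dimension."""
    n, h, w, c = x.shape
    x = x.reshape(n, h // r, r, w // r, r, c)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(n, h // r, w // r, c * r * r)

def depth_to_space(x, r):
    """Inverse of space_to_depth: (N, H, W, C*r*r) -> (N, H*r, W*r, C)."""
    n, h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(n, h, w, r, r, c)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(n, h * r, w * r, c)

x = np.arange(16.0).reshape(1, 4, 4, 1)
y = space_to_depth(x, 2)  # shape (1, 2, 2, 4)
```

The two operations are exact inverses, so the DAE can shuffle pixels into channels at its input and shuffle them back at its output without losing information.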
Appendix B More results for motion deblurring
We tested our motion deblurring model on another natural image, as shown in Fig. 8. The model was trained on patches from ImageNet and tested on a Wonder Woman poster image. We also compared it with the blind neural motion deblurring algorithm [6] (we used the code released by the authors at https://github.com/ayanc/ndeblur). As shown in Fig. 8, the deblurred image produced by InverseNet is much clearer and closer to the original image than the neural deblurred one. Visually, the image restored by InverseNet is almost identical to the original one, as displayed in Fig. 9.
Appendix C More results for ring-artifact removal in super-resolution
We include the results of SRGAN [28] for comparison, as shown in Fig. 10. For SRGAN, except for the degradation kernel, we follow the training procedure in the paper by first pre-training the generator and then fine-tuning the model with the discriminator included. Since the authors did not release their code, this implementation is our own, and we tuned the hyperparameters to the best performance we could obtain. As shown in Fig. 10, the super-resolved result of SRGAN did not remove the ring artifact, although it sharpened some details of the images. In contrast, InverseNet successfully removed the ring artifact while preserving the details.
Appendix D Visualization of degradation kernels
D.1 Motion deblurring
The degradation matrix $A$ in motion deblurring is a square matrix corresponding to a 2D convolution. If the convolution is implemented with periodic boundary conditions, i.e., the pixels outside an image are padded with a periodic extension of the image itself, the matrix $A$ is a block circulant matrix with circulant blocks (BCCB). The first row of the matrix is the (vectorized) motion blurring convolution kernel, and each subsequent row is the preceding row rotated one element to the right.
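A small NumPy sketch of this structure: build a circulant operator from its first row by successive right rotations, and check that applying it realizes periodic ("wrap") filtering. For brevity the sketch is 1D; the BCCB case nests the same construction over blocks (the helper name and toy kernel are ours):

```python
import numpy as np

def circulant_from_first_row(first_row):
    """Each row is the previous row rotated one element to the right."""
    n = len(first_row)
    return np.stack([np.roll(first_row, k) for k in range(n)])

# First row: a length-3 blur kernel zero-padded to the signal length.
kernel = np.array([0.5, 0.25, 0.25])
n = 6
first_row = np.zeros(n)
first_row[:3] = kernel
C = circulant_from_first_row(first_row)

x = np.arange(n, dtype=float)
y = C @ x  # apply the periodic blur operator to the signal
```

Because the matrix is fully determined by its first row, it is diagonalized by the DFT, which is what makes periodic deconvolution cheap in the Fourier domain.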
The motion blurring convolution kernel is displayed below.
D.2 Super-resolution
The degradation matrix $A$ in super-resolution can be decomposed as the product of a convolution matrix $H$ (as in motion blurring) and a downsampling matrix $S$, i.e., $A = SH$. The downsampling matrix $S$ represents regular 2D decimation, keeping one pixel out of every $d$ pixels in both the horizontal and vertical directions, where $d$ is the sampling ratio. The matrix $S$ is equivalent to the strided convolution widely used in CNN architectures. The convolution kernel used in our experiment is displayed below. The downsampling ratio $d$ is fixed to 4 in the experiments, corresponding to shrinking each spatial dimension of an image by a factor of 4.
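The decimation can be sketched either as strided slicing or as an explicit selection matrix acting on the vectorized image; the two views agree (a toy NumPy sketch with sampling ratio 4, as in the experiments; the variable names are ours):

```python
import numpy as np

d = 4                        # sampling ratio
h = w = 8                    # toy image size (a multiple of d)
x = np.arange(h * w, dtype=float).reshape(h, w)

# Strided-slicing view of S: keep one pixel out of every d pixels.
y_slice = x[::d, ::d]

# Explicit selection-matrix view of S acting on the vectorized image.
keep = [i * w + j for i in range(0, h, d) for j in range(0, w, d)]
S = np.zeros((len(keep), h * w))
S[np.arange(len(keep)), keep] = 1.0
y_matrix = (S @ x.ravel()).reshape(h // d, w // d)
```

The matrix view makes the ill-posedness concrete: S has far fewer rows than columns, so it has a large null space, and recovering x from Sx alone is underdetermined.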
D.3 Joint super-resolution and colorization
The degradation matrix $A$ in joint super-resolution and colorization can be decomposed as the product of a convolution matrix $H$, a downsampling matrix $S$, and a spectral degradation matrix $F$. The matrices $H$ and $S$ are defined as in Section D.2. The main difference here is that the kernel of $H$ is channel-wise, in the sense that the convolution kernel for each channel is different, as shown in Fig. 13. The role of the spectral degradation $F$ is to average the R, G, and B channels to obtain a one-channel grayscale image.
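The channel-averaging spectral degradation can be sketched as a block matrix acting on the stacked, vectorized R, G, B channels (a toy NumPy sketch; the construction and names are ours):

```python
import numpy as np

h = w = 2
rgb = np.random.default_rng(0).random((h, w, 3))

# The spectral degradation averages the three channels:
# for each pixel, gray = (R + G + B) / 3.  As a matrix it is
# three scaled identity blocks side by side.
p = h * w
F = np.hstack([np.eye(p) / 3.0, np.eye(p) / 3.0, np.eye(p) / 3.0])

# Stack the vectorized channels and apply the matrix.
stacked = np.concatenate([rgb[..., c].ravel() for c in range(3)])
gray = (F @ stacked).reshape(h, w)
```

Written this way, the full degradation is a single tall-to-short linear map, which is the form the splitting-based solver operates on.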