    # Algorithmic Aspects of Inverse Problems Using Generative Models

The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by generative adversarial networks, or GANs). In this work, we study the algorithmic aspects of such a learning-based approach from a theoretical perspective. For certain generative network architectures, we establish a simple non-convex algorithmic approach that (a) theoretically enjoys linear convergence guarantees for certain inverse problems, and (b) empirically improves upon conventional techniques such as back-propagation. We also propose an extension of our approach that can handle model mismatch (i.e., situations where the generative network prior is not exactly applicable.) Together, our contributions serve as building blocks towards a more complete algorithmic understanding of generative models in inverse problems.


## I Introduction

### I-A Motivation

Inverse problems arise in a diverse range of application domains including computational imaging, optics, astrophysics, and seismic geo-exploration. In each of these applications, there is a target signal or image (or some other quantity of interest) to be obtained; a device (or some other physical process) records measurements of the target; and the goal is to reconstruct an estimate of the signal from the observations.

Let us suppose that x ∈ R^n denotes the signal of interest and y ∈ R^m denotes the observed measurements. When m < n, the inverse problem is ill-posed, and some kind of prior (or regularizer) is necessary to obtain a meaningful solution. A common technique used to solve ill-posed inverse problems is to solve a constrained optimization problem:

$$\hat{x} = \arg\min_x F(x) \quad \text{s.t.}\quad x \in S, \tag{1}$$

where F is an objective function that typically depends on y and the measurement process, and S captures some sort of structure that x is assumed to obey.

A very common modeling assumption, particularly in signal and image processing applications, is sparsity, wherein S is the set of vectors that are sparse in some (known) basis representation. The now-popular framework of compressive sensing studies the special case where the forward measurement operator can be modeled as a linear operator that satisfies certain (restricted) stability properties; when this is the case, accurate estimation of x can be performed provided the signal is sufficiently sparse [1]. Parallel to the development of algorithms that leverage sparsity priors, the last decade has witnessed analogous approaches for other families of structural constraints. These include structured sparsity [2, 3], union-of-subspaces models [4], dictionary models [5, 6], total variation models [7], and analytical transforms [8], among many others.

Lately, there has been renewed interest in prior models that are parametrically defined in terms of a deep neural network. We call these generative network models. Specifically, we define

$$S = \{x \in \mathbb{R}^n \;|\; x = G(z),\ z \in \mathbb{R}^k\},$$

where z is a k-dimensional latent parameter vector and G is parameterized by the weights and biases of a d-layer neural network. One way to obtain such a model is to train a generative adversarial network (GAN) [9]. A well-trained GAN closely captures the notion of a signal (or image) being ‘natural’, leading many to speculate that the range of such generative models can approximate a low-dimensional manifold containing naturally occurring images. Indeed, GAN-based neural network learning algorithms have been successfully employed to solve linear inverse problems such as image super-resolution and inpainting [11, 12]. However, most of these approaches are heuristic, and a general theoretical framework for analyzing the performance of such approaches is not available at the moment.
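To make the set S concrete, here is a minimal sketch of a two-layer generative model in Python. The weights are random stand-ins for a trained generator, and all dimensions are illustrative assumptions rather than anything prescribed by the paper.

```python
import numpy as np

# A minimal two-layer "generator" G : R^k -> R^n with random stand-in
# weights (a real G would come from GAN training). The prior set is its
# range, S = { G(z) : z in R^k }. All dimensions are illustrative.
rng = np.random.default_rng(0)
k, n, hidden = 4, 16, 32     # latent dim, signal dim, hidden width

W1 = rng.standard_normal((hidden, k))
W2 = rng.standard_normal((n, hidden))

def G(z):
    """Two affine layers with a ReLU nonlinearity in between."""
    return W2 @ np.maximum(W1 @ z, 0.0)

# Every member of S is indexed by a k-dimensional latent code z.
z = rng.standard_normal(k)
x = G(z)                     # a point of S, living in R^n
```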

### I-B Contributions

Our focus in this paper is to take some initial steps into building such a theoretical framework. Specifically, we wish to understand the algorithmic costs involving in using generative network models for inverse problems: how computationally challenging they are, whether they provably succeed, and how to make such models robust.

The starting point of our work is the recent, seminal paper of Bora et al. [13], who study the benefits of using generative models in the context of compressive sensing. In that paper, the authors pose the estimated target as the solution to a non-convex optimization problem and establish upper bounds on the statistical complexity of obtaining a "good enough" solution. Specifically, they prove that if the generative network is a mapping simulated by a d-layer neural network with width n and with activation functions obeying certain properties, then O(kd log n) random observations are sufficient to obtain a good enough reconstruction estimate. However, the authors do not study the algorithmic costs of solving the optimization problem, and standard results in non-convex optimization are sufficient to only obtain sublinear convergence rates. In earlier work [14], we established an algorithm with linear convergence rates for the same (compressive sensing) setup, and demonstrated its empirical benefits.

However, the earlier work [14] only provided an algorithm (and analysis) for linear inverse problems. In this work, we generalize this to a much wider range of nonlinear inverse problems. Using standard techniques, we propose a generic algorithm for solving (1), analyze its performance, and prove that it demonstrates linear convergence. This constitutes Contribution I of this paper.

A drawback of [13] (and our subsequent work [14]) is the inability to deal with targets that are outside the range of the generative network model. This is not merely an artifact of their analysis; generative networks are rigid in the sense that once they are learned, they are incapable of reproducing any target outside their range. (This is in contrast with other popular parametric models such as sparsity models; these exhibit a "graceful decay" property in the sense that if the sparsity parameter is large enough, such models capture all possible points in the target space.) This issue is addressed, and empirically resolved, in the recent work of Dhar et al. [15], who propose a hybrid model combining both generative networks and sparsity. This leads to a non-convex optimization framework (called SparseGen) which the authors theoretically analyze to obtain analogous statistical complexity results. However, here too, the theoretical contribution is primarily statistical, and the algorithmic aspects of their setup are not discussed.

We address this gap, and propose an alternative algorithm for this framework. Our algorithm is new, and is a nonlinear extension of our previous work [16, 17]. Under (fairly) standard assumptions, this algorithm also can be shown to demonstrate linear convergence. This constitutes Contribution II of this paper.

In summary: we complement the works [13] and [15] by providing algorithmic upper bounds for the corresponding problems that are studied in those works. Together, our contributions serve as further building blocks towards an algorithmic theory of generative models in inverse problems.

### I-C Techniques

At a high level, our algorithms are standard. The primary novelty is in their applications to generative network models, and some aspects of their theoretical analysis.

Suppose that G is the generative network model under consideration. The cornerstone of our analysis is the assumption of an ε-approximate (Euclidean) projection oracle onto the range of G. In words, we pre-suppose the availability of a computational routine that, given any vector w ∈ R^n, can return a vector in the range of G that approximately minimizes the distance ∥w − G(z)∥ over all latent codes z. The availability of this oracle, of course, depends on G, and we comment on how to construct such oracles below in Section IV.
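One natural heuristic instantiation of such an oracle is gradient descent in the latent space. The sketch below (an assumption-laden stand-in, not a construction with guarantees: the two-layer tanh generator, random weights, step size, and iteration budget are all illustrative) approximately minimizes ∥w − G(z)∥² over z.

```python
import numpy as np

# Heuristic epsilon-approximate projection onto Range(G), implemented as
# gradient descent in latent space: approximately minimize
# 0.5 * ||w - G(z)||^2 over z. All choices here are illustrative.
rng = np.random.default_rng(1)
k, n, hidden = 3, 12, 24
W1 = rng.standard_normal((hidden, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, hidden)) / np.sqrt(hidden)

def G(z):
    return W2 @ np.tanh(W1 @ z)

def project(w, steps=2000, lr=0.005):
    """Approximate P_G(w) by latent-space gradient descent from z = 0."""
    z = np.zeros(k)
    for _ in range(steps):
        a = np.tanh(W1 @ z)
        r = W2 @ a - w                             # residual G(z) - w
        dz = W1.T @ ((1.0 - a ** 2) * (W2.T @ r))  # backprop through tanh
        z -= lr * dz
    return G(z)

w = rng.standard_normal(n)
x_hat = project(w)   # a point of Range(G) approximately closest to w
```

Since the objective is non-convex in z, this only finds a local minimizer; in the analysis we simply assume the oracle's ε-accuracy.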

The first algorithm (for solving (1)) is a straightforward application of projected gradient descent, and is a direct nonlinear generalization of the approach proposed in [14]. The main difficulty is in analyzing the algorithm and proving linear convergence. To show this, we assume that the objective function in (1) obeys the Restricted Strong Convexity/Smoothness assumptions [18, 19]. With this assumption, the proof of convergence follows from a straightforward modification of the proof given in [14].

The second algorithm (for handling model mismatch in the target) is new. The main idea (following the lead of [15]) is to pose the target as the superposition of two components: x = G(z) + v, where v can be viewed as an "innovation" term that is l-sparse in some fixed, known basis B. The goal is now to recover both G(z) and v. This is reminiscent of the problem of source separation or signal demixing [20], and in our previous work [17, 21] we proposed greedy iterative algorithms for solving such demixing problems. We extend this work by providing a nonlinear extension, together with a new analysis, of the algorithm proposed in [16].

## II Background and Related Work

### II-A Inverse problems

The study of solving inverse problems has a long history. As discussed above, the general approach to solving an ill-posed inverse problem of the form depicted in Figure 1 is to assume that the target signal/image obeys a prior. Older works mainly used hand-crafted signal priors; for example, [22, 23, 24] employ sparsity priors, and applied them to linear inverse problems such as super-resolution, denoising, compressive sensing, and missing entry interpolation.

### II-B Neural network models

The last few years have witnessed the emergence of trained neural networks for solving such problems. The main idea is to eschew hand-crafting any priors, and instead learn an end-to-end mapping from the measurement space to the image space. This mapping is simulated via a deep neural network, whose weights are learned from a large dataset of input-output training examples [25]. The works [26, 27, 28, 29, 30, 31, 32] have used this approach to solve several types of (linear) inverse problems, and it has met with considerable success. However, the major limitation is that a new network has to be trained for each new linear inverse problem; moreover, most of these methods lack concrete theoretical guarantees. An exception to this line of work is the powerful framework of [33], which does not require retraining for each new problem; however, this too is not accompanied by a theoretical analysis of statistical and computational costs.

### II-C Generative networks

A special class of neural networks that attempt to directly model the distribution of the input training samples are known as generative adversarial networks, or GANs [9]. GANs have been shown to provide visually striking results [34, 35, 10, 36]. The use of GANs to solve linear inverse problems was advocated in [13]. Specifically, given (noisy) linear observations y = Ax + e of a signal x, assuming that x belongs to the range of a generative network G, this approach constructs the reconstructed estimate as follows:

$$\hat{z} = \arg\min_{z \in \mathbb{R}^k} \|y - A\,G(z)\|_2^2, \qquad \hat{x} = G(\hat{z}).$$

If the observation matrix A comprises i.i.d. Gaussian measurements, then together with regularity assumptions on the generative network, they prove that the solution x̂ satisfies:

$$\|x - \hat{x}\|_2 \le C \|e\|_2,$$

for some constant C that can be reliably upper-bounded. In particular, in the absence of noise the recovery of x is exact. However, there is no discussion of how computationally expensive this procedure is. Observe that the above minimization is highly non-convex (since for any reasonable neural network, the map z ↦ G(z) is a non-convex function) and possibly also non-smooth (if the neural network contains non-smooth activation functions, such as rectified linear units, or ReLUs). More recently, the authors of [37] improve upon the approach of [13] for solving more general nonlinear inverse problems (in particular, any inverse problem which has a computable derivative). Their approach involves simultaneously solving the inverse problem and training the network parameters; however, the treatment there is mostly empirical and a theoretical analysis is not provided.
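In practice, the non-convex minimization over z above is typically attacked with plain gradient descent anyway. The following toy sketch is our own illustration (not the code of [13]); the generator weights, the measurement matrix, the step size, and the iteration count are all assumptions made for the example.

```python
import numpy as np

# A toy sketch of the estimator z_hat = argmin_z ||y - A G(z)||^2,
# attacked by plain gradient descent in the latent space.
rng = np.random.default_rng(3)
k, n, m, hidden = 3, 12, 8, 24
W1 = rng.standard_normal((hidden, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, hidden)) / np.sqrt(hidden)
A = rng.standard_normal((m, n)) / np.sqrt(m)

def G(z):
    return W2 @ np.tanh(W1 @ z)

z_true = rng.standard_normal(k)
y = A @ G(z_true)            # noiseless measurements of a signal in Range(G)

z = np.zeros(k)
for _ in range(10000):
    a = np.tanh(W1 @ z)
    r = A @ (W2 @ a) - y     # measurement residual A G(z) - y
    dz = W1.T @ ((1.0 - a ** 2) * (W2.T @ (A.T @ r)))
    z -= 0.001 * dz          # small, conservative step size
x_hat = G(z)                 # reconstructed estimate
```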

Under similar statistical assumptions as [13], the work of [14] provably establishes a linear convergence rate, provided that a projection oracle (onto the range of G) is available, but only for the special case of compressive sensing. Our first main result (Contribution I) extends this algorithm (and analysis) to more general nonlinear inverse problems.

### II-D Model mismatch

A limitation of most generative network models is that they can only reproduce estimates that are within their range; adding more observations or tweaking algorithmic parameters is completely ineffective if a generative network model is presented with a target that is far away from the range of the model. To resolve this type of model mismatch, the authors of [15] propose to model the signal as the superposition of two components: a "base" signal G(z), and an "innovation" signal v, where B is a known ortho-basis and B^T v is an l-sparse vector. In the context of compressive sensing, the authors of [15] solve a sparsity-regularized loss minimization problem:

$$(\hat{z}, \hat{v}) = \arg\min_{z, v} \|B^T v\|_1 + \lambda \|y - A(G(z) + v)\|_2^2,$$

and prove that the reconstructed estimate x̂ = G(ẑ) + v̂ is close enough to x provided sufficiently many measurements are available. However, as before, the algorithmic costs of solving the above problem are not discussed. Our second main result (Contribution II) proposes a new algorithm for dealing with model mismatch in generative network modeling, together with an analysis of its convergence and iteration complexity.

## III Main Algorithm and Analysis

Let us first establish some notational conventions. Below, ∥·∥ denotes the Euclidean norm unless explicitly specified. We use big-O notation in several places in order to avoid duplication of constants.

We use F : R^n → R to denote a (scalar) objective function. We assume that F has a continuous gradient ∇F that can be evaluated at any point x ∈ R^n.

### III-A Definitions and assumptions

We now present certain definitions that will be useful for our algorithms and analysis.

###### Definition 1 (Approximate projection)

A function P_G : R^n → Range(G) is an ε-approximate projection oracle if, for all x ∈ R^n, P_G(x) obeys:

$$\|x - P_G(x)\|_2^2 \le \min_{z \in \mathbb{R}^k} \|x - G(z)\|_2^2 + \varepsilon.$$

We will assume that for any given generative network of interest, such a function P_G exists and is computationally tractable. (This may be a very strong assumption, but at the moment we do not know how to relax it. Indeed, the computational complexity of our proposed algorithms is directly proportional to the complexity of such a projection oracle.) Here, ε > 0 is a parameter that is known a priori.

###### Definition 2 (Restricted Strong Convexity/Smoothness)

For a set of interest S ⊆ R^n, assume that F satisfies, for all x, y ∈ S:

$$\frac{\alpha}{2}\|x - y\|_2^2 \le F(y) - F(x) - \langle \nabla F(x), y - x \rangle \le \frac{\beta}{2}\|x - y\|_2^2,$$

for positive constants α, β.

This assumption is by now standard; see [18, 19] for in-depth discussions. It means that the objective function is strongly convex / strongly smooth along certain directions in the parameter space (in particular, those restricted to the set of interest). The parameter α is called the restricted strong convexity (RSC) constant, while the parameter β is called the restricted strong smoothness (RSS) constant. Clearly, β ≥ α. In fact, throughout the paper, we assume that β/α < 2, which is a fairly stringent assumption but again, one that we do not know at the moment how to relax.
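As a sanity check of the definition, consider the familiar least-squares objective F(x) = ½∥y − Ax∥², for which the middle quantity equals ½∥A(x′ − x)∥²; hence RSC/RSS hold (here unrestricted, over all of R^n) with α and β given by the extreme eigenvalues of AᵀA. A small numerical verification of this (with randomly generated toy data, our own illustration):

```python
import numpy as np

# For F(x) = 0.5 * ||y - A x||^2,
#   F(x2) - F(x1) - <grad F(x1), x2 - x1> = 0.5 * ||A (x2 - x1)||^2,
# so RSC/RSS hold over all of R^n with alpha = lambda_min(A^T A)
# and beta = lambda_max(A^T A). Toy check on random points:
rng = np.random.default_rng(2)
m, n = 20, 5
A = rng.standard_normal((m, n))
y = rng.standard_normal(m)

F = lambda x: 0.5 * np.sum((y - A @ x) ** 2)
gradF = lambda x: A.T @ (A @ x - y)

eigs = np.linalg.eigvalsh(A.T @ A)
alpha, beta = eigs[0], eigs[-1]

for _ in range(100):
    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    gap = F(x2) - F(x1) - gradF(x1) @ (x2 - x1)
    d2 = np.sum((x2 - x1) ** 2)
    assert 0.5 * alpha * d2 - 1e-8 <= gap <= 0.5 * beta * d2 + 1e-8
```

For a restricted set S (such as the range of a generator), α and β are instead the extreme values of the same quadratic form over difference directions within S.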

###### Definition 3 (Incoherence)

A basis B and the range of a generative network G are called μ-incoherent if, for all u, u′ ∈ Range(G) and all l-sparse (in the basis B) vectors v, v′, we have:

$$|\langle u - u', v - v' \rangle| \le \mu \|u - u'\|_2 \|v - v'\|_2,$$

for some parameter 0 < μ < 1.

###### Remark 1

In addition to the above, we will make the following assumptions in order to aid the analysis. Below, γ and Δ are positive constants.

• ∥∇F(x*)∥ ≤ γ (the gradient at the minimizer is small).

• The range of G is compact, with diameter bounded by Δ.

• γΔ = O(ε).

### III-B Contribution I: An algorithm for nonlinear inverse problems using generative networks

We now present our first main result. Recall that we wish to solve the problem:

$$\hat{x} = \arg\min F(x) \quad \text{s.t.}\quad x \in \text{Range}(G), \tag{2}$$

where G is a generative network. To do so, we employ projected gradient descent using the ε-approximate projection oracle P_G for G. The algorithm is described in Alg. 1. We obtain the following theoretical result:

###### Theorem 1

If F satisfies RSC/RSS over Range(G) with constants α and β, then ε-PGD (Alg. 1) converges linearly up to a ball of radius O(ε):

$$F(x_{t+1}) - F(x^*) \le \left(\frac{\beta}{\alpha} - 1\right)\left(F(x_t) - F(x^*)\right) + O(\varepsilon).$$

The proof is a minor modification of that in [14]. For simplicity we will assume that ∥·∥ refers to the Euclidean norm. Let us suppose that the step size η = 1/β. Define

$$z_t = x_t - \eta \nabla F(x_t).$$

Invoking the RSS property, we have:

$$\begin{aligned}
F(x_{t+1}) - F(x_t) &\le \langle \nabla F(x_t), x_{t+1} - x_t \rangle + \frac{\beta}{2}\|x_{t+1} - x_t\|^2 \\
&= \frac{1}{\eta}\langle x_t - z_t, x_{t+1} - x_t \rangle + \frac{\beta}{2}\|x_{t+1} - x_t\|^2 \\
&= \frac{\beta}{2}\left(\|x_{t+1} - x_t\|^2 + 2\langle x_t - z_t, x_{t+1} - x_t \rangle + \|x_t - z_t\|^2\right) - \frac{\beta}{2}\|x_t - z_t\|^2 \\
&= \frac{\beta}{2}\left(\|x_{t+1} - z_t\|^2 - \|x_t - z_t\|^2\right),
\end{aligned}$$

where the last few steps are consequences of straightforward algebraic manipulation.

Now, since x_{t+1} is an ε-approximate projection of z_t onto Range(G), and x^* ∈ Range(G), we have:

$$\|x_{t+1} - z_t\|^2 \le \|x^* - z_t\|^2 + \varepsilon.$$

Therefore, we get:

$$\begin{aligned}
F(x_{t+1}) - F(x_t) &\le \frac{\beta}{2}\left(\|x^* - z_t\|^2 - \|x_t - z_t\|^2\right) + \frac{\beta\varepsilon}{2} \\
&= \frac{\beta}{2}\left(\|x^* - x_t + \eta\nabla F(x_t)\|^2 - \|\eta\nabla F(x_t)\|^2\right) + \frac{\beta\varepsilon}{2} \\
&= \frac{\beta}{2}\left(\|x^* - x_t\|^2 + 2\eta\langle x^* - x_t, \nabla F(x_t)\rangle\right) + \frac{\beta\varepsilon}{2} \\
&= \frac{\beta}{2}\|x^* - x_t\|^2 + \langle x^* - x_t, \nabla F(x_t)\rangle + \frac{\beta\varepsilon}{2}.
\end{aligned}$$

However, due to RSC, we have:

$$\begin{aligned}
\frac{\alpha}{2}\|x^* - x_t\|^2 &\le F(x^*) - F(x_t) - \langle x^* - x_t, \nabla F(x_t)\rangle, \\
\text{i.e.,}\quad \langle x^* - x_t, \nabla F(x_t)\rangle &\le F(x^*) - F(x_t) - \frac{\alpha}{2}\|x^* - x_t\|^2.
\end{aligned}$$

Therefore,

$$\begin{aligned}
F(x_{t+1}) - F(x_t) &\le \frac{\beta - \alpha}{2}\|x^* - x_t\|^2 + F(x^*) - F(x_t) + \frac{\beta\varepsilon}{2} \\
&\le \frac{\beta - \alpha}{2}\cdot\frac{2}{\alpha}\left(F(x_t) - F(x^*) - \langle x_t - x^*, \nabla F(x^*)\rangle\right) + F(x^*) - F(x_t) + \frac{\beta\varepsilon}{2} \\
&\le \left(2 - \frac{\beta}{\alpha}\right)\left(F(x^*) - F(x_t)\right) + \frac{\beta - \alpha}{\alpha}\gamma\Delta + \frac{\beta\varepsilon}{2},
\end{aligned}$$

where the last inequality follows from Cauchy-Schwarz and the assumptions on ∥∇F(x^*)∥ and the diameter of Range(G). Further, by assumption, γΔ = O(ε). Rearranging terms, we get:

$$F(x_{t+1}) - F(x^*) \le \left(\frac{\beta}{\alpha} - 1\right)\left(F(x_t) - F(x^*)\right) + C\varepsilon,$$

for some constant C > 0.

This theorem asserts that the gap between the objective function at any iteration and the optimum decreases by a constant factor in every iteration. (The decay factor is β/α − 1, which by assumption is a number between 0 and 1.) Therefore, we immediately obtain linear convergence of ε-PGD up to a ball of radius O(ε):

###### Corollary 1

After T = O(log((F(x_0) − F(x^*))/ε)) iterations, F(x_T) − F(x^*) ≤ O(ε).

Therefore, the overall running time can be bounded as follows:

$$\text{Runtime} \le O\left(\left(T_{\varepsilon\text{-proj}} + T_{\nabla}\right)\cdot \log(1/\varepsilon)\right),$$

where T_{ε-proj} denotes the cost of one call to the ε-approximate projection oracle and T_∇ denotes the cost of one gradient evaluation.

See [14] for empirical evaluations of PGD applied to a linear inverse problem (compressed sensing recovery).
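For intuition, here is a self-contained sketch of the ε-PGD iteration x_{t+1} = P_G(x_t − η∇F(x_t)) on a toy problem where the "generator" is linear, so the projection is an exact orthogonal projection (i.e., ε = 0). Real generative networks require the approximate oracle discussed above; the dimensions and step-size choice here are illustrative.

```python
import numpy as np

# Sketch of the epsilon-PGD iteration from Alg. 1:
#   x_{t+1} = P_G(x_t - eta * grad F(x_t)),
# with a toy *linear* "generator" G(z) = W z, whose range is a subspace
# and whose Euclidean projection is the orthogonal projection.
rng = np.random.default_rng(4)
k, n, m = 3, 10, 6
W = rng.standard_normal((n, k))
A = rng.standard_normal((m, n)) / np.sqrt(m)

P = W @ np.linalg.pinv(W)             # orthogonal projector onto Range(W)
x_star = W @ rng.standard_normal(k)   # target lies in the range of G
y = A @ x_star                        # noiseless measurements

F = lambda x: 0.5 * np.sum((y - A @ x) ** 2)
gradF = lambda x: A.T @ (A @ x - y)

eta = 1.0 / np.linalg.eigvalsh(A.T @ A)[-1]   # step size 1/beta
x = np.zeros(n)
for _ in range(2000):
    x = P @ (x - eta * gradF(x))      # gradient step, then projection
```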

### III-C Contribution II: Addressing signal model mismatch

We now generalize the ε-PGD algorithm to handle situations involving signal model mismatch. Assume that the target signal x can be decomposed as:

$$x = G(z) + v,$$

where B^T v is l-sparse for some ortho-basis B.

For this model, we attempt to solve a (slightly) different optimization problem:

$$\hat{x} = \arg\min F(x) \quad \text{s.t.}\quad x = G(z) + v,\ \ \|B^T v\|_0 \le l. \tag{3}$$

We propose a new algorithm to solve this problem that we call Myopic ε-PGD. This algorithm is given in Alg. 2. (The algorithm is a variant of block-coordinate descent, except that the block updates share the same gradient term.)
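The shared-gradient block update can be sketched as follows on a toy instance: both blocks use the single gradient ∇F(x_t) evaluated at x_t = u_t + v_t, with u updated by projection onto Range(G) and v by hard thresholding. The linear toy generator, identity choice of B, and all dimensions are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Sketch of the Myopic epsilon-PGD update: one shared gradient per
# iteration; u is projected onto Range(G), v is hard-thresholded to
# its l largest entries (basis B taken as the identity here).
rng = np.random.default_rng(5)
k, n, m, l = 3, 12, 9, 2
W = rng.standard_normal((n, k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
P = W @ np.linalg.pinv(W)                 # projector onto Range(W)

def hard_threshold(v, l):
    """Keep the l largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-l:]
    out[idx] = v[idx]
    return out

u_star = W @ rng.standard_normal(k)       # "base" component in Range(G)
v_star = np.zeros(n); v_star[:l] = 3.0    # sparse "innovation" component
y = A @ (u_star + v_star)

gradF = lambda x: A.T @ (A @ x - y)
eta = 0.5 / np.linalg.eigvalsh(A.T @ A)[-1]

u, v = np.zeros(n), np.zeros(n)
for _ in range(500):
    g = gradF(u + v)                      # one shared gradient evaluation
    u = P @ (u - eta * g)                 # block 1: projection step
    v = hard_threshold(v - eta * g, l)    # block 2: thresholding step
x_hat = u + v
```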

###### Theorem 2

Let Q = Range(G) ⊕ {v : ∥B^T v∥_0 ≤ l} denote the Minkowski sum of the two constraint sets. If F satisfies RSC/RSS over Q with constants α and β, and if we assume μ-incoherence between Range(G) and the sparsity basis B, we have:

$$F(x_{t+1}) - F(x^*) \le \left(\frac{2 - \frac{\beta}{\alpha}\cdot\frac{1 - 2.5\mu}{1 - \mu}}{1 - \frac{\beta}{2\alpha}\cdot\frac{\mu}{1 - \mu}}\right)\left(F(x_t) - F(x^*)\right) + O(\varepsilon).$$

We will generalize the proof technique of [14]. We first define some auxiliary variables that help us with the proof. Writing the iterates as x_t = u_t + v_t, with u_t ∈ Range(G) and v_t l-sparse in B, let:

$$z_t = x_t - \eta \nabla F(x_t), \qquad z^u_t = u_t - \eta \nabla F(x_t), \qquad z^v_t = v_t - \eta \nabla F(x_t),$$

and let x^* = u^* + v^* be the minimizer that we seek. As above, by invoking RSS and with some algebra, we obtain:

$$F(x_{t+1}) - F(x^*) \le \frac{\beta}{2}\left(\|x_{t+1} - z_t\|^2 - \|x_t - z_t\|^2\right). \tag{5}$$

However, by definition,

$$x_{t+1} = u_{t+1} + v_{t+1}, \qquad x_t = u_t + v_t.$$

Therefore,

$$\begin{aligned}
\|x_{t+1} - z_t\|^2 &= \|u_{t+1} - z^u_t + v_{t+1} - z^v_t - \eta\nabla F(x_t)\|^2 \\
&= \|u_{t+1} - z^u_t\|^2 + \|v_{t+1} - z^v_t\|^2 + \|\eta\nabla F(x_t)\|^2 \\
&\quad + 2\langle u_{t+1} - z^u_t,\, v_{t+1} - z^v_t\rangle \\
&\quad - 2\langle u_{t+1} - z^u_t,\, \eta\nabla F(x_t)\rangle - 2\langle v_{t+1} - z^v_t,\, \eta\nabla F(x_t)\rangle.
\end{aligned}$$

But u_{t+1} is an ε-approximate projection of z^u_t onto Range(G), and u^* is in the range of G; hence:

$$\|u_{t+1} - z^u_t\|^2 \le \|u^* - z^u_t\|^2 + \varepsilon.$$

Similarly, since v_{t+1} is the best l-sparse (thresholded) approximation of z^v_t, and v^* is l-sparse, we have:

$$\|v_{t+1} - z^v_t\|^2 \le \|v^* - z^v_t\|^2.$$

Plugging in these two upper bounds, we get:

$$\begin{aligned}
\|x_{t+1} - z_t\|^2 &\le \|u^* - z^u_t\|^2 + \|v^* - z^v_t\|^2 + \|\eta\nabla F(x_t)\|^2 + \varepsilon \\
&\quad + 2\langle u_{t+1} - z^u_t,\, v_{t+1} - z^v_t\rangle \\
&\quad - 2\langle u_{t+1} - z^u_t,\, \eta\nabla F(x_t)\rangle - 2\langle v_{t+1} - z^v_t,\, \eta\nabla F(x_t)\rangle.
\end{aligned}$$

Expanding squares and cancelling (several) terms, the right hand side of the above inequality can be simplified to obtain:

$$\begin{aligned}
\|x_{t+1} - z_t\|^2 &\le \|u^* + v^* - z_t\|^2 + \varepsilon + 2\langle u_{t+1} - u_t, v_{t+1} - v_t\rangle - 2\langle u^* - u_t, v^* - v_t\rangle \\
&= \|x^* - z_t\|^2 + \varepsilon + 2\langle u_{t+1} - u_t, v_{t+1} - v_t\rangle - 2\langle u^* - u_t, v^* - v_t\rangle.
\end{aligned}$$

Plugging this into (5), we get:

$$F(x_{t+1}) - F(x^*) \le \underbrace{\frac{\beta}{2}\left(\|x^* - z_t\|^2 - \|x_t - z_t\|^2\right)}_{T_1} + \underbrace{\beta\left(\langle u_{t+1} - u_t, v_{t+1} - v_t\rangle - \langle u^* - u_t, v^* - v_t\rangle\right)}_{T_2} + \frac{\beta\varepsilon}{2}.$$

We already know how to bound the first term T_1, using an identical argument as in the proof of Theorem 1. We get:

$$T_1 \le \left(2 - \frac{\beta}{\alpha}\right)\left(F(x^*) - F(x_t)\right) + \frac{\beta - \alpha}{\alpha}\gamma\Delta.$$

The second term T_2 can be bounded as follows. First, observe that by incoherence:

$$\begin{aligned}
|\langle u_{t+1} - u_t, v_{t+1} - v_t\rangle| &\le \mu\|u_{t+1} - u_t\|\,\|v_{t+1} - v_t\| \\
&\le \frac{\mu}{2}\left(\|u_{t+1} - u_t\|^2 + \|v_{t+1} - v_t\|^2\right) \\
&\le \frac{\mu}{2}\|u_{t+1} + v_{t+1} - u_t - v_t\|^2 + \mu\,|\langle u_{t+1} - u_t, v_{t+1} - v_t\rangle|.
\end{aligned}$$

Rearranging gives the following inequalities:

$$\begin{aligned}
|\langle u_{t+1} - u_t, v_{t+1} - v_t\rangle| &\le \frac{\mu}{2(1-\mu)}\|x_{t+1} - x_t\|^2 \\
&\le \frac{\mu}{2(1-\mu)}\left(\|x_{t+1} - x^*\|^2 + \|x_t - x^*\|^2 + 2|\langle x_{t+1} - x^*, x_t - x^*\rangle|\right) \\
&\le \frac{\mu}{1-\mu}\left(\|x_{t+1} - x^*\|^2 + \|x_t - x^*\|^2\right).
\end{aligned}$$

Similarly,

$$\begin{aligned}
|\langle u^* - u_t, v^* - v_t\rangle| &\le \mu\|u^* - u_t\|\,\|v^* - v_t\| \\
&\le \frac{\mu}{2}\left(\|u^* - u_t\|^2 + \|v^* - v_t\|^2\right) \\
&\le \frac{\mu}{2}\|u^* + v^* - u_t - v_t\|^2 + \mu\,|\langle u^* - u_t, v^* - v_t\rangle|,
\end{aligned}$$

which gives:

$$|\langle u^* - u_t, v^* - v_t\rangle| \le \frac{\mu}{2(1-\mu)}\|x^* - x_t\|^2.$$

Combining the two bounds above, we obtain a bound on T_2 in terms of the distances of the iterates to x^*:

$$T_2 \le \frac{\beta\mu}{2(1-\mu)}\|x_{t+1} - x^*\|^2 + \frac{3\beta\mu}{2(1-\mu)}\|x_t - x^*\|^2.$$

Moreover, by invoking RSC and Cauchy-Schwarz (similar to the proof of Theorem 1), we have:

$$\begin{aligned}
\|x^* - x_t\|^2 &\le \frac{1}{\alpha}\left(F(x_t) - F(x^*)\right) + O(\varepsilon), \\
\|x^* - x_{t+1}\|^2 &\le \frac{1}{\alpha}\left(F(x_{t+1}) - F(x^*)\right) + O(\varepsilon).
\end{aligned}$$

Therefore, we obtain the following upper bound on T_2:

$$T_2 \le \frac{3\beta\mu}{2\alpha(1-\mu)}\left(F(x_t) - F(x^*)\right) + \frac{\beta\mu}{2\alpha(1-\mu)}\left(F(x_{t+1}) - F(x^*)\right) + C'\varepsilon.$$

Plugging in the upper bounds on T_1 and T_2 and re-arranging terms, we get:

$$\left(1 - \frac{\beta\mu}{2\alpha(1-\mu)}\right)\left(F(x_{t+1}) - F(x^*)\right) \le \left(2 - \frac{\beta}{\alpha} + \frac{3\beta\mu}{2\alpha(1-\mu)}\right)\left(F(x_t) - F(x^*)\right) + C'\varepsilon,$$

which leads to the desired result.

## IV Discussion

We provide some concluding remarks and potential directions for future work.

While our contributions in this paper are primarily theoretical, in recently published work [14] we have explored the practical benefits of our approach in the context of linear inverse problems such as compressive sensing. However, the algorithms proposed in this paper are generic, and can be used to solve a variety of nonlinear inverse problems. In future work, we will explore the empirical benefits for nonlinear settings, and also test the efficacy of our myopic PGD algorithm for handling model mismatch.

The main algorithmic message of this paper is that solving a variety of nonlinear inverse problems using a generative network model can be reduced to performing a sequence of ε-projections onto the range of the network model. This can be challenging in general; for most interesting generative networks, this itself is a nonconvex problem, and potentially hard. However, recent works [38, 39] have studied special cases where this type of projection is tractable; in particular, for certain neural networks satisfying certain randomness conditions, one can solve the projection problem using a variation of gradient descent (which is more or less what all approaches employ in practice). Studying the landscape of such projection problems is an interesting direction of future research.

We make several assumptions to enable our analysis. Some of them (for example, restricted strong convexity/smoothness; incoherence) are standard analysis tools and are common in the high-dimensional statistics and compressive sensing literature. However, in order to be applicable, they need to be verified for specific problems. A broader characterization of problems that do satisfy these assumptions will be of great interest.

## Acknowledgments

This project was supported in part by grants CAREER CCF-1750920 and CCF-1815101, a faculty fellowship from the Black and Veatch Foundation, and an equipment donation from the NVIDIA Corporation. The author would like to thank Ludwig Schmidt and Viraj Shah for helpful discussions.

## References

•  E. Candès et al. Compressive sampling. In Proc. of the intl. congress of math., volume 3, pages 1433–1452. Madrid, Spain, 2006.
•  R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Trans. Inform. Theory, 56(4):1982–2001, Apr. 2010.
•  C. Hegde, P. Indyk, and L. Schmidt. Fast algorithms for structured sparsity. Bulletin of the EATCS, 1(117):197–228, Oct. 2015.
•  M. Duarte, C. Hegde, V. Cevher, and R. Baraniuk. Recovery of compressible signals from unions of subspaces. In Proc. IEEE Conf. Inform. Science and Systems (CISS), March 2009.
•  M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Processing, 15(12):3736–3745, 2006.
•  M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Processing, 54(11):4311–4322, 2006.
•  T. Chan, J. Shen, and H. Zhou. Total variation wavelet inpainting. Jour. of Math. imaging and Vision, 25(1):107–125, 2006.
•  S. Ravishankar and Y. Bresler. Learning sparsifying transforms. IEEE Trans. Signal Processing, 61(5):1072–1086, 2013.
•  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proc. Adv. in Neural Processing Systems (NIPS), pages 2672–2680, 2014.
•  D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
•  R. Yeh, C. Chen, T. Lim, M. Hasegawa-Johnson, and M. Do. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539, 2016.
•  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. Proc. IEEE Conf. Comp. Vision and Pattern Recog. (CVPR), pages 105–114, 2017.
•  A. Bora, A. Jalal, E. Price, and A. Dimakis. Compressed sensing using generative models. In Proc. Int. Conf. Machine Learning, 2017.
•  V. Shah and C. Hegde. Solving linear inverse problems using gan priors: An algorithm with provable guarantees. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), Apr. 2018.
•  M. Dhar, A. Grover, and S. Ermon. Modeling sparse deviations for compressed sensing using generative models. In Proc. Int. Conf. Machine Learning, 2018.
•  C. Hegde and R. Baraniuk. SPIN: Iterative signal recovery on incoherent manifolds. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), July 2012.
•  C. Hegde and R. Baraniuk. Signal recovery on incoherent manifolds. IEEE Trans. Inform. Theory, 58(12):7204–7214, Dec. 2012.
•  G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated gaussian designs. J. Machine Learning Research, 11(Aug):2241–2259, 2010.
•  Prateek Jain and Purushottam Kar. Non-convex optimization for machine learning. Foundations and Trends® in Machine Learning, 10(3-4):142–336, 2017.
•  M. McCoy and J. Tropp. Sharp recovery bounds for convex demixing, with applications. Foundations of Comp. Math., 14(3):503–567, 2014.
•  M. Soltani and C. Hegde. Fast algorithms for demixing signals from nonlinear observations. IEEE Trans. Sig. Proc., 65(16):4209–4222, Aug. 2017.
•  D. Donoho. De-noising by soft-thresholding. IEEE Trans. Inform. Theory, 41(3):613–627, 1995.
•  Z. Xu and J. Sun. Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Processing, 19(5):1153–1165, 2010.
•  W. Dong, L. Zhang, G. Shi, and X. Wu. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Processing, 20(7):1838–1857, 2011.
•  Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
•  K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, and A. Ashok. Reconnet: Non-iterative reconstruction of images from compressively sensed measurements. In Proc. IEEE Conf. Comp. Vision and Pattern Recog. (CVPR), pages 449–458, 2016.
•  A. Mousavi, A. Patel, and R. Baraniuk. A deep learning approach to structured signal recovery. In Proc. Allerton Conf. Communication, Control, and Computing, pages 1336–1343, 2015.
•  A. Mousavi and R. Baraniuk. Learning to invert: Signal recovery via deep convolutional networks. Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), 2017.
•  L. Xu, J. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution. In Proc. Adv. in Neural Processing Systems (NIPS), pages 1790–1798, 2014.
•  C. Dong, C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Machine Intell., 38(2):295–307, 2016.
•  J. Kim, J. Kwon Lee, and K. Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proc. IEEE Conf. Comp. Vision and Pattern Recog. (CVPR), pages 1646–1654, 2016.
•  R. Yeh, C. Chen, T.-Y. Lim, A. Schwing, M. Hasegawa-Johnson, and M. Do. Semantic image inpainting with deep generative models. In Proc. IEEE Conf. Comp. Vision and Pattern Recog. (CVPR), volume 2, page 4, 2017.
•  J. Rick Chang, C. Li, B. Poczos, B. Vijaya Kumar, and A. Sankaranarayanan. One network to solve them all–solving linear inverse problems using deep projection models. In Proc. IEEE Conf. Comp. Vision and Pattern Recog. (CVPR), pages 5888–5897, 2017.
•  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
•  J.-Y. Zhu, T. Park, P. Isola, and A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE Conf. Comp. Vision and Pattern Recog. (CVPR), 2017.
•  A. Brock, J. Donahue, and K. Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
•  D. Van Veen, A. Jalal, E. Price, S. Vishwanath, and A. Dimakis. Compressed sensing with deep image prior and learned regularization. arXiv preprint arXiv:1806.06438, 2018.
•  P. Hand and V. Voroninski. Global guarantees for enforcing deep generative priors by empirical risk. arXiv preprint arXiv:1705.07576, 2017.
•  R. Heckel, W. Huang, P. Hand, and V. Voroninski. Deep denoising: Rate-optimal recovery of structured signals with a deep prior. arXiv preprint arXiv:1805.08855, 2018.