
A Generalization of Wirtinger Flow for Exact Interferometric Inversion
Interferometric inversion involves recovery of a signal from crosscorre...
01/13/2019 ∙ by Bariscan Yonel, et al.

Stochastic seismic waveform inversion using generative adversarial networks as a geological prior
We present an application of deep generative models in the context of pa...
06/10/2018 ∙ by Lukas Mosser, et al.

Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
Deep learning models are often successfully trained using gradient desce...
02/26/2017 ∙ by Alon Brutzkus, et al.

Uniform Convergence of Gradients for Non-Convex Learning and Optimization
We investigate 1) the rate at which refined properties of the empirical ...
10/25/2018 ∙ by Dylan J. Foster, et al.

A Factorial Mixture Prior for Compositional Deep Generative Models
We assume that a high-dimensional datum, like an image, is a composition...
12/18/2018 ∙ by Ulrich Paquet, et al.

Fast Algorithms for Robust PCA via Gradient Descent
We consider the problem of Robust PCA in the fully and partially observe...
05/25/2016 ∙ by Xinyang Yi, et al.

Fast Approximate Geodesics for Deep Generative Models
The length of the geodesic between two data points along the Riemannian ...
12/19/2018 ∙ by Nutan Chen, et al.
Inverting Deep Generative Models, One Layer at a Time
We study the problem of inverting a deep generative model with ReLU activations. Inversion corresponds to finding a latent code vector that explains the observed measurements as well as possible. Most prior work attempts this by solving a non-convex optimization problem involving the generator. In this paper we obtain several novel theoretical results for the inversion problem. We show that in the realizable case, single-layer inversion can be performed exactly in polynomial time by solving a linear program. Further, we show that for multiple layers, inversion is NP-hard and the preimage set can be non-convex. For generative models of arbitrary depth, we show that exact recovery is possible in polynomial time with high probability, provided the layers are expanding and the weights are randomly selected. Very recent work analyzed the same problem for gradient descent inversion; that analysis requires significantly higher expansion (logarithmic in the latent dimension), while our proposed algorithm provably reconstructs even with constant-factor expansion. We also provide provable error bounds in different norms for reconstructing noisy observations. Our empirical validation demonstrates that we obtain better reconstructions when the latent dimension is large.
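The single-layer result rests on a simple observation: for y = ReLU(Wx + b), the active coordinates (y_i > 0) impose linear equalities w_i·x + b_i = y_i, while the inactive ones (y_i = 0) impose linear inequalities w_i·x + b_i ≤ 0, so the preimage is a polytope and a feasible point can be found by linear programming. The sketch below illustrates this idea in the realizable case; the function name `invert_relu_layer` and the use of `scipy.optimize.linprog` as the LP solver are our own choices for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog


def relu(v):
    return np.maximum(v, 0.0)


def invert_relu_layer(W, b, y):
    """Find some x with relu(W @ x + b) == y via a feasibility LP.

    Active coordinates (y_i > 0): equality  W_i x + b_i = y_i.
    Inactive coordinates (y_i = 0): inequality  W_i x + b_i <= 0.
    """
    active = y > 0
    A_eq, b_eq = W[active], y[active] - b[active]
    A_ub, b_ub = W[~active], -b[~active]
    res = linprog(
        c=np.zeros(W.shape[1]),              # pure feasibility: zero objective
        A_ub=A_ub if A_ub.size else None,
        b_ub=b_ub if b_ub.size else None,
        A_eq=A_eq if A_eq.size else None,
        b_eq=b_eq if b_eq.size else None,
        bounds=[(None, None)] * W.shape[1],  # x is unconstrained in sign
    )
    return res.x if res.success else None


# Toy example: an expanding random layer (m >> k) and a realizable y.
rng = np.random.default_rng(0)
k, m = 5, 40
W = rng.standard_normal((m, k))
b = rng.standard_normal(m)
x_true = rng.standard_normal(k)
y = relu(W @ x_true + b)

x_hat = invert_relu_layer(W, b, y)
```

With a random expanding layer, the active equalities alone typically overdetermine x, so the recovered `x_hat` reproduces the observation exactly up to solver tolerance; this mirrors, in the single-layer case, the paper's expansion-based exact-recovery regime.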