Global Guarantees for Blind Demodulation with Generative Priors

05/29/2019
by Paul Hand, et al.

We study a deep-learning-inspired formulation of the blind demodulation problem, which is the task of recovering two unknown vectors from their entrywise product. We consider the case where the unknown vectors lie in the ranges of known deep generative models G^(1): R^n → R^ℓ and G^(2): R^p → R^ℓ. When the networks corresponding to the generative models are expansive, their weight matrices are random, and the dimension ℓ of the unknown vectors satisfies ℓ = Ω(n^2 + p^2) up to log factors, we show that the empirical risk objective has a favorable landscape for optimization: the objective function has a descent direction at every point outside of a small neighborhood around four hyperbolic curves. We also characterize the local maximizers of the empirical risk objective and thereby show that no other stationary points exist outside of these neighborhoods of the four hyperbolic curves and the set of local maximizers. We then implement a gradient descent scheme informed by the geometry of the objective landscape. To converge to a global minimizer, this scheme exploits the fact that exactly one of the hyperbolic curves corresponds to the global minimizers, so points near this curve have a lower objective value than points near the other, spurious, hyperbolic curves. We show that this gradient descent scheme can effectively remove distortions synthetically introduced into the MNIST dataset.
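As a rough, self-contained illustration of the formulation (a minimal sketch, not the authors' implementation), the snippet below builds two small random expansive ReLU networks, forms the empirical risk ||G^(1)(h) ∘ G^(2)(m) − y||^2, checks the scale ambiguity that produces the hyperbolic curve of global minimizers, and runs plain gradient descent with random restarts as a crude stand-in for the paper's landscape-informed scheme. All network architectures, step sizes, and restart counts here are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Illustrative sizes: latent dims n, p and signal length ell with
# ell >= n^2 + p^2, loosely mimicking the paper's regime (an assumption).
n, p, ell = 10, 10, 400

def make_expansive_relu_net(d_in, d_out, widths=(50, 200)):
    """Random expansive ReLU network with a linear last layer (weights fixed)."""
    dims = (d_in, *widths, d_out)
    Ws = [torch.randn(dims[i + 1], dims[i]) / dims[i] ** 0.5
          for i in range(len(dims) - 1)]
    def G(x):
        for W in Ws[:-1]:
            x = torch.relu(W @ x)
        return Ws[-1] @ x
    return G

G1 = make_expansive_relu_net(n, ell)
G2 = make_expansive_relu_net(p, ell)

# Ground-truth latents and entrywise-product measurements y = G1(h*) * G2(m*).
h_star, m_star = torch.randn(n), torch.randn(p)
y = G1(h_star) * G2(m_star)

# Positive homogeneity of ReLU layers gives a curve of global minimizers
# {(alpha * h*, m* / alpha) : alpha > 0} -- the hyperbolic curve of the truth.
alpha = 2.0
assert torch.allclose(G1(alpha * h_star) * G2(m_star / alpha), y, atol=1e-4)

def empirical_risk(h, m):
    return ((G1(h) * G2(m) - y) ** 2).mean()

# Crude stand-in for the paper's scheme: run plain gradient descent from
# several random starts and keep the lowest-risk run, since only the curve
# through the truth attains (near-)zero objective value.
best = (float("inf"), None, None)
for restart in range(5):
    h = torch.randn(n, requires_grad=True)
    m = torch.randn(p, requires_grad=True)
    opt = torch.optim.SGD([h, m], lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = empirical_risk(h, m)
        loss.backward()
        opt.step()
    if loss.item() < best[0]:
        best = (loss.item(), h.detach(), m.detach())

print(f"best final empirical risk over restarts: {best[0]:.3e}")
```

The restart-and-compare step is only a simplified proxy for the descent scheme described in the abstract, which explicitly uses the fact that the spurious hyperbolic curves have strictly higher objective value than the curve containing the global minimizers.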

