Optimal Sample Complexity of Gradient Descent for Amplitude Flow via Non-Lipschitz Matrix Concentration

10/31/2020
by Paul Hand, et al.

We consider the problem of recovering a real-valued n-dimensional signal from m phaseless, linear measurements and analyze the amplitude-based non-smooth least squares objective. We establish local convergence of gradient descent at optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator arising from the objective's gradient dynamics. While common techniques for establishing uniform concentration of random functions exploit Lipschitz continuity, we prove that, with high probability, this discontinuous matrix-valued operator satisfies a uniform matrix concentration inequality when the measurement vectors are Gaussian, as soon as m = Ω(n). We then show that satisfaction of this inequality is sufficient for gradient descent with proper initialization to converge linearly to the true solution, up to the global sign ambiguity. As a consequence, this guarantees local convergence for Gaussian measurements at optimal sample complexity. The concentration methods in the present work have previously been used to establish recovery guarantees for a variety of inverse problems under generative neural network priors. This paper demonstrates the applicability of those techniques to more traditional inverse problems and serves as a pedagogical introduction to the earlier results.
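For concreteness, the amplitude-based least squares objective studied here takes the form f(x) = (1/2m) Σ_i (|⟨a_i, x⟩| - b_i)², where b_i = |⟨a_i, x*⟩| are the phaseless measurements. The Python sketch below illustrates the local linear convergence described in the abstract; it is not the authors' code, and the dimensions, step size eta, and perturbation-based initialization are illustrative assumptions (the paper's guarantee presumes a proper initialization inside a neighborhood of the true signal).

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem setup: recover x_star in R^n from m phaseless Gaussian measurements,
# with m a constant multiple of n (the optimal sample-complexity regime).
n = 100
m = 10 * n
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)

A = rng.standard_normal((m, n))   # rows are the Gaussian measurement vectors a_i
b = np.abs(A @ x_star)            # phaseless measurements b_i = |<a_i, x_star>|

def grad(x):
    """Generalized gradient of the non-smooth amplitude-based loss
    f(x) = (1/(2m)) * sum_i (|<a_i, x>| - b_i)^2."""
    Ax = A @ x
    return A.T @ (Ax - b * np.sign(Ax)) / m

# The guarantee is local: gradient descent converges linearly once it starts
# inside a neighborhood of x_star (up to sign). Here we simply place the
# iterate in such a neighborhood; the radius 0.1 is an illustrative choice.
x = x_star + 0.1 * rng.standard_normal(n)

eta = 0.5   # step size; an illustrative choice, not the paper's prescription
for _ in range(200):
    x = x - eta * grad(x)

# Recovery is only possible up to the global sign ambiguity.
dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"distance to x_star up to sign: {dist:.2e}")
```

With Gaussian measurements and m a few multiples of n, the printed distance decays geometrically in the iteration count, consistent with the linear convergence rate stated above.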


Related research

Provable Phase Retrieval with Mirror Descent (10/17/2022)
Convergence of Alternating Gradient Descent for Matrix Factorization (05/11/2023)
Projected Gradient Descent Algorithms for Solving Nonlinear Inverse Problems with Generative Priors (09/21/2022)
Solving Systems of Quadratic Equations via Exponential-type Gradient Descent Algorithm (06/04/2018)
Nonconvex Matrix Factorization from Rank-One Measurements (02/17/2018)
Fast state tomography with optimal error bounds (09/28/2018)
Convergence Guarantees of Overparametrized Wide Deep Inverse Prior (03/20/2023)
