Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval

06/12/2020
by   Stefano Sarao Mannelli, et al.

Despite the widespread use of gradient-based algorithms for optimizing high-dimensional non-convex functions, understanding their ability to find good minima instead of getting trapped in spurious ones remains largely an open problem. Here we focus on gradient-flow dynamics for phase retrieval from random measurements. When the ratio of the number of measurements to the input dimension is small, the dynamics remain trapped in spurious minima with large basins of attraction. We find analytically that above a critical ratio those critical points become unstable, developing a negative direction toward the signal. Numerical experiments show that in this regime the gradient-flow algorithm is not trapped: it drifts away from the spurious critical points along the unstable direction and succeeds in finding the global minimum. Using tools from statistical physics we characterize this phenomenon, which is related to a BBP-type transition in the Hessian of the spurious minima.


Related research

- Who is Afraid of Big Bad Minima? Analysis of Gradient-Flow in a Spiked Matrix-Tensor Model (07/18/2019)
- Numerics and analysis of Cahn–Hilliard critical points (04/08/2021)
- Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses (03/23/2020)
- Global minimizers, strict and non-strict saddle points, and implicit regularization for deep linear neural networks (07/28/2021)
- The Dynamics of Sharpness-Aware Minimization: Bouncing Across Ravines and Drifting Towards Wide Minima (10/04/2022)
- Symmetry Critical Points for Symmetric Tensor Decomposition Problems (06/13/2023)
- Learn2Hop: Learned Optimization on Rough Landscapes (07/20/2021)
