Efficiently testing local optimality and escaping saddles for ReLU networks

09/28/2018
by   Chulhee Yun, et al.

We provide a theoretical algorithm for checking local optimality and escaping saddles at nondifferentiable points of empirical risks of two-layer ReLU networks. Our algorithm receives any parameter value and returns one of three answers: local minimum, second-order stationary point, or a strict descent direction. The presence of M data points lying on the nondifferentiable part of the ReLU divides the parameter space into at most 2^M regions, which makes the analysis difficult. By exploiting polyhedral geometry, we reduce the total computation to one convex quadratic program (QP) per hidden node, O(M) (in)equality tests, and one (or a few) nonconvex QPs. For the last QP, we show that our specific problem can be solved efficiently despite its nonconvexity. In the benign case, we solve one equality-constrained QP, and we prove that projected gradient descent solves it exponentially fast. In the bad case, we have to solve a few more inequality-constrained QPs, but we prove that the time complexity is exponential only in the number of inequality constraints. Our experiments show that either the benign case or the bad case with very few inequality constraints occurs, implying that our algorithm is efficient in most cases.
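To make the benign-case subproblem concrete, below is a minimal sketch (not the authors' implementation) of projected gradient descent on an equality-constrained QP of the form minimize 0.5 x^T Q x + b^T x subject to A x = 0. The names Q, b, A, the step size, and the iteration count are illustrative placeholders; in the setting the paper analyzes, Q may be indefinite, and the paper's result about exponentially fast convergence is not something this toy attempts to reproduce.

```python
# Minimal sketch (assumption: not the paper's code) of the benign-case subproblem:
# projected gradient descent on  minimize 0.5 x^T Q x + b^T x  subject to  A x = 0.
import numpy as np

def projected_gradient_qp(Q, b, A, x0, step=1e-3, iters=10_000):
    """Run projected gradient descent on the affine subspace {x : A x = 0}."""
    n = Q.shape[0]
    P = np.eye(n) - np.linalg.pinv(A) @ A   # orthogonal projector onto null(A)
    x = P @ x0                               # project the start point to feasibility
    for _ in range(iters):
        g = Q @ x + b                        # gradient of the quadratic objective
        x = P @ (x - step * g)               # gradient step followed by projection
    return x

# Toy usage: a 3-variable QP with the single constraint x[1] = 0.
Q = np.diag([2.0, -1.0, 1.0])                # indefinite overall, convex on null(A)
b = np.array([1.0, 0.0, -1.0])
A = np.array([[0.0, 1.0, 0.0]])
x_hat = projected_gradient_qp(Q, b, A, np.random.default_rng(0).standard_normal(3))
print(x_hat)                                 # approximately [-0.5, 0.0, 1.0]
```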


Related research

02/12/2020  Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent
We prove that two-layer (Leaky)ReLU networks initialized by e.g. the wid...

08/14/2018  Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
Neural networks with ReLU activations have achieved great empirical succ...

05/27/2022  HOUDINI: Escaping from Moderately Constrained Saddles
We give the first polynomial time algorithms for escaping from high-dime...

06/15/2020  Understanding Global Loss Landscape of One-hidden-layer ReLU Networks, Part 2: Experiments and Analysis
The existence of local minima for one-hidden-layer ReLU networks has bee...

06/24/2023  Towards Understanding Gradient Approximation in Equality Constrained Deep Declarative Networks
We explore conditions for when the gradient of a deep declarative node c...

06/06/2021  Complexity Analysis of Stein Variational Gradient Descent Under Talagrand's Inequality T1
We study the complexity of Stein Variational Gradient Descent (SVGD), wh...

05/08/2020  Project and Forget: Solving Large-Scale Metric Constrained Problems
Given a set of dissimilarity measurements amongst data points, determini...
