Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent

06/25/2021
by Spencer Frei, et al.

Although the optimization objectives for learning neural networks are highly non-convex, gradient-based methods have been wildly successful at learning neural networks in practice. This juxtaposition has led to a number of recent studies on provable guarantees for neural networks trained by gradient descent. Unfortunately, the techniques in these works are often highly specific to the problem studied, relying on particular assumptions about the data distribution, optimization parameters, and network architecture, which makes it difficult to generalize the results across settings. In this work, we propose a unified non-convex optimization framework for the analysis of neural network training. We introduce the notions of proxy convexity and proxy Polyak-Łojasiewicz (PL) inequalities, which are satisfied if the original objective function induces a proxy objective function that is implicitly minimized when using gradient methods. We show that stochastic gradient descent (SGD) on objectives satisfying proxy convexity or a proxy PL inequality leads to efficient guarantees for the corresponding proxy objective functions. We further show that many existing guarantees for neural networks trained by gradient descent can be unified through proxy convexity and proxy PL inequalities.
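
As a rough illustration of what "inducing a proxy objective" can mean, the LaTeX sketch below states a proxy-convexity-style inequality in terms of hypothetical proxy functions g and h; it paraphrases the abstract rather than quoting the paper's formal definitions. Ordinary convexity corresponds to the special case g = h = f.

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % Illustrative sketch (assumptions): f is the training objective over
    % parameters w, and g, h are hypothetical proxy functions induced by f.
    A proxy-convexity-style condition asks that, for all parameters $w$ and $v$,
    \[
      \langle \nabla f(w),\, w - v \rangle \;\geq\; g(w) - h(v),
    \]
    so each gradient step on $f$ controls the proxy gap $g(w) - h(v)$; taking
    $g = h = f$ recovers the usual first-order characterization of convexity.
    A proxy PL-style inequality instead lower-bounds the gradient norm by a proxy
    objective, e.g.\ $\|\nabla f(w)\|^{2} \geq \mu\, g(w)$ for some $\mu > 0$, so
    that driving $\|\nabla f(w)\|$ toward zero implicitly drives $g$ toward zero.
    \end{document}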


