Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization

08/14/2018
by Gang Wang, et al.

Neural networks with ReLU activations have achieved great empirical success in various domains. However, existing results for learning ReLU networks either impose assumptions on the underlying data distribution (e.g., Gaussian), or require the network size and/or the training sample size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise, this contribution presents a novel stochastic gradient descent (SGD) algorithm that can provably train any single-hidden-layer ReLU network to global optimality, despite the presence of infinitely many bad local minima and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, the training/network size, or the initialization. Convergence of the resulting iterative algorithm to a global minimum is analyzed by establishing both upper and lower bounds on the number of effective (non-zero) updates performed. Furthermore, generalization guarantees are developed for ReLU networks trained with the novel SGD. These guarantees highlight a fundamental difference (at least in the worst case) between learning a ReLU network and learning a leaky ReLU network in terms of sample complexity. Numerical tests using synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.
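To make the flavor of the method concrete, below is a minimal NumPy sketch of a noise-assisted SGD loop consistent with the abstract: a one-hidden-layer ReLU network f(x) = v^T relu(Wx) trained with the hinge loss max(0, 1 - y f(x)) on labels y in {-1, +1}, where updates occur only on margin-violating samples and a small random perturbation is injected when the subgradient vanishes (all ReLUs inactive on that sample). The width k, step size lr, noise scale noise_std, and the choice to fix the second-layer weights v are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def train_sgd_relu(X, y, k=8, lr=0.1, noise_std=0.01, epochs=50, seed=0):
    """Sketch of noise-assisted SGD for f(x) = v^T relu(W x) with hinge loss
    max(0, 1 - y * f(x)) on labels y in {-1, +1}. Hyper-parameters are
    hypothetical defaults, not values from the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(k, d))
    v = rng.choice([-1.0, 1.0], size=k)  # second layer kept fixed for simplicity
    for _ in range(epochs):
        for i in rng.permutation(n):
            x, t = X[i], y[i]
            margin = t * (v @ relu(W @ x))
            if margin >= 1.0:            # margin satisfied: no update
                continue
            # subgradient of the hinge loss w.r.t. W (v held fixed)
            grad_W = -t * np.outer(v * (W @ x > 0), x)
            if np.allclose(grad_W, 0.0):
                # all ReLUs dead on a violating sample: random perturbation
                W += noise_std * rng.normal(size=W.shape)
            else:
                W -= lr * grad_W
    return W, v

if __name__ == "__main__":
    # toy linearly separable data
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = np.sign(X @ rng.normal(size=5))
    W, v = train_sgd_relu(X, y)
    print("train accuracy:", np.mean(np.sign(relu(X @ W.T) @ v) == y))
```

The perturbation branch is the key departure from plain SGD: on a misclassified sample where every hidden unit is inactive, the hinge subgradient is exactly zero, so random noise is what lets the iterate escape such flat, bad regions.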

Related research:

06/12/2018  Convergence of SGD in Learning ReLU Models with Separable Data
We consider the binary classification problem in which the objective fun...

01/04/2021  Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise
We consider a one-hidden-layer leaky ReLU network of arbitrary width tra...

06/30/2023  The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks
We study the type of solutions to which stochastic gradient descent conv...

05/08/2020  A Study of Neural Training with Non-Gradient and Noise Assisted Gradient Methods
In this work we demonstrate provable guarantees on the training of depth...

09/28/2018  Efficiently testing local optimality and escaping saddles for ReLU networks
We provide a theoretical algorithm for checking local optimality and esc...

06/20/2018  Learning ReLU Networks via Alternating Minimization
We propose and analyze a new family of algorithms for training neural ne...

03/03/2023  Learning High-Dimensional Single-Neuron ReLU Networks with Finite Samples
This paper considers the problem of learning a single ReLU neuron with s...
