Over-parametrized neural networks as under-determined linear systems

10/29/2020
by Austin R. Benson, et al.

We draw connections between simple neural networks and under-determined linear systems to explore several theoretical questions in the study of neural networks. First, we show why it is unsurprising that such networks can achieve zero training loss: we provide lower bounds on the width of a single-hidden-layer network such that training only the last linear layer suffices to reach zero training loss. These lower bounds grow more slowly with data set size than those in existing work that trains the hidden-layer weights. Second, we show that kernels typically associated with the ReLU activation function have a fundamental flaw: there are simple data sets on which it is impossible for widely studied bias-free models to achieve zero training loss, irrespective of how the parameters are chosen or trained. Lastly, our analysis of gradient descent illustrates how spectral properties of certain matrices shape both early-iteration and long-term training behavior. We propose new activation functions that avoid this pitfall of ReLU: they admit zero-training-loss solutions for any set of distinct data points and experimentally exhibit favorable spectral properties.
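To make the linear-systems view concrete, here is a minimal NumPy sketch of the three ideas in the abstract. The widths, data, seed, and the one-dimensional counterexample are my own illustrative assumptions, not the paper's experiments: training only the last layer reduces to a linear system in the ReLU feature matrix, a bias-free ReLU model provably cannot fit certain simple labels, and the spectrum of the feature Gram matrix governs gradient descent on the last layer.

```python
# Minimal sketch (assumed setup, NumPy only): training just the last layer of a
# single-hidden-layer ReLU network is linear least squares in A = relu(X W^T).
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# --- Part 1: enough width makes zero training loss a linear-algebra fact. ---
n, d, width = 50, 10, 200                # n data points, width >= n features
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W = rng.standard_normal((width, d))      # fixed (untrained) hidden weights
A = relu(X @ W.T)                        # n x width feature matrix
# With width >= n and generic data, A has full row rank, so A c = y is an
# under-determined consistent system; lstsq returns its min-norm solution.
c, *_ = np.linalg.lstsq(A, y, rcond=None)
print("rank(A) =", np.linalg.matrix_rank(A))      # expected: n
print("train loss =", np.linalg.norm(A @ c - y))  # expected: near machine eps

# --- Part 2: the bias-free ReLU flaw, in one dimension. ---
# Without biases, f(x) = sum_j c_j relu(w_j x) is linear on each ray from the
# origin, so x = (1, 2) with labels y = (1, 1) is unfittable at ANY width and
# for ANY hidden weights: each feature column is (relu(w_j), 2 relu(w_j)).
x1 = np.array([[1.0], [2.0]])
y1 = np.array([1.0, 1.0])
w1 = rng.standard_normal((10_000, 1))    # even a huge hidden layer cannot help
A1 = relu(x1 @ w1.T)
c1, *_ = np.linalg.lstsq(A1, y1, rcond=None)
print("rank(A1) =", np.linalg.matrix_rank(A1))      # expected: 1
print("best loss =", np.linalg.norm(A1 @ c1 - y1))  # stays bounded away from 0

# --- Part 3: spectra govern gradient descent on the last layer. ---
# For g(c) = 0.5 * ||A c - y||^2, each step contracts the residual along
# eigenvector i of A A^T by (1 - lr * lambda_i): large eigenvalues are fit in
# early iterations, tiny ones dominate long-term behavior.
lam = np.linalg.eigvalsh(A @ A.T)
lr = 1.0 / lam.max()
c_gd = np.zeros(width)
for _ in range(500):
    c_gd -= lr * A.T @ (A @ c_gd - y)
# ReLU feature Gram matrices tend to be ill-conditioned, so the loss typically
# remains visibly nonzero after 500 steps even though an exact solution exists.
print("loss after 500 GD steps =", np.linalg.norm(A @ c_gd - y))
print("condition number of A A^T =", lam.max() / lam.min())
```

In this sketch the hidden weights stay fixed, matching the abstract's setting where only the last linear layer is trained; the Part 2 rank argument holds for any hidden weights, not just the random ones drawn here.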

Related research

10/22/2020 · A ReLU Dense Layer to Improve the Performance of Neural Networks
We propose ReDense as a simple and low complexity way to improve the per...

04/19/2019 · Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
We consider deep networks, trained via stochastic gradient descent to mi...

01/28/2022 · Improved Overparametrization Bounds for Global Convergence of Stochastic Gradient Descent for Shallow Neural Networks
We study the overparametrization bounds required for the global converge...

07/13/2020 · Probabilistic bounds on data sensitivity in deep rectifier networks
Neuron death is a complex phenomenon with implications for model trainab...

08/10/2021 · Linear approximability of two-layer neural networks: A comprehensive analysis based on spectral decay
In this paper, we present a spectral-based approach to study the linear ...

09/14/2016 · Understanding Convolutional Neural Networks with A Mathematical Model
This work attempts to address two fundamental questions about the struct...

07/14/2017 · On the Complexity of Learning Neural Networks
The stunning empirical successes of neural networks currently lack rigor...
