A Corrective View of Neural Networks: Representation, Memorization and Learning

02/01/2020
by   Guy Bresler, et al.

We develop a corrective mechanism for neural network approximation: the total available non-linear units are divided into multiple groups; the first group approximates the function under consideration, the second group approximates the error produced by the first group and corrects it, the third group approximates the error produced by the first and second groups together, and so on. This technique yields several new representation and learning results for neural networks. First, we show that two-layer neural networks in the random features (RF) regime can memorize arbitrary labels for arbitrary points under a Euclidean distance separation condition using Õ(n) ReLU or Step activation functions, which is optimal in n up to logarithmic factors. Next, we give a powerful representation result for two-layer neural networks with ReLU and smoothed ReLU units, which can achieve a squared error of at most ϵ with O(C(a,d) ϵ^{-1/(a+1)}) neurons for a ∈ ℕ ∪ {0} when the function is smooth enough (roughly, when it has Θ(ad) bounded derivatives). In certain cases d can be replaced with an effective dimension q ≪ d. Previous results of this type implement Taylor series approximation using deep architectures. We also consider three-layer neural networks and show that the corrective mechanism yields faster representation rates for smooth radial functions. Lastly, we obtain the first O(subpoly(1/ϵ)) upper bound on the number of neurons required for a two-layer network to learn low-degree polynomials up to squared error ϵ via gradient descent. Even though deep networks can express these polynomials with O(polylog(1/ϵ)) neurons, the best known learning bounds for this problem require poly(1/ϵ) neurons.
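To make the corrective mechanism concrete, here is a minimal sketch in NumPy of the residual-fitting idea in the random-features regime: each group of ReLU units has fixed random inner weights, its outer weights are fit by least squares to the residual left by all previous groups, and the residual is then updated. The function names, the ridge regularization, and the toy 1-D target are illustrative assumptions, not the paper's construction.

```python
# Sketch of the corrective mechanism: successive random-feature ReLU groups,
# each fitted (outer weights only) to the residual of the previous groups.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def random_relu_features(x, m):
    """Random ReLU features for inputs x in R^{n x d}, with m hidden units."""
    d = x.shape[1]
    w = rng.normal(size=(d, m))          # random inner weights (kept fixed)
    b = rng.normal(size=m)               # random biases (kept fixed)
    return relu(x @ w + b), (w, b)

def corrective_fit(x, y, n_groups=4, units_per_group=64, ridge=1e-6):
    """Each group approximates the residual left by all earlier groups."""
    residual = y.copy()
    groups = []
    for _ in range(n_groups):
        phi, params = random_relu_features(x, units_per_group)
        # Ridge-regularized least squares for this group's outer weights.
        a = np.linalg.solve(phi.T @ phi + ridge * np.eye(phi.shape[1]),
                            phi.T @ residual)
        residual = residual - phi @ a    # the next group corrects what remains
        groups.append((params, a))
    return groups, residual

# Toy usage: approximate a smooth 1-D function.
x = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.sin(3.0 * x[:, 0])
groups, residual = corrective_fit(x, y)
print("mean squared residual after correction:", np.mean(residual ** 2))
```

In this sketch only the outer weights of each group are trained, mirroring the random features regime discussed in the abstract; the successive groups play the role of the corrective stages.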


