A simple geometric proof for the benefit of depth in ReLU networks

01/18/2021
by Asaf Amrami, et al.

We present a simple proof for the benefit of depth in multi-layer feedforward networks with rectified linear (ReLU) activations ("depth separation"). Specifically, we present a sequence of classification problems indexed by m such that (a) for any fixed-depth rectified network, there is an m above which classifying problem m correctly requires an exponential (in m) number of parameters; and (b) for every problem in the sequence, we give a concrete neural network with depth linear in m and small constant width (≤ 4) that classifies the problem with zero error. The constructive proof is based on geometric arguments and a space-folding construction. While stronger bounds and results exist, our proof uses substantially simpler tools and techniques, and should be accessible to undergraduate students in computer science and readers with similar backgrounds.
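The abstract does not spell out the space-folding construction, so the sketch below is only an illustration of the general idea behind such depth-separation arguments, not the paper's actual network. It assumes the folding resembles the classic tent-map iteration: each narrow ReLU layer folds the unit interval onto itself, so m composed layers produce 2^m linear pieces that a shallow network of comparable size cannot match. The helper names (tent, folded_classifier) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # Tent map on [0, 1] expressed with two ReLU units (width 2):
    # tent(x) = 2*relu(x) - 4*relu(x - 1/2)
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def folded_classifier(x, m):
    # Compose the tent map m times ("space folding"), then threshold.
    # The composed map has 2**m linear pieces, so a depth-O(m),
    # constant-width ReLU network separates 2**m alternating intervals.
    y = x
    for _ in range(m):
        y = tent(y)
    return (y > 0.5).astype(int)

# Example: with m = 3 the labels alternate over 8 subintervals of [0, 1].
xs = np.linspace(0.0, 1.0, 17)
print(folded_classifier(xs, m=3))
```

In this toy version, depth does the work: each extra layer doubles the number of label alternations the network can realize, while a fixed-depth network would need its width (and hence its parameter count) to grow exponentially in m to fit the same labels.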


