Representation Benefits of Deep Feedforward Networks

09/27/2015
by Matus Telgarsky

This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.
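
As a purely illustrative sketch (not taken from the note itself), the snippet below builds the kind of narrow, deep ReLU network the abstract describes: a block of two ReLU nodes computing a "tent" map on [0, 1], composed k times. Each composition doubles the number of linear pieces while the network stays only two nodes wide, which is the sort of oscillation that shallow networks can only reproduce with exponentially many nodes. The specific weights and the helper names `tent` and `deep_sawtooth` are assumptions chosen for illustration, not quoted from the paper.

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit nonlinearity.
    return np.maximum(x, 0.0)

def tent(x):
    # One block of 2 ReLU nodes: equals 2x on [0, 1/2] and 2(1 - x) on [1/2, 1],
    # so it folds the interval [0, 1] back onto itself.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, k):
    # Composing the 2-node block k times yields a sawtooth with 2^(k-1) peaks
    # on [0, 1]: the number of linear pieces grows exponentially with depth
    # even though every layer has only 2 nodes.
    for _ in range(k):
        x = tent(x)
    return x

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 9)
    print(deep_sawtooth(xs, k=3))
```

Labeling inputs by whether such a composed map lies above or below 1/2 gives a classification problem of the flavor described above; the note itself should be consulted for the exact family and the depth/width bookkeeping.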

Related research

02/14/2016 · Benefits of depth in neural networks
For any positive integer k, there exist neural networks with Θ(k^3) laye...

08/29/2022 · Rosenblatt's first theorem and frugality of deep learning
First Rosenblatt's theorem about omnipotence of shallow networks states ...

08/28/2023 · Fast Feedforward Networks
We break the linear link between the layer size and its inference cost b...

12/18/2019 · Tangent Space Separability in Feedforward Neural Networks
Hierarchical neural networks are exponentially more efficient than their...

05/18/2018 · Tropical Geometry of Deep Neural Networks
We establish, for the first time, connections between feedforward neural...

01/22/2018 · Binary output layer of feedforward neural networks for solving multi-class classification problems
Considered in this short note is the design of output layer nodes of fee...

01/18/2021 · A simple geometric proof for the benefit of depth in ReLU networks
We present a simple proof for the benefit of depth in multi-layer feedfo...
