Bounds on the Approximation Power of Feedforward Neural Networks

06/29/2018
by Mohammad Mehrabi, et al.

The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and the network's depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference between the outputs of two neural networks with identical weights but different activation functions.


Related research

- On Sharpness of Error Bounds for Multivariate Neural Network Approximation (04/05/2020): Sharpness of error bounds for best non-linear multivariate approximation...
- The Power of Depth for Feedforward Neural Networks (12/12/2015): We show that there is a simple (approximately radial) function on ℝ^d, ex...
- On the explainability of quantum neural networks based on variational quantum circuits (01/12/2023): Ridge functions are used to describe and study the lower bound of the ap...
- The Role of Depth, Width, and Activation Complexity in the Number of Linear Regions of Neural Networks (06/17/2022): Many feedforward neural networks generate continuous and piecewise-linea...
- A Capacity Scaling Law for Artificial Neural Networks (08/20/2017): By assuming an ideal neural network with gating functions handling the w...
- Copula Representations and Error Surface Projections for the Exclusive Or Problem (07/08/2019): The exclusive or (xor) function is one of the simplest examples that ill...
- On Interference of Signals and Generalization in Feedforward Neural Networks (10/06/2003): This paper studies how the generalization ability of neurons can be affe...
