How Many Neurons Does it Take to Approximate the Maximum?

07/18/2023 ∙ by Itay Safran, et al.
We study the size of a neural network needed to approximate the maximum function over d inputs, in the most basic setting of approximating with respect to the L_2 norm, for continuous distributions, for a network that uses ReLU activations. We provide new lower and upper bounds on the width required for approximation across various depths. Our results establish new depth separations between depth 2 and depth 3 networks, and between depth 3 and depth 5 networks, and also provide a construction of depth 𝒪(log(log(d))) and width 𝒪(d) that approximates the maximum function, significantly improving on the depth requirements of the best previously known bounds for networks of linearly bounded width. Our depth separation results are facilitated by a new lower bound for depth 2 networks approximating the maximum function over the uniform distribution, assuming an exponential upper bound on the size of the weights. Furthermore, we use this depth 2 lower bound to provide tight bounds on the number of neurons needed to approximate the maximum with a depth 3 network. Our lower bounds are of potentially broad interest, as they apply to the widely studied and used max function, in contrast to many previous results that base their bounds on specially constructed or pathological functions and distributions.
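As a quick illustration of the kind of construction the abstract refers to, below is a minimal numpy sketch of the classical pairwise-max ReLU network, which achieves depth 𝒪(log(d)) and width 𝒪(d) using the exact identity max(a, b) = ReLU(a - b) + ReLU(b) - ReLU(-b). This is the standard baseline, not the paper's 𝒪(log(log(d)))-depth construction, and the helper names (relu_max_pair, relu_max) are illustrative rather than taken from the paper.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def relu_max_pair(a, b):
        # Exact identity: max(a, b) = ReLU(a - b) + ReLU(b) - ReLU(-b),
        # realizable as a single hidden layer of 3 ReLU neurons.
        return relu(a - b) + relu(b) - relu(-b)

    def relu_max(x):
        # Binary tree of pairwise maxima: depth O(log d), width O(d).
        vals = list(x)
        while len(vals) > 1:
            nxt = [relu_max_pair(vals[i], vals[i + 1])
                   for i in range(0, len(vals) - 1, 2)]
            if len(vals) % 2 == 1:
                # In a genuine ReLU network this pass-through is realized
                # with two neurons via x = ReLU(x) - ReLU(-x).
                nxt.append(vals[-1])
            vals = nxt
        return vals[0]

    x = np.random.randn(8)
    assert np.isclose(relu_max(x), np.max(x))

The assertion checks that the tree of pairwise maxima reproduces np.max exactly; since each pairwise max costs a constant number of neurons, the whole network uses 𝒪(d) neurons per layer across 𝒪(log(d)) layers, which is the depth requirement the paper's 𝒪(log(log(d))) construction improves upon.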

