Tighter Sparse Approximation Bounds for ReLU Neural Networks

October 7, 2021
by Carles Domingo-Enrich, et al.

A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width n of a ReLU two-layer neural network needed to approximate a function f over the ball ℬ_R(ℝ^d) up to error ϵ, when the Fourier-based quantity C_f = (2π)^{-d/2} ∫_{ℝ^d} ‖ξ‖^2 |f̂(ξ)| dξ is finite, where f̂ denotes the Fourier transform of f. More recently, Ongie et al. (2019) used the Radon transform as a tool for the analysis of infinite-width ReLU two-layer networks. In particular, they introduced the concept of Radon-based ℛ-norms and showed that a function defined on ℝ^d can be represented as an infinite-width two-layer neural network if and only if its ℛ-norm is finite. In this work, we extend the framework of Ongie et al. (2019) and define similar Radon-based semi-norms (ℛ,𝒰-norms) such that a function admits an infinite-width neural network representation on a bounded open set 𝒰 ⊆ ℝ^d whenever its ℛ,𝒰-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993) and Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity.
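
The finiteness of C_f is easy to check numerically in low dimension. Below is a minimal sketch (not from the paper; the Gaussian test function and the unitary Fourier convention f̂(ξ) = (2π)^{-1/2} ∫ f(x) e^{-iξx} dx are illustrative assumptions of our own) that evaluates C_f in d = 1 for f(x) = e^{-x²/2}, whose Fourier transform is again e^{-ξ²/2}, so the closed form is C_f = 1.

```python
# Minimal numerical sketch (not from the paper): evaluate the Fourier-based
# quantity C_f = (2*pi)^(-d/2) * integral over R^d of ||xi||^2 * |fhat(xi)| d(xi)
# in d = 1 for the standard Gaussian f(x) = exp(-x^2 / 2). Under the unitary
# Fourier convention assumed here, fhat(xi) = exp(-xi^2 / 2), so C_f = 1 exactly.
import numpy as np
from scipy.integrate import quad

def fhat_abs(xi):
    # |fhat| for the standard Gaussian under the unitary convention (assumption).
    return np.exp(-xi**2 / 2)

d = 1
integral, _ = quad(lambda xi: xi**2 * fhat_abs(xi), -np.inf, np.inf)
C_f = integral / (2 * np.pi) ** (d / 2)

print(f"numerical C_f = {C_f:.6f}  (closed form: 1.0)")
```

When this integral diverges, the classical width bounds above are vacuous, which is one motivation for studying representations through Radon-based norms on bounded sets instead.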
