Approximation in L^p(μ) with deep ReLU neural networks

04/09/2019
by Felix Voigtlaender et al.

We discuss the expressive power of neural networks with the non-smooth ReLU activation function ϱ(x) = max{0, x} by analyzing the approximation-theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks of fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks of fixed depth, which up to now only consider approximation in L^p(λ) for the Lebesgue measure λ, can be generalized to approximation in L^p(μ) for any finite Borel measure μ. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in L^2(P), with the probability measure P describing the distribution of the data.
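As a toy illustration of the L^2(P) error notion discussed above (not the paper's construction), the sketch below builds a fixed-weight one-hidden-layer ReLU network that linearly interpolates a target f(x) = x^2 on [0, 1] and estimates its L^2(P) approximation error by Monte Carlo sampling from a non-uniform data distribution P. The breakpoints, the target function, and the choice P = Beta(2, 5) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The ReLU activation ϱ(x) = max{0, x}.
    return np.maximum(0.0, x)

def f(x):
    # Illustrative target function to approximate.
    return x ** 2

# Knots of the piecewise-linear approximant (an arbitrary choice).
breakpoints = np.linspace(0.0, 1.0, 9)

def relu_net(x):
    # One hidden layer of hinge functions ϱ(x - t_k): summing them with
    # slope-difference weights reproduces the linear interpolant of f,
    # so this is a genuine (fixed-depth, fixed-weight) ReLU network.
    vals = f(breakpoints)
    slopes = np.diff(vals) / np.diff(breakpoints)
    out = vals[0] + slopes[0] * relu(x - breakpoints[0])
    for k in range(1, len(slopes)):
        out = out + (slopes[k] - slopes[k - 1]) * relu(x - breakpoints[k])
    return out

# Monte Carlo estimate of ||f - relu_net||_{L^2(P)} where P = Beta(2, 5)
# stands in for the (generally non-Lebesgue) data distribution.
samples = rng.beta(2.0, 5.0, size=100_000)
err = np.sqrt(np.mean((f(samples) - relu_net(samples)) ** 2))
```

Because P need not be the Lebesgue measure, the estimated error weights the regions where the data actually concentrates; here the interpolation error is small everywhere, so `err` is small as well.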

Related research:

06/27/2022: Expressive power of binary and ternary neural networks
We show that deep sparse ReLU networks with ternary weights and deep ReL...

04/10/2023: Approximation of Nonlinear Functionals Using Deep ReLU Networks
In recent years, functional neural networks have been proposed and studi...

09/26/2018: Rediscovering Deep Neural Networks in Finite-State Distributions
We propose a new way of thinking about deep neural networks, in which th...

03/30/2020: Kernel based analysis of massive data
Dealing with massive data is a challenging task for machine learning. An...

02/28/2021: Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality on Hölder Class
In this paper, we construct neural networks with ReLU, sine and 2^x as a...

05/10/2021: ReLU Deep Neural Networks from the Hierarchical Basis Perspective
We study ReLU deep neural networks (DNNs) by investigating their connect...

03/18/2019: On-line learning dynamics of ReLU neural networks using statistical physics techniques
We introduce exact macroscopic on-line learning dynamics of two-layer ne...
