Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations

06/20/2022
by   Denis Belomestny, et al.

This paper investigates the approximation properties of deep neural networks with piecewise-polynomial activation functions. We derive the required depth, width, and sparsity of a deep neural network to approximate any Hölder smooth function up to a given approximation error in Hölder norms in such a way that all weights of this neural network are bounded by 1. The latter feature is essential to control generalization errors in many statistical and machine learning applications.
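As a toy illustration of why piecewise-polynomial activations are well suited to approximating smooth functions (this is a minimal sketch, not the paper's actual construction), the activation σ(x) = max(x, 0)² lets a one-hidden-layer network with all weights equal to ±1 — and hence bounded by 1 — represent x² exactly, since σ(x) + σ(−x) = x² for every real x:

```python
import numpy as np

def repu(x, p=2):
    # ReLU^p: a piecewise-polynomial activation; p=2 is the "squared ReLU".
    return np.maximum(x, 0.0) ** p

def square_net(x):
    # One hidden layer, two units, every weight in {-1, 1} (so bounded by 1).
    # Represents x^2 exactly: for x >= 0 the first term is x^2 and the
    # second is 0; for x < 0 the roles swap.
    return repu(x) + repu(-x)

xs = np.linspace(-1.0, 1.0, 101)
max_err = np.max(np.abs(square_net(xs) - xs**2))
print(max_err)  # zero up to floating-point rounding
```

With a ReLU activation, by contrast, x² can only be approximated (not represented exactly), and achieving a given Hölder-norm error forces trade-offs between depth, width, and weight magnitude — the trade-off the abstract's depth/width/sparsity bounds quantify.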


Related research

- Smooth function approximation by deep neural networks with general activation functions (06/17/2019) — There has been a growing interest in expressivity of deep neural network...
- Deep neural network approximation of analytic functions (04/05/2021) — We provide an entropy bound for the spaces of neural networks with piece...
- Partition of unity networks: deep hp-approximation (01/27/2021) — Approximation theorists have established best-in-class optimal approxima...
- Theoretical Analysis of the Advantage of Deepening Neural Networks (09/24/2020) — We propose two new criteria to understand the advantage of deepening neu...
- An Explicit Neural Network Construction for Piecewise Constant Function Approximation (08/22/2018) — We present an explicit construction for feedforward neural network (FNN)...
- Training Neural Networks for Solving 1-D Optimal Piecewise Linear Approximation (10/14/2021) — Recently, the interpretability of deep learning has attracted a lot of a...
- Discontinuous Piecewise Polynomial Neural Networks (05/15/2015) — An artificial neural network is presented based on the idea of connectio...
