Quantified advantage of discontinuous weight selection in approximations with deep neural networks

05/03/2017
by Dmitry Yarotsky

We consider approximations of 1D Lipschitz functions by deep ReLU networks of a fixed width. We prove that, without the assumption of continuous weight selection, the uniform approximation error is lower than with this assumption by at least a factor logarithmic in the size of the network.
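
As an illustration of the problem setting (not the paper's construction), the sketch below assembles a width-3 deep ReLU network that realizes the piecewise linear interpolant of a 1D Lipschitz function on a uniform grid and measures the uniform (sup-norm) error on [0, 1]. The interpolation weights here depend continuously on the target function and achieve the classical error of order L/n for n grid segments, i.e. roughly 1/W in the number of network weights W; the paper's claim is that allowing discontinuous weight selection improves the achievable uniform error by at least a logarithmic factor. The function names, the width-3 layout, and the constant offset used to keep the running sum nonnegative are illustrative choices, not taken from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def build_interpolating_relu_net(f, n, lipschitz_bound):
    """Width-3 deep ReLU network computing the piecewise linear interpolant
    of f on the uniform grid with n segments over [0, 1]."""
    t = np.linspace(0.0, 1.0, n + 1)              # grid points t_0 .. t_n
    fv = f(t)                                     # target values at the grid
    s = np.diff(fv) / np.diff(t)                  # segment slopes s_0 .. s_{n-1}
    C = np.abs(fv).max() + lipschitz_bound + 1.0  # offset keeping the running sum nonnegative

    # First hidden layer: [x, ReLU(x - t_1), running partial sum + C].
    W_in = np.array([[1.0], [1.0], [s[0]]])
    b_in = np.array([0.0, -t[1], fv[0] + C])

    # Hidden layer k adds the hinge ReLU(x - t_k) and folds the previous
    # hinge into the running sum; the width stays 3 regardless of n.
    layers = []
    for k in range(2, n):
        W = np.array([[1.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, s[k - 1] - s[k - 2], 1.0]])
        b = np.array([0.0, -t[k], 0.0])
        layers.append((W, b))

    # Output layer folds in the last hinge and removes the offset C.
    W_out = np.array([0.0, s[n - 1] - s[n - 2], 1.0])
    b_out = -C
    return W_in, b_in, layers, W_out, b_out

def net_forward(params, x):
    W_in, b_in, layers, W_out, b_out = params
    h = relu(W_in @ np.atleast_1d(x) + b_in)
    for W, b in layers:
        h = relu(W @ h + b)
    return W_out @ h + b_out

if __name__ == "__main__":
    f = lambda x: np.abs(np.sin(3.0 * np.pi * x))  # a 1D Lipschitz function on [0, 1]
    L = 3.0 * np.pi                                # its Lipschitz constant
    xs = np.linspace(0.0, 1.0, 5001)
    for n in (4, 16, 64):
        params = build_interpolating_relu_net(f, n, L)
        err = max(abs(net_forward(params, x) - f(x)) for x in xs)
        # Linear interpolation of a Lipschitz function: sup-error <= L / (2n),
        # so the error decays like 1/W since W grows linearly with n here.
        print(f"n = {n:3d}  hidden layers = {n - 1}  sup-error = {err:.4f}")
```

Running the script shows the sup-norm error shrinking roughly in proportion to 1/n as the grid is refined; this continuous-selection behavior is, roughly speaking, the baseline against which the paper's logarithmic improvement is stated.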


Related research

06/02/2023  Uniform Convergence of Deep Neural Networks with Lipschitz Continuous Activation Functions and Variable Widths
We consider deep neural networks with a Lipschitz continuous activation ...

10/03/2016  Error bounds for approximations with deep ReLU networks
We study expressive power of shallow and deep neural networks with piece...

09/09/2019  Optimal Function Approximation with Relu Neural Networks
We consider in this paper the optimal approximations of convex univariat...

01/29/2023  On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network
This paper studies the expressive power of deep neural networks from the...

09/09/2019  PowerNet: Efficient Representations of Polynomials and Smooth Functions by Deep Neural Networks with Rectified Power Units
Deep neural network with rectified linear units (ReLU) is getting more a...

08/29/2018  Symbolic regression based genetic approximations of the Colebrook equation for flow friction
Widely used in hydraulics, the Colebrook equation for flow friction rela...
