Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality on Hölder Class

02/28/2021
by Yuling Jiao, et al.

In this paper, we construct neural networks with ReLU, sine, and 2^x as activation functions. For a general continuous function f defined on [0,1]^d with continuity modulus ω_f(·), we construct ReLU-sine-2^x networks that achieve an approximation rate 𝒪(ω_f(√d)·2^(-M) + ω_f(√d/N)), where M, N ∈ ℕ^+ denote hyperparameters related to the widths of the networks. As a consequence, we can construct a ReLU-sine-2^x network of depth 5 and width max{⌈2d^(3/2)(3μ/ϵ)^(1/α)⌉, 2⌈log_2(3μd^(α/2)/(2ϵ))⌉+2} that approximates any f ∈ ℋ_μ^α([0,1]^d) within a given tolerance ϵ > 0 measured in the L^p norm, p ∈ [1,∞), where ℋ_μ^α([0,1]^d) denotes the class of Hölder continuous functions on [0,1]^d with order α ∈ (0,1] and constant μ > 0. Therefore, ReLU-sine-2^x networks overcome the curse of dimensionality on ℋ_μ^α([0,1]^d). In addition to their super expressive power, the functions implemented by ReLU-sine-2^x networks are (generalized) differentiable, which makes them trainable with SGD.
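To make the architecture concrete, below is a minimal PyTorch sketch of a depth-5 network that mixes the three activations and whose hidden width is computed from the formula in the abstract. This is an illustration under stated assumptions, not the paper's proof construction: the names ReLUSineExp, holder_width, and build_network are hypothetical, and splitting each hidden layer's features into three groups (one per activation) is our own simplification of how the three activations might coexist in one layer.

```python
import math

import torch
import torch.nn as nn


class ReLUSineExp(nn.Module):
    """Apply ReLU, sine, and 2^x to three equal groups of features.

    Assumption: the paper's networks use all three activations; splitting
    the features into thirds is one simple way to realize that.
    """

    def forward(self, x):
        a, b, c = torch.chunk(x, 3, dim=-1)
        return torch.cat([torch.relu(a), torch.sin(b), torch.exp2(c)], dim=-1)


def holder_width(d, alpha, mu, eps):
    """Width from the abstract:
    max{⌈2 d^(3/2) (3μ/ϵ)^(1/α)⌉, 2⌈log_2(3μ d^(α/2) / (2ϵ))⌉ + 2}."""
    w1 = math.ceil(2 * d ** 1.5 * (3 * mu / eps) ** (1 / alpha))
    w2 = 2 * math.ceil(math.log2(3 * mu * d ** (alpha / 2) / (2 * eps))) + 2
    return max(w1, w2)


def build_network(d, width, depth=5):
    """Depth-5 fully connected net with the mixed activation above."""
    width = 3 * math.ceil(width / 3)  # round up so features split into 3 groups
    layers = [nn.Linear(d, width), ReLUSineExp()]
    for _ in range(depth - 2):
        layers += [nn.Linear(width, width), ReLUSineExp()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)


if __name__ == "__main__":
    # Worked example: d=2, α=1, μ=1, ϵ=0.1 gives
    # w1 = ⌈2·2^1.5·30⌉ = 170 and w2 = 2·⌈log_2(21.2...)⌉ + 2 = 12,
    # so the prescribed width is 170 (rounded to 171 for the 3-way split).
    d, alpha, mu, eps = 2, 1.0, 1.0, 0.1
    w = holder_width(d, alpha, mu, eps)
    net = build_network(d, w)
    print(w, net(torch.rand(4, d)).shape)  # 170  torch.Size([4, 1])
```

Because every operation here (ReLU away from 0, sine, 2^x, affine maps) is differentiable or generalized differentiable, the whole network is compatible with standard autograd-based SGD training, which is the practical point the abstract makes.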


Related Research

- 06/22/2020 · Deep Network Approximation with Discrepancy Being Reciprocal of Width to Power of Depth. A new network with super approximation power is introduced. This network...
- 02/28/2021 · Optimal Approximation Rate of ReLU Networks in terms of Width and Depth. This paper concentrates on the approximation power of deep feed-forward ...
- 10/25/2020 · Neural Network Approximation: Three Hidden Layers Are Enough. A three-hidden-layer neural network with super approximation power is in...
- 04/24/2022 · Piecewise-Linear Activations or Analytic Activation Functions: Which Produce More Expressive Neural Networks? Many currently available universal approximation theorems affirm that de...
- 07/21/2021 · Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations. We present polynomial time and sample efficient algorithms for learning ...
- 04/09/2019 · Approximation in L^p(μ) with deep ReLU neural networks. We discuss the expressive power of neural networks which use the non-smo...
- 06/22/2019 · The phase diagram of approximation rates for deep neural networks. We explore the phase diagram of approximation rates for deep neural netw...
