PowerNet: Efficient Representations of Polynomials and Smooth Functions by Deep Neural Networks with Rectified Power Units

09/09/2019
by   Bo Li, et al.

Deep neural networks with rectified linear units (ReLU) have become increasingly popular. However, the function represented by a ReLU network has discontinuous derivatives, which limits the use of ReLU networks to situations where smoothness is not required. In this paper, we construct deep neural networks with rectified power units (RePU), which give better approximations of smooth functions. We propose optimal algorithms that explicitly build neural networks with sparsely connected RePUs, which we call PowerNets, to represent polynomials with no approximation error. For a general smooth function, we first project it onto a polynomial approximation and then use the proposed algorithms to construct the corresponding PowerNet; the error of the best polynomial approximation therefore provides an upper bound on the best RePU network approximation error. For smooth functions in higher-dimensional Sobolev spaces, we use fast spectral transforms on tensor-product grids and sparse grids to obtain the polynomial approximations. Our constructive algorithms reveal a close connection between spectral methods and deep neural networks: a PowerNet with n layers can exactly represent polynomials up to degree s^n, where s is the power of the RePU. The proposed PowerNets have potential applications in situations where high accuracy is desired or smoothness is required.
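To make the exact-representation claim concrete, here is a minimal NumPy sketch (our own illustration under stated assumptions, not the paper's sparsely connected PowerNet construction). With RePU power s = 2, the identities x^2 = sigma_2(x) + sigma_2(-x) and x*y = ((x+y)^2 - (x-y)^2)/4 show that a single layer of RePU units reproduces squaring and multiplication without error, and composing n such layers reaches degree 2^n = s^n. The helper names repu, square_exact, and multiply_exact are hypothetical.

```python
import numpy as np

def repu(x, s=2):
    """Rectified power unit activation: max(0, x)**s."""
    return np.maximum(0.0, x) ** s

def square_exact(x):
    """x**2 represented exactly by two ReQU (s = 2) units:
    sigma_2(x) + sigma_2(-x) = x**2 for all real x."""
    return repu(x) + repu(-x)

def multiply_exact(x, y):
    """x*y represented exactly by four ReQU units via the
    polarization identity ((x+y)**2 - (x-y)**2) / 4."""
    return 0.25 * (square_exact(x + y) - square_exact(x - y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    assert np.allclose(square_exact(x), x ** 2)      # no approximation error
    assert np.allclose(multiply_exact(x, y), x * y)

    # Composing n squaring layers represents x**(2**n) exactly,
    # illustrating the degree-s^n claim for s = 2.
    z = x
    for _ in range(3):  # n = 3 layers
        z = square_exact(z)
    assert np.allclose(z, x ** 8)
    print("exact polynomial representation checks passed")
```

Because these identities hold exactly, a polynomial approximant produced by a spectral method can be converted into a RePU network without introducing additional error, which is the mechanism behind the upper bound stated above.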

Related research

11/07/2019 · ChebNet: Efficient and Stable Constructions of Deep Neural Networks with Rectified Power Units using Chebyshev Approximations
In a recent paper [B. Li, S. Tang and H. Yu, arXiv:1903.05858, to appear ...

01/19/2021 · Can smooth graphons in several dimensions be represented by smooth graphons on [0,1]?
A graphon that is defined on [0,1]^d and is Hölder(α) continuous for som...

02/01/2020 · A Corrective View of Neural Networks: Representation, Memorization and Learning
We develop a corrective mechanism for neural network approximation: the ...

05/29/2018 · Representational Power of ReLU Networks and Polynomial Kernels: Beyond Worst-Case Analysis
There has been a large amount of interest, both in the past and particul...

06/20/2023 · Polynomial approximation on disjoint segments and amplification of approximation
We construct explicit easily implementable polynomial approximations of ...

02/03/2021 · Numerical Differentiation using local Chebyshev-Approximation
In applied mathematics, especially in optimization, functions are often ...

05/03/2017 · Quantified advantage of discontinuous weight selection in approximations with deep neural networks
We consider approximations of 1D Lipschitz functions by deep ReLU networ...
