ChebNet: Efficient and Stable Constructions of Deep Neural Networks with Rectified Power Units using Chebyshev Approximations

11/07/2019
by   Shanshan Tang, et al.

In a recent paper [B. Li, S. Tang and H. Yu, arXiv:1903.05858, to appear in Commun. Comput. Phys., 2019], we showed that deep neural networks with rectified power units (RePU) can approximate sufficiently smooth functions better than networks with rectified linear units (ReLU), by stably converting polynomial approximations given as power series into deep neural networks with optimal complexity and no approximation error. In practice, however, power series are not easy to compute. In this paper, we propose a new and more stable way to construct deep RePU neural networks based on Chebyshev polynomial approximations. Exploiting the hierarchical structure of Chebyshev polynomial approximation in the frequency domain, we build efficient and stable deep neural network constructions. In theory, ChebNets and the deep RePU networks based on power series have the same upper error bounds for general function approximation, but numerically ChebNets are much more stable. The constructed ChebNets can be further trained, yielding much better results than those obtained by training deep RePU networks constructed from power series.
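The core idea behind such constructions is that RePU units can represent polynomials exactly: with the rectified quadratic unit (ReQU, RePU with power s = 2), x^2 = ReQU(x) + ReQU(-x), and products follow from xy = ((x+y)^2 - (x-y)^2)/4, so the Chebyshev three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x) can be realized layer by layer without approximation error. The sketch below is not the paper's construction, only a minimal illustration of this principle; all function names (`requ`, `square`, `multiply`, `chebyshev_net`) are invented for the example.

```python
import numpy as np

def requ(x):
    # Rectified quadratic unit: RePU with power s = 2.
    return np.maximum(x, 0.0) ** 2

def square(x):
    # Exact identity: x^2 = ReQU(x) + ReQU(-x), no approximation error.
    return requ(x) + requ(-x)

def multiply(x, y):
    # Polarization identity: xy = ((x+y)^2 - (x-y)^2) / 4,
    # realized using only ReQU-based squares.
    return (square(x + y) - square(x - y)) / 4.0

def chebyshev_net(x, coeffs):
    # Evaluate sum_k c_k T_k(x) via the Chebyshev recurrence
    # T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x),
    # with every product realized by ReQU units.
    t_prev, t_curr = np.ones_like(x), x
    out = coeffs[0] * t_prev
    if len(coeffs) > 1:
        out = out + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_next = 2.0 * multiply(x, t_curr) - t_prev
        out = out + c * t_next
        t_prev, t_curr = t_curr, t_next
    return out

# Chebyshev approximation of a smooth target, e.g. exp(x) on [-1, 1].
cheb = np.polynomial.chebyshev.Chebyshev.interpolate(np.exp, deg=10)
xs = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(chebyshev_net(xs, cheb.coef) - np.exp(xs)))
```

Since every square and product above is exact, the only error left is the Chebyshev truncation error of the degree-10 approximant, which for exp on [-1, 1] is far below single precision.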


