Shallow neural network representation of polynomials

08/17/2022
by Aleksandr Beknazaryan, et al.

We show that d-variate polynomials of degree R can be represented on [0,1]^d as shallow neural networks (SNNs) of width 2(R+d)^d. Moreover, using SNN representations of localized Taylor polynomials of univariate C^β-smooth functions, we derive for shallow networks the minimax optimal rate of convergence, up to a logarithmic factor, to an unknown univariate regression function.
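To give a sense of the scale of the stated width bound, the sketch below compares it with the number of monomials in a d-variate polynomial of total degree at most R, which is the binomial coefficient C(R+d, d). This comparison is only illustrative and is not taken from the paper; the function names are my own.

```python
from math import comb

def monomial_count(d, R):
    # Number of monomials in a d-variate polynomial of total degree <= R:
    # the binomial coefficient C(R+d, d).
    return comb(R + d, d)

def stated_width(d, R):
    # Shallow-network width stated in the abstract: 2(R+d)^d.
    return 2 * (R + d) ** d

# The stated width always dominates the monomial count,
# i.e. the network has at least as many units as there are coefficients.
for d, R in [(1, 3), (2, 3), (3, 2)]:
    print(d, R, monomial_count(d, R), stated_width(d, R))
```

For instance, a bivariate cubic polynomial (d = 2, R = 3) has 10 coefficients, while the stated width is 2·5² = 50.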


Related research

06/14/2023
Nonparametric regression using over-parameterized shallow ReLU neural networks
It is shown that over-parameterized neural networks can achieve minimax ...

12/20/2021
Integral representations of shallow neural network with Rectified Power Unit activation function
In this effort, we derive a formula for the integral representation of a...

10/13/2016
Why Deep Neural Networks for Function Approximation?
Recently there has been much interest in understanding why deep neural n...

11/03/2017
Counting Roots of Polynomials Over Prime Power Rings
Suppose p is a prime, t is a positive integer, and f∈Z[x] is a univariat...

12/16/2021
Approximation of functions with one-bit neural networks
This paper examines the approximation capabilities of coarsely quantized...

10/27/2022
Learning Single-Index Models with Shallow Neural Networks
Single-index models are a class of functions given by an unknown univari...

02/02/2020
An explicit univariate and radical parametrization of the septic proper Zolotarev polynomials in power form
The problem of determining an explicit one-parameter power form represen...
