Asymptotic Properties of Neural Network Sieve Estimators

06/03/2019
by Xiaoxi Shen, et al.

Neural networks are among the most widely used methods in machine learning and artificial intelligence today. By the universal approximation theorem (Hornik et al., 1989), a neural network with one hidden layer can approximate any continuous function on a compact set, provided the number of hidden units is sufficiently large. Statistically, a neural network falls within the nonlinear regression framework; however, when treated parametrically, the unidentifiability of its parameters makes it difficult to derive asymptotic properties. Instead, we consider the estimation problem in a nonparametric regression framework and use results from sieve estimation to establish the consistency, rates of convergence, and asymptotic normality of neural network estimators. We also illustrate the validity of the theory via simulations.
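The sieve idea in the abstract lets the number of hidden units grow slowly with the sample size so that the estimator is consistent for the unknown regression function. The following is a minimal numerical sketch of that idea, not the paper's estimator: hidden-layer weights are drawn at random and only the output layer is fit by least squares (an extreme-learning-machine style simplification), with the number of hidden units r_n growing like n^(1/3). The true function f, the rate n^(1/3), and all parameter scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_sieve(x, y, r_n):
    """Fit a one-hidden-layer sigmoid network with r_n hidden units.

    Hidden weights/biases are random and fixed; only the output layer is
    estimated by least squares (a simplification of full training).
    """
    W = rng.normal(scale=4.0, size=r_n)       # hidden-unit slopes (illustrative)
    b = rng.uniform(-3.0, 3.0, size=r_n)      # hidden-unit biases (illustrative)

    def hidden(t):
        return 1.0 / (1.0 + np.exp(-(np.outer(t, W) + b)))  # sigmoid features

    beta, *_ = np.linalg.lstsq(hidden(x), y, rcond=None)    # output weights
    return lambda t: hidden(t) @ beta

f = np.cos                                    # true regression function (assumed)
grid = np.linspace(0.0, 1.0, 200)
mses = []
for n in (100, 1000, 10000):
    r_n = int(np.ceil(n ** (1.0 / 3.0)))      # sieve size grows slowly with n
    x = rng.uniform(0.0, 1.0, size=n)
    y = f(x) + rng.normal(scale=0.1, size=n)  # noisy observations
    fhat = fit_sieve(x, y, r_n)
    mses.append(np.mean((fhat(grid) - f(grid)) ** 2))
    print(f"n={n:6d}  r_n={r_n:3d}  MSE={mses[-1]:.5f}")
```

As n grows, the approximation error shrinks because the sieve (the class of networks with r_n hidden units) expands, while the estimation error stays controlled because r_n grows slowly relative to n; balancing the two is what yields the convergence rates studied in the paper.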


Related research

02/16/2020  A closer look at the approximation capabilities of neural networks
The universal approximation theorem, in one of its most general versions...

01/08/2023  Density estimation and regression analysis on S^d in the presence of measurement error
This paper studies density estimation and regression analysis with conta...

02/10/2019  An Algorithm for Approximating Continuous Functions on Compact Subsets with a Neural Network with one Hidden Layer
George Cybenko's landmark 1989 paper showed that there exists a feedforw...

02/11/2016  A Universal Approximation Theorem for Mixture of Experts Models
The mixture of experts (MoE) model is a popular neural network architect...

07/30/2020  Random Vector Functional Link Networks for Function Approximation on Manifolds
The learning speed of feed-forward neural networks is notoriously slow a...

05/17/2022  Bagged Polynomial Regression and Neural Networks
Series and polynomial regression are able to approximate the same functi...

04/15/2022  Universal approximation property of invertible neural networks
Invertible neural networks (INNs) are neural network architectures with ...
