Function Approximation with Randomly Initialized Neural Networks for Approximate Model Reference Adaptive Control

03/28/2023
by Tyler Lekang, et al.

Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function. However, the classical theory does not give a constructive means to generate the network parameters that achieve a desired accuracy. Recent results have demonstrated that for specialized activation functions, such as ReLUs and some classes of analytic functions, high accuracy can be achieved via linear combinations of randomly initialized activations. These recent works utilize specialized integral representations of target functions that depend on the specific activation functions used. This paper defines mollified integral representations, which provide a means to form integral representations of target functions using activations for which no direct integral representation is currently known. The new construction enables approximation guarantees for randomly initialized networks for a variety of widely used activation functions.
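The abstract describes approximation by linear combinations of randomly initialized activations, i.e., a single hidden layer whose weights and biases are drawn at random and frozen, with only the output layer fit to the target. The sketch below is a minimal illustration of that general setting (a random-features least-squares fit), not of the paper's mollified integral construction; the target function, activation, feature count, and sampling ranges are illustrative choices.

```python
import numpy as np

# Random-features sketch: hidden-layer weights and biases are drawn at random
# and frozen; only the linear output layer is fit to the target function.
rng = np.random.default_rng(0)

def target(x):
    # Illustrative continuous target function on [-1, 1].
    return np.sin(3 * x) + 0.5 * x**2

n_features = 200                              # number of random hidden units
w = rng.normal(scale=3.0, size=n_features)    # random input weights (frozen)
b = rng.uniform(-1.0, 1.0, size=n_features)   # random biases (frozen)

def features(x, activation=np.tanh):
    # Hidden-layer outputs for inputs x, shape (len(x), n_features).
    return activation(np.outer(x, w) + b)

# Fit only the output-layer coefficients by least squares on sampled data.
x_train = rng.uniform(-1.0, 1.0, size=500)
Phi = features(x_train)
coef, *_ = np.linalg.lstsq(Phi, target(x_train), rcond=None)

# Evaluate the approximation error on a held-out grid.
x_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(features(x_test) @ coef - target(x_test)))
print(f"max approximation error: {err:.3e}")
```

Swapping `np.tanh` for a ReLU or another activation changes how many random features are needed for a given accuracy, which is the kind of gap the paper's mollified integral representations are meant to address for activations without a known direct integral representation.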


