Growing axons: greedy learning of neural networks with application to function approximation

10/28/2019
by Daria Fokina, et al.

We propose a new method for learning deep neural network models that is based on a greedy learning approach: we add one basis function at a time, where each new basis function is generated by applying a non-linear activation function to a linear combination of the previous basis functions. Such a method (growing a deep neural network one neuron at a time) allows us to compute much more accurate approximants for several model problems in function approximation.
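The abstract describes the procedure only at a high level. Below is a minimal NumPy sketch of one plausible reading of the greedy step: maintain a pool of basis functions and, at each step, add an activation applied to a linear combination of the existing ones, then refit the output layer by least squares. The target function, the tanh activation, the random candidate search against the residual, and the least-squares refit are assumptions made for this illustration, not the authors' algorithm.

    # Illustrative sketch of greedy, one-neuron-at-a-time basis growth.
    # Assumptions: tanh activation, random candidate search, least-squares output fit.
    import numpy as np

    rng = np.random.default_rng(0)

    def grow_approximant(x, y, n_neurons=20, n_candidates=200):
        """Greedily grow a basis: each new function is an activation
        applied to a linear combination of the current basis functions."""
        basis = [np.ones_like(x), x]                         # initial basis: constant and input
        for _ in range(n_neurons):
            Phi = np.column_stack(basis)                     # current basis matrix, shape (N, k)
            coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # best linear fit so far
            residual = y - Phi @ coef
            # Among random linear combinations of the current basis, keep the
            # candidate neuron whose output correlates best with the residual.
            best, best_score = None, -np.inf
            for _ in range(n_candidates):
                w = rng.standard_normal(Phi.shape[1])
                g = np.tanh(Phi @ w)                         # candidate new basis function
                score = abs(g @ residual) / (np.linalg.norm(g) + 1e-12)
                if score > best_score:
                    best, best_score = g, score
            basis.append(best)
        Phi = np.column_stack(basis)
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # final output-layer fit
        return Phi @ coef

    # Example: approximate a smooth 1D function on [-1, 1].
    x = np.linspace(-1.0, 1.0, 400)
    y = np.sin(4.0 * x) * np.exp(-x**2)
    y_hat = grow_approximant(x, y)
    print("max error:", np.max(np.abs(y - y_hat)))

In the paper's setting the new neuron's weights would presumably be obtained by an actual optimization rather than the random search used here; the sketch only shows how the basis grows one function at a time.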

Related research

11/21/2019 · DeepLABNet: End-to-end Learning of Deep Radial Basis Networks with Fully Learnable Basis Functions
From fully connected neural networks to convolutional neural networks, t...

01/11/2022 · Deep Neural Network Approximation For Hölder Functions
In this work, we explore the approximation capability of deep Rectified ...

10/19/2022 · A new activation for neural networks and its approximation
Deep learning with deep neural networks (DNNs) has attracted tremendous ...

10/30/2012 · Hierarchical Learning Algorithm for the Beta Basis Function Neural Network
The paper presents a two-level learning method for the design of the Bet...

09/07/2022 · A Greedy Algorithm for Building Compact Binary Activated Neural Networks
We study binary activated neural networks in the context of regression t...

02/01/2021 · Basis Function Based Data Driven Learning for the Inverse Problem of Electrocardiography
Objective: This paper proposes a neural network approach for predicting...

10/31/2018 · Adaptive Extreme Learning Machine for Recurrent Beta-basis Function Neural Network Training
Beta Basis Function Neural Network (BBFNN) is a special kind of kernel b...
