Backward-Forward Algorithm: An Improvement towards Extreme Learning Machine

07/24/2019
by   Dibyasundar Das, et al.

Extreme learning machine (ELM), a randomized learning paradigm for single-hidden-layer feed-forward networks, has gained significant attention for solving problems in diverse domains due to its fast training. In ELM, the output weights are determined by an analytic procedure, while the input weights and biases are randomly generated and remain fixed during training. The learning performance of ELM is highly sensitive to several factors, such as the number of hidden nodes, the initialization of the input weights, and the choice of activation function in the hidden layer. Moreover, the random input weights degrade performance, and the resulting model can suffer from ill-posedness. Hence, we propose a backward-forward algorithm for a single-hidden-layer feed-forward network that improves the generalization capability of the network with fewer hidden nodes. Both the input and output weights are determined analytically, which gives the network its performance advantage. The proposed model improves upon the extreme learning machine with respect to the number of nodes required for generalization.
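To make the baseline concrete, the standard ELM training procedure the abstract describes (random fixed input weights, analytic least-squares output weights) can be sketched as follows. This is a minimal illustration of vanilla ELM, not the proposed backward-forward algorithm; the function names, the `tanh` activation, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Basic ELM: random fixed input weights, analytic output weights.

    Illustrative sketch of the standard algorithm; names and defaults
    are assumptions, not the paper's proposed method.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights and biases are drawn at random and stay fixed.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activation matrix
    # Output weights via the Moore-Penrose pseudoinverse (least squares).
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained single-hidden-layer network."""
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is fitted, training reduces to one pseudoinverse, which is the source of ELM's speed; the sensitivity to the random `W` and `b` is exactly what the proposed method aims to remove by determining both weight sets analytically.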


