An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence

06/19/2012
by Sudarshan Nandy, et al.

The present work deals with an improved back-propagation algorithm based on the Gauss-Newton numerical optimization method for fast convergence. Standard back-propagation updates the weights with the steepest descent method. The Gauss-Newton-based algorithm is applied to the training of a multilayer neural network, tested on various datasets, and compared with the steepest descent back-propagation algorithm. The proposed method proves effective during training, converging quickly on the test datasets. The memory required to compute each step of the algorithm is also analyzed.
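For concreteness, the sketch below shows the kind of Gauss-Newton weight update the abstract refers to: at each step the residuals r(w) and their Jacobian J are used to solve (JᵀJ) Δw = −Jᵀr. This is not the paper's implementation; the toy one-hidden-layer network, the sin(x) regression task, and the finite-difference Jacobian are assumptions made to keep the example self-contained.

```python
import numpy as np

# Minimal Gauss-Newton training sketch for a tiny one-hidden-layer network.
# Illustrative only, not the authors' code: the network size, dataset, and
# finite-difference Jacobian are assumptions made for brevity.

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()

n_hidden = 8
# Flatten all weights and biases into one parameter vector w.
sizes = [(1, n_hidden), (n_hidden,), (n_hidden, 1), (1,)]
w = rng.normal(scale=0.5, size=sum(int(np.prod(s)) for s in sizes))

def unpack(w):
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts  # W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)      # hidden layer
    return (h @ W2 + b2).ravel()  # linear output

def residuals(w):
    return forward(w, X) - y      # r(w): one residual per sample

def jacobian(w, eps=1e-6):
    # Finite-difference Jacobian J[i, j] = d r_i / d w_j.
    r0 = residuals(w)
    J = np.empty((r0.size, w.size))
    for j in range(w.size):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - r0) / eps
    return J

for step in range(50):
    r = residuals(w)
    J = jacobian(w)
    # Gauss-Newton step: solve (J^T J) dw = -J^T r.
    # The tiny ridge term keeps J^T J invertible; with a larger, adaptive
    # damping factor this update becomes Levenberg-Marquardt.
    A = J.T @ J + 1e-8 * np.eye(w.size)
    dw = np.linalg.solve(A, -J.T @ r)
    w += dw
    if step % 10 == 0:
        print(f"step {step:2d}  SSE = {r @ r:.6f}")
```

One design consequence is visible here: forming J (n_samples × n_params) and JᵀJ (n_params × n_params) dominates storage, which is presumably why the abstract highlights a memory analysis of the algorithm's steps for larger networks.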
