Optimal Convergence Rate in Feed Forward Neural Networks using HJB Equation

04/27/2015
by Vipul Arora et al.

This paper presents a control-theoretic approach for both batch and instantaneous weight updates in feed-forward neural networks. The well-known Hamilton-Jacobi-Bellman (HJB) equation is used to derive an optimal weight update law. The main contribution is that closed-form solutions for both the optimal cost and the weight update can be obtained for any feed-forward network via the HJB equation in a simple yet elegant manner. The proposed approach is compared with some of the best-performing existing learning algorithms and, as expected, is found to converge faster in terms of computational time. Benchmark data sets such as 8-bit parity, breast cancer and credit approval, as well as the 2D Gabor function, are used to validate these claims. The paper also discusses issues related to global optimization: the limitations of popular deterministic weight update laws are critiqued, and the possibility of global optimization using the HJB formulation is discussed. It is hoped that the proposed algorithm will attract interest from researchers developing fast learning algorithms and global optimization methods.
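The abstract describes deriving a weight update law for a feed-forward network from an optimal control formulation. The paper's closed-form HJB solution is not reproduced here; the sketch below only illustrates the baseline it competes against, namely a discretized gradient-flow (batch gradient descent) update on a small one-hidden-layer network, trained on 2-bit parity (XOR) as a toy analogue of the 8-bit parity benchmark. All names and hyperparameters (hidden size, `eta`, iteration count) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-bit parity (XOR) data: a small analogue of the 8-bit parity benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, sigmoid output (illustrative architecture).
W1 = rng.normal(scale=1.0, size=(2, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))
    return h, out

def mse(yhat, y):
    return float(np.mean((yhat - y) ** 2))

_, yhat0 = forward(X, W1, W2)
initial_loss = mse(yhat0, y)

# Discretized gradient flow: w(t+1) = w(t) - eta * dE/dw.
# The HJB approach in the paper replaces this heuristic descent direction
# with a closed-form optimal update; that law is not reproduced here.
eta = 1.0
for _ in range(5000):
    h, yhat = forward(X, W1, W2)
    e = yhat - y                        # error signal driven toward zero
    d2 = e * yhat * (1.0 - yhat)        # output-layer delta (sigmoid derivative)
    g2 = h.T @ d2 / len(X)
    d1 = (d2 @ W2.T) * (1.0 - h ** 2)   # hidden-layer delta (tanh derivative)
    g1 = X.T @ d1 / len(X)
    W1 -= eta * g1
    W2 -= eta * g2

_, yhat = forward(X, W1, W2)
final_loss = mse(yhat, y)
```

Against such a fixed-step baseline, the paper's claim is that an HJB-derived update reaches a comparable error in less computational time.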


research
06/23/2012

Analysis of a Nature Inspired Firefly Algorithm based Back-propagation Neural Network Training

Optimization algorithms are normally influenced by meta-heuristic approa...
research
02/03/2010

Using CODEQ to Train Feed-forward Neural Networks

CODEQ is a new, population-based meta-heuristic algorithm that is a hybr...
research
05/12/2016

Direct Method for Training Feed-forward Neural Networks using Batch Extended Kalman Filter for Multi-Step-Ahead Predictions

This paper is dedicated to the long-term, or multi-step-ahead, time seri...
research
02/21/2023

On the Behaviour of Pulsed Qubits and their Application to Feed Forward Networks

In the last two decades, the combination of machine learning and quantum...
research
03/30/2023

Optimal Input Gain: All You Need to Supercharge a Feed-Forward Neural Network

Linear transformation of the inputs alters the training performance of f...
research
09/12/2012

Training a Feed-forward Neural Network with Artificial Bee Colony Based Backpropagation Method

Back-propagation algorithm is one of the most widely used and popular te...
research
04/01/2019

Sound source ranging using a feed-forward neural network with fitting-based early stopping

When a feed-forward neural network (FNN) is trained for source ranging i...
