Optimal Input Gain: All You Need to Supercharge a Feed-Forward Neural Network

03/30/2023
by Chinmay Rane, et al.

Linear transformation of the inputs alters the training performance of feed-forward networks that are otherwise equivalent. However, most linear transforms are viewed as a pre-processing operation separate from the actual training. Starting from equivalent networks, it is shown that pre-processing the inputs with a linear transformation is equivalent to multiplying the negative gradient matrix by an autocorrelation matrix in each training iteration. A second-order method is proposed to find the autocorrelation matrix that maximizes learning in a given iteration. When the autocorrelation matrix is diagonal, the method optimizes input gains. This optimal input gain (OIG) approach is used to improve two first-order two-stage training algorithms, namely back-propagation (BP) and hidden weight optimization (HWO), which alternately update the input weights and solve linear equations for the output weights. Results show that the proposed OIG approach greatly enhances the performance of the first-order algorithms, often allowing them to rival the popular Levenberg-Marquardt approach with far less computation. It is also shown that HWO is equivalent to BP with a whitening transformation applied to the inputs; HWO effectively combines the whitening transformation with learning. Thus, OIG-improved HWO could be a significant building block for more complex deep learning architectures.
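The gradient-transform equivalence stated in the abstract can be checked numerically. Below is a minimal NumPy sketch, not taken from the paper: the single tanh hidden layer, MSE loss, tiny random data, and all variable names are illustrative assumptions. It shows that training on linearly transformed inputs x' = A x with the equivalent input weights W' = W A^{-1} and then mapping the updated weights back to the original coordinates gives the same step as multiplying the negative gradient matrix by R = A^T A. With a diagonal A, R reduces to squared per-input gains, which is the quantity an optimal-input-gain method would choose; here the gains are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# Tiny single-hidden-layer network (illustrative shapes, not from the paper)
N, H, M, P = 4, 6, 2, 10           # inputs, hidden units, outputs, training patterns
X = rng.normal(size=(P, N))        # input patterns (one per row)
T = rng.normal(size=(P, M))        # target outputs
W = rng.normal(size=(H, N))        # input (hidden-layer) weights
Wo = rng.normal(size=(M, H))       # output weights
lr = 0.05

def input_weight_gradient(W, X):
    """Gradient of the MSE loss with respect to the input weight matrix W."""
    net = X @ W.T                              # hidden net function, P x H
    O = np.tanh(net)                           # hidden activations
    Y = O @ Wo.T                               # network outputs, P x M
    delta_o = Y - T                            # output-layer error
    delta_h = (delta_o @ Wo) * (1.0 - O**2)    # backpropagated hidden deltas
    return delta_h.T @ X / P                   # H x N gradient matrix

# Ordinary BP step in the original input coordinates
G = input_weight_gradient(W, X)

# The same step after a diagonal input-gain transform A
g = rng.uniform(0.5, 2.0, size=N)  # per-input gains; OIG would optimize these, here they are arbitrary
A = np.diag(g)
X_scaled = X @ A.T                 # pre-processed inputs  x' = A x
W_equiv = W @ np.linalg.inv(A)     # equivalent network:   W' = W A^{-1}

G_scaled = input_weight_gradient(W_equiv, X_scaled)
W_equiv_new = W_equiv - lr * G_scaled
W_mapped_back = W_equiv_new @ A    # express the updated weights in the original coordinates

# Equivalence claimed in the abstract: training on A x amounts to
# multiplying the gradient by the matrix R = A^T A
R = A.T @ A
W_via_R = W - lr * G @ R
print(np.allclose(W_mapped_back, W_via_R))   # True

The same algebra goes through for a full (non-diagonal) transform A, which is the case the abstract connects to whitening and hence to HWO; the diagonal case shown here is the input-gain special case that OIG optimizes.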


