Using Linear Regression for Iteratively Training Neural Networks

07/11/2023
by Harshad Khadilkar, et al.

We present a simple linear-regression-based approach for learning the weights and biases of a neural network, as an alternative to standard gradient-based backpropagation. The present work is exploratory in nature, and we restrict the description and experiments to (i) simple feedforward neural networks, (ii) scalar (single-output) regression problems, and (iii) invertible activation functions. However, the approach is intended to be extensible to larger, more complex architectures. The key observation is that the total input to every neuron in a neural network is an affine function of the previous layer's activations, with coefficients given by that layer's weights and biases. If we can compute the ideal total input to every neuron by working backwards from the output (inverting the activations along the way), we can formulate learning as a linear least squares problem that alternates between updating the parameters and updating the activation values. We present an explicit algorithm that implements this idea, and we show that (at least for small problems) the approach is more stable and faster than gradient-based methods.
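The abstract does not reproduce the explicit algorithm, but the alternating scheme it describes can be sketched for the simplest case it covers: a one-hidden-layer network with an invertible activation and a scalar output. The code below is a hypothetical illustration of that idea, not the paper's algorithm; the update order, the minimum-norm correction used to pick "ideal" hidden activations, and all variable names are assumptions.

```python
import numpy as np

# Toy setup: scalar regression  y ≈ tanh(X @ W1 + b1) @ w2 + b2,
# trained by alternating linear least squares instead of gradients.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # inputs (n_samples, n_features)
y = np.sin(X @ np.array([1.0, -0.5, 0.3]))    # scalar regression target

H = 8                                         # hidden width (assumed)
W1 = rng.normal(scale=0.5, size=(3, H)); b1 = np.zeros(H)
w2 = rng.normal(scale=0.5, size=H);      b2 = 0.0

def lstsq_affine(A, T):
    """Solve T ≈ A @ W + b for (W, b) by linear least squares."""
    A1 = np.hstack([A, np.ones((len(A), 1))])
    sol, *_ = np.linalg.lstsq(A1, T, rcond=None)
    return sol[:-1], sol[-1]

for _ in range(20):
    # Forward pass with the current parameters.
    A1 = np.tanh(X @ W1 + b1)

    # (1) Update parameters: the output is linear in (w2, b2) given A1.
    w2, b2 = lstsq_affine(A1, y)

    # (2) Update activations: pick "ideal" hidden activations A1* that hit
    #     the target exactly with minimum-norm change, then invert tanh to
    #     get the ideal total inputs to the hidden neurons.
    r = y - (A1 @ w2 + b2)
    A1_star = A1 + np.outer(r, w2) / (w2 @ w2)
    A1_star = np.clip(A1_star, -0.999, 0.999)  # keep arctanh finite
    Z1_star = np.arctanh(A1_star)

    # (3) Update parameters again: the hidden total input is linear in
    #     (W1, b1), so fitting it to Z1_star is another least squares solve.
    W1, b1 = lstsq_affine(X, Z1_star)

# Final refit of the output layer against the updated hidden activations.
A1 = np.tanh(X @ W1 + b1)
w2, b2 = lstsq_affine(A1, y)
pred = A1 @ w2 + b2
print("train MSE:", float(np.mean((pred - y) ** 2)))
```

Every update in the loop is a closed-form `lstsq` solve, which is where the claimed stability advantage over gradient descent would come from: there is no learning rate to tune at any step.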

