Backpropagation and F-adjoint

03/29/2023
by Ahmed Boughammoura, et al.

This paper presents a concise mathematical framework for investigating both the feed-forward and backward propagation processes of an artificial neural network (ANN) during training, when the model weights are learned. Inspired by the idea of the two-step rule for backpropagation, we define a notion of F-adjoint aimed at a better description of the backpropagation algorithm. In particular, by introducing the notions of F-propagation and F-adjoint through a deep neural network architecture, we prove that the backpropagation associated with a cost/loss function is completely characterized by the F-adjoint of the corresponding F-propagation relative to the partial derivative of the cost function with respect to the inputs.
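As a concrete illustration of the abstract's claim, the sketch below implements a forward pass that records pre-activations and activations (an F-propagation in the paper's terminology) and a reverse pass seeded with the partial derivative of the loss with respect to the network output (the F-adjoint). The layer sizes, the tanh activation, the squared-error loss, and all variable names are assumptions chosen for demonstration, not notation taken from the paper; the final check compares one weight gradient against a central finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = np.tanh
dsigma = lambda y: 1.0 - np.tanh(y) ** 2  # derivative of tanh

# A small 3-layer network acting on a single input vector (shapes are arbitrary).
Ws = [rng.standard_normal((4, 5)), rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
x0 = rng.standard_normal(5)
target = rng.standard_normal(2)

# F-propagation: X^(0) = x0, Y^(k) = W^(k) X^(k-1), X^(k) = sigma(Y^(k)).
Xs, Ys = [x0], []
for W in Ws:
    Ys.append(W @ Xs[-1])
    Xs.append(sigma(Ys[-1]))

# Squared-error loss J = 0.5 * ||X^(L) - target||^2, so dJ/dX^(L) = X^(L) - target.
X_star = Xs[-1] - target

# F-adjoint, run layer by layer in reverse:
#   Y*^(k) = X*^(k) * sigma'(Y^(k)),  X*^(k-1) = W^(k)^T Y*^(k),
# with the weight gradient given by the outer product Y*^(k) (X^(k-1))^T.
grads = []
for W, Y, X_prev in zip(Ws[::-1], Ys[::-1], Xs[-2::-1]):
    Y_star = X_star * dsigma(Y)
    grads.append(np.outer(Y_star, X_prev))
    X_star = W.T @ Y_star
grads = grads[::-1]

# Sanity check: one entry of the first weight gradient vs. a central finite difference.
def loss(Ws_):
    x = x0
    for W in Ws_:
        x = sigma(W @ x)
    return 0.5 * np.sum((x - target) ** 2)

i, j, eps = 1, 2, 1e-6
Wp = [W.copy() for W in Ws]; Wp[0][i, j] += eps
Wm = [W.copy() for W in Ws]; Wm[0][i, j] -= eps
fd = (loss(Wp) - loss(Wm)) / (2 * eps)
print(abs(grads[0][i, j] - fd) < 1e-6)
```

The reverse pass touches only quantities stored during the F-propagation plus the seed dJ/dX^(L), which mirrors the abstract's point that the backpropagation of a given cost function is determined by the F-adjoint together with that partial derivative.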

Related research

A Two-Step Rule for Backpropagation (03/17/2023)
We present a simplified computational rule for the back-propagation form...

Apuntes de Redes Neuronales Artificiales [Notes on Artificial Neural Networks] (06/13/2018)
These handouts are designed for people who are just starting to get involved wit...

Accelerating Training in Artificial Neural Networks with Dynamic Mode Decomposition (06/18/2020)
Training of deep neural networks (DNNs) frequently involves optimizing s...

The Backpropagation algorithm for a math student (01/22/2023)
A Deep Neural Network (DNN) is a composite function of vector-valued fun...

Training a Feed-forward Neural Network with Artificial Bee Colony Based Backpropagation Method (09/12/2012)
Back-propagation algorithm is one of the most widely used and popular te...

CBP: Backpropagation with constraint on weight precision using a pseudo-Lagrange multiplier method (10/06/2021)
Backward propagation of errors (backpropagation) is a method to minimize...

Belief propagation generalizes backpropagation (10/02/2022)
The two most important algorithms in artificial intelligence are backpro...
