A Derivation of Feedforward Neural Network Gradients Using Fréchet Calculus

09/27/2022
by Thomas Hamm, et al.

We present a derivation of the gradients of feedforward neural networks using Fréchet calculus, which is arguably more compact than the derivations usually presented in the literature. We first derive the gradients for ordinary neural networks operating on vectorial data and show how the resulting formulas yield a simple and efficient algorithm for computing a neural network's gradients. Subsequently, we show how our analysis generalizes to a broader class of architectures including, but not limited to, convolutional networks.
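The core observation behind such derivations is the chain rule for Fréchet derivatives: for a composition g ∘ f of differentiable maps between normed spaces, D(g ∘ f)(x) = Dg(f(x)) ∘ Df(x), so the derivative of an L-layer network factors into L linear maps that can be evaluated efficiently from the output layer backwards. As a concrete illustration, the following is a minimal NumPy sketch of that backward pass for a fully connected network with tanh activations and squared loss; the function names, the choice of activation, and the loss are illustrative assumptions here, not the paper's notation or algorithm.

import numpy as np

def forward(params, x):
    # params: list of (W, b) pairs, one per layer; x: input vector.
    # Returns pre-activations z^(l) and activations a^(l) for the backward pass.
    zs, activations = [], [x]
    a = x
    for W, b in params:
        z = W @ a + b
        a = np.tanh(z)
        zs.append(z)
        activations.append(a)
    return zs, activations

def gradients(params, x, y):
    # Squared loss L = 0.5 * ||a^(L) - y||^2 (an illustrative choice).
    # Returns a list of (dL/dW, dL/db) pairs, one per layer.
    zs, activations = forward(params, x)
    grads = []
    # Error signal at the output layer: loss derivative times tanh'(z^(L)).
    delta = (activations[-1] - y) * (1.0 - np.tanh(zs[-1]) ** 2)
    for l in reversed(range(len(params))):
        # dL/dW^(l) = outer(delta, a^(l-1)); dL/db^(l) = delta.
        grads.append((np.outer(delta, activations[l]), delta))
        if l > 0:
            W, _ = params[l]
            # Pull the error back through W^(l) and the previous activation.
            delta = (W.T @ delta) * (1.0 - np.tanh(zs[l - 1]) ** 2)
    return grads[::-1]

# Usage on a tiny two-layer network with random parameters:
rng = np.random.default_rng(0)
params = [(rng.standard_normal((3, 2)), rng.standard_normal(3)),
          (rng.standard_normal((1, 3)), rng.standard_normal(1))]
grads = gradients(params, np.array([0.5, -0.2]), np.array([0.1]))

Accumulating the per-layer products from the output backwards, as in the loop above, is what makes the evaluation linear in the number of layers rather than requiring the full Jacobian of each layer.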


Related research

07/09/2020: Expressivity of Deep Neural Networks
In this review paper, we give a comprehensive overview of the large vari...

04/21/2023: Gradient Derivation for Learnable Parameters in Graph Attention Networks
This work provides a comprehensive derivation of the parameter gradients...

11/17/2017: A unified deep artificial neural network approach to partial differential equations in complex geometries
We use deep feedforward artificial neural networks to approximate soluti...

10/16/2019: Path homologies of deep feedforward networks
We provide a characterization of two types of directed homology for full...

02/08/2021: Derivation of the Backpropagation Algorithm Based on Derivative Amplification Coefficients
The backpropagation algorithm for neural networks is widely felt hard to...

10/16/2018: Feedforward Neural Networks for Caching: Enough or Too Much?
We propose a caching policy that uses a feedforward neural network (FNN)...

09/05/2017: Deep learning: Technical introduction
This note presents in a technical though hopefully pedagogical way the t...
