Backprop as Functor: A compositional perspective on supervised learning

11/28/2017
by Brendan Fong, et al.

A supervised learning algorithm searches over a set of functions A → B, parametrised by a space P, to find the best approximation to some ideal function f : A → B. It does this by taking examples (a, f(a)) ∈ A × B and updating the parameter according to some rule. We define a category where these update rules may be composed, and show that gradient descent, with respect to a fixed step size and an error function satisfying a certain property, defines a monoidal functor from a category of parametrised functions to this category of update rules. This provides a structural perspective on backpropagation, as well as a broad generalisation of neural networks.
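The abstract only gestures at what a composable update rule looks like. As a rough illustration (not the paper's precise definitions), the Python sketch below models a learner as a parameter together with an implementation map, an update rule, and a "request" map that passes corrected inputs back upstream, and shows how two such learners compose sequentially. The names Learner, compose, and gd_learner, the squared-error loss, and the particular update formulas are assumptions made for illustration.

    from dataclasses import dataclass
    from typing import Any, Callable

    # Illustrative "learner": a parameter value together with an implementation
    # I(p, a) -> b, an update rule U(p, a, b) -> p', and a request map
    # r(p, a, b) -> a' that sends a corrected input back to an upstream learner.
    @dataclass
    class Learner:
        param: Any
        implement: Callable   # (p, a) -> b
        update: Callable      # (p, a, b) -> new parameter
        request: Callable     # (p, a, b) -> corrected input a'

    def compose(g: Learner, f: Learner) -> Learner:
        # Sequential composite g . f of learners f : A -> B and g : B -> C.
        def implement(pq, a):
            p, q = pq
            return g.implement(q, f.implement(p, a))
        def update(pq, a, c):
            p, q = pq
            b = f.implement(p, a)
            # g trains on (b, c); f trains on (a, the b that g requested).
            return (f.update(p, a, g.request(q, b, c)), g.update(q, b, c))
        def request(pq, a, c):
            p, q = pq
            b = f.implement(p, a)
            return f.request(p, a, g.request(q, b, c))
        return Learner((f.param, g.param), implement, update, request)

    def gd_learner(w0, eps=0.1):
        # One-parameter linear model b = w * a with squared error and fixed
        # step size eps; these formulas are a hand-rolled toy, not the paper's.
        implement = lambda w, a: w * a
        update = lambda w, a, b: w - eps * 2 * a * (w * a - b)   # step against dE/dw
        request = lambda w, a, b: a - eps * 2 * w * (w * a - b)  # step against dE/da
        return Learner(w0, implement, update, request)

    # Train the composite on a single example (a, target) = (1.0, 2.0).
    model = compose(gd_learner(0.5), gd_learner(-0.3))
    new_params = model.update(model.param, 1.0, 2.0)

In the paper's setting, gradient descent and backpropagation supply an update/request pair of this shape for every parametrised differentiable map, which is what makes the assignment functorial; the toy formulas above only mimic that shape for a scalar model.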
