Decomposing reverse-mode automatic differentiation

05/20/2021
by Roy Frostig et al.

We decompose reverse-mode automatic differentiation into (forward-mode) linearization followed by transposition. Doing so isolates the essential difference between forward- and reverse-mode AD, and simplifies their joint implementation. In particular, once forward-mode AD rules are defined for every primitive operation in a source language, only linear primitives require an additional transposition rule in order to arrive at a complete reverse-mode AD implementation. This is how reverse-mode AD is written in JAX and Dex.
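
To make the decomposition concrete, here is a minimal sketch using JAX's public `jax.linearize` and `jax.linear_transpose`. The function `f` and the evaluation point `x` are illustrative assumptions, not taken from the paper; the point is only that composing linearization with transposition recovers the gradient that reverse mode computes.

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x  # an arbitrary nonlinear example function

x = 1.5

# Step 1 (forward mode): linearize f at x. This evaluates f(x) and returns
# the linear map t -> J_f(x) t, i.e. the Jacobian-vector product closed over x.
y, f_jvp = jax.linearize(f, x)

# Step 2 (transposition): transposing that linear map yields the
# vector-Jacobian product, i.e. reverse mode. The second argument only
# supplies example shapes/dtypes for the transposed function's outputs.
f_vjp = jax.linear_transpose(f_jvp, x)

# Pulling the cotangent 1.0 back through the transposed map gives the gradient.
(grad_x,) = f_vjp(1.0)
print(grad_x)            # agrees with reverse-mode AD directly:
print(jax.grad(f)(x))
```

This composition mirrors the abstract's claim: JVP rules are needed for every primitive, while transposition rules are needed only for the linear primitives that linearization emits.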


