Decomposing reverse-mode automatic differentiation

05/20/2021
by Roy Frostig et al.

We decompose reverse-mode automatic differentiation into (forward-mode) linearization followed by transposition. Doing so isolates the essential difference between forward- and reverse-mode AD, and simplifies their joint implementation. In particular, once forward-mode AD rules are defined for every primitive operation in a source language, only linear primitives require an additional transposition rule in order to arrive at a complete reverse-mode AD implementation. This is how reverse-mode AD is written in JAX and Dex.
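The decomposition can be sketched with JAX's public API: jax.linearize produces the forward-mode linearization of a function at a point, and jax.linear_transpose transposes that linear map, which is exactly the reverse-mode (gradient) computation. The sketch below is illustrative only; the function f and the input values are arbitrary placeholders, not taken from the paper.

import jax
import jax.numpy as jnp

# An arbitrary placeholder function and input, used only for illustration.
def f(x):
    return jnp.sum(jnp.sin(x) * x)

x = jnp.arange(3.0)

# Step 1: forward-mode linearization. jax.linearize evaluates f at x and
# returns the linear map v -> J(x) @ v (a Jacobian-vector product).
y, f_jvp = jax.linearize(f, x)

# Step 2: transposition. jax.linear_transpose turns that linear map into
# w -> J(x)^T @ w (a vector-Jacobian product), i.e. the reverse-mode pass.
f_vjp = jax.linear_transpose(f_jvp, x)

# Pulling back the scalar cotangent 1.0 yields the gradient of f at x.
(grad_via_transpose,) = f_vjp(jnp.ones_like(y))

print(jnp.allclose(grad_via_transpose, jax.grad(f)(x)))  # True

Composing the two transformations this way reproduces what jax.grad computes directly, which is the point of the decomposition: reverse mode falls out of forward-mode rules plus transposition rules for linear primitives.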

