
Reverse AD at Higher Types: Pure, Principled and Denotationally Correct

by Matthijs Vákár, et al.

We show how to define source-code transformations for forward- and reverse-mode Automatic Differentiation on a standard higher-order functional language. The transformations generate purely functional code, and they are principled in the sense that their definition arises from a categorical universal property. We give a semantic proof of correctness of the transformations. In their most elegant formulation, the transformations generate code with linear types. However, we demonstrate how the transformations can be implemented in a standard functional language without sacrificing correctness. To do so, we make use of abstract data types to represent the required linear types, e.g. through the use of a basic module system.
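To make the two modes concrete, here is a minimal sketch in plain Python, not the paper's categorical transformation: forward mode pairs each value with a tangent (dual numbers), while reverse mode pairs each value with a backpropagator, a linear map sending output cotangents back to input cotangents. All names (`Dual`, `forward_derivative`, `reverse_derivative`, etc.) are illustrative assumptions, not from the paper.

```python
# Forward mode: pair each value with its tangent ("dual numbers").
class Dual:
    def __init__(self, primal, tangent):
        self.primal = primal
        self.tangent = tangent

    def __add__(self, other):
        return Dual(self.primal + other.primal, self.tangent + other.tangent)

    def __mul__(self, other):
        # Product rule on the tangent component.
        return Dual(self.primal * other.primal,
                    self.primal * other.tangent + self.tangent * other.primal)

def forward_derivative(f, x):
    # Seed the input with tangent 1.0 to read off df/dx at x.
    return f(Dual(x, 1.0)).tangent

# Reverse mode: pair each value with a backpropagator, a (morally linear)
# function from output cotangents back to input cotangents.
def lift(x):
    return (x, lambda d: d)  # identity backpropagator for the input variable

def add(a, b):
    (xa, ba), (xb, bb) = a, b
    return (xa + xb, lambda d: ba(d) + bb(d))

def mul(a, b):
    (xa, ba), (xb, bb) = a, b
    # d(x*y) distributes the cotangent scaled by the other factor.
    return (xa * xb, lambda d: ba(d * xb) + bb(d * xa))

def reverse_derivative(f, x):
    _, backprop = f(lift(x))
    return backprop(1.0)  # seed the output cotangent with 1.0

# Example: f(x) = x*x + x, so f'(x) = 2x + 1.
f_fwd = lambda x: x * x + x
f_rev = lambda x: add(mul(x, x), x)
print(forward_derivative(f_fwd, 3.0))  # 7.0
print(reverse_derivative(f_rev, 3.0))  # 7.0
```

In the paper, the backpropagators are given genuinely linear types; the closure-based encoding above is only a rough stand-in for that discipline.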


Related papers:

- CHAD: Combinatory Homomorphic Automatic Differentiation
- Efficient Dual-Numbers Reverse AD via Well-Known Program Transformations
- CHAD for Expressive Total Languages
- Correctness of Automatic Differentiation via Diffeologies and Categorical Gluing
- Automatic Differentiation for ML-family languages: correctness via logical relations
- Differentiating a Tensor Language
- Do Mutable Variables Have Reference Types?