JacNet: Learning Functions with Structured Jacobians

by Jonathan Lorraine, et al.

Neural networks are trained to learn an approximate mapping from an input domain to a target domain. Incorporating prior knowledge about the true mapping is often critical to learning a useful approximation. With current architectures, it is difficult to enforce structure on the derivatives of the input-output mapping. We propose to directly learn the Jacobian of the input-output function with a neural network, which allows easy control of the derivative. We focus on structuring the derivative to guarantee invertibility, and also demonstrate that other useful priors, such as k-Lipschitz continuity, can be enforced. Using this approach, we learn approximations to simple functions that are guaranteed to be invertible and whose inverses are easy to compute. We also show similar results for 1-Lipschitz functions.
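To make the idea concrete, here is a minimal 1-D sketch (a hypothetical toy, not the paper's code) of learning the derivative rather than the function: a small network outputs J(x), a softplus keeps J(x) strictly positive, and the function is recovered by integrating, f(x) = f(0) + ∫₀ˣ J(t) dt. Because J > 0 everywhere, f is strictly increasing and therefore invertible, and the inverse can be found by bisection. All names and layer sizes below are illustrative assumptions.

```python
import numpy as np

# Toy untrained network producing the derivative J(x); in practice the
# weights would be fit so that the integrated f matches training targets.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def jacobian(x):
    """Network output passed through softplus, so J(x) > 0 everywhere."""
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    return np.log1p(np.exp(W2 @ h + b2))[0]  # softplus

def f(x, n=100):
    """f(x) = f(0) + integral of J from 0 to x (trapezoid rule, f(0) = 0)."""
    ts = np.linspace(0.0, x, n)
    js = np.array([jacobian(t) for t in ts])
    return float(np.sum(0.5 * (js[1:] + js[:-1]) * np.diff(ts)))

def f_inverse(y, lo=-10.0, hi=10.0, iters=60):
    """J > 0 makes f strictly increasing, so bisection recovers f^{-1}(y)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A k-Lipschitz prior fits the same template: bounding the network's output, e.g. with k·tanh instead of softplus, bounds |J(x)| ≤ k and hence makes the integrated f k-Lipschitz by construction.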




