Differentiable Implicit Layers

10/14/2020
by Andreas Look et al.

In this paper, we introduce an efficient backpropagation scheme for non-constrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them perfectly suitable as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
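The core idea behind backpropagating through an implicit layer is the implicit function theorem: if the layer's output z* satisfies f(z*, x) = 0, then the Jacobian of z* with respect to x can be obtained from a single linear solve, without unrolling the solver. The paper's specific scheme is not reproduced here; the following is a minimal illustrative sketch using a toy fixed-point equation z = tanh(Wz + x), where W, x, and the iteration counts are assumptions for the example, not values from the paper.

```python
import numpy as np

# Toy implicit layer: the output z* solves f(z, x) = tanh(W z + x) - z = 0.
# W and x are illustrative placeholders, not taken from the paper.
rng = np.random.default_rng(0)
n = 3
W = 0.4 * rng.standard_normal((n, n))  # scaled so fixed-point iteration contracts
x = rng.standard_normal(n)

# Forward pass: find z* by fixed-point iteration (any root solver would do).
z = np.zeros(n)
for _ in range(200):
    z = np.tanh(W @ z + x)

# Backward pass via the implicit function theorem. With u = W z* + x and
# s = 1 - tanh(u)^2 (the elementwise tanh derivative):
#   df/dz = diag(s) W - I,   df/dx = diag(s)
# so dz*/dx = -(df/dz)^{-1} df/dx, i.e. one linear solve.
s = 1.0 - np.tanh(W @ z + x) ** 2
dfdz = s[:, None] * W - np.eye(n)
dfdx = np.diag(s)
dz_dx = -np.linalg.solve(dfdz, dfdx)

# Sanity check against finite differences of the full solver.
eps = 1e-6
fd = np.zeros((n, n))
for j in range(n):
    xp = x.copy()
    xp[j] += eps
    zp = np.zeros(n)
    for _ in range(200):
        zp = np.tanh(W @ zp + xp)
    fd[:, j] = (zp - z) / eps

print(np.max(np.abs(dz_dx - fd)))  # should be small: IFT matches finite differences
```

The same pattern covers the paper's implicit-Euler neural ODE application: each implicit Euler step z_{k+1} = z_k + h g(z_{k+1}) defines a root-finding problem in z_{k+1}, so its gradients can be obtained by the same linear-solve construction rather than by differentiating through the nonlinear solver's iterations.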

