Differentiable Implicit Layers

10/14/2020
by Andreas Look, et al.

In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them well suited as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
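The backward pass the abstract refers to follows from the implicit function theorem: if z* satisfies F(z*, x, theta) = 0, then dz*/dtheta = -(dF/dz)^{-1} dF/dtheta at the solution, so gradients never need to unroll the forward solver. Below is a minimal sketch in JAX, assuming a toy residual (one implicit Euler step of the linear test ODE dz/dt = -theta * z); the function names, the Newton solver, and the step size H are illustrative choices, not the paper's implementation.

```python
import jax
import jax.numpy as jnp

H = 0.1  # step size (illustrative)

def f(z, theta):
    # Toy dynamics dz/dt = -theta * z (elementwise); a stand-in, not the paper's.
    return -theta * z

def residual(z_next, z, theta):
    # F(z_next, z, theta) = 0 defines one implicit Euler step.
    return z_next - z - H * f(z_next, theta)

def newton_solve(z, theta, iters=20):
    # Forward pass: find z_next with F(z_next, z, theta) = 0 by Newton's method.
    z_next = z
    for _ in range(iters):
        F = residual(z_next, z, theta)
        J = jax.jacobian(residual, argnums=0)(z_next, z, theta)
        z_next = z_next - jnp.linalg.solve(J, F)
    return z_next

@jax.custom_vjp
def implicit_euler_step(z, theta):
    return newton_solve(z, theta)

def _fwd(z, theta):
    z_next = newton_solve(z, theta)
    return z_next, (z_next, z, theta)

def _bwd(saved, g):
    z_next, z, theta = saved
    # Implicit function theorem: with A = dF/dz_next at the solution,
    # the VJP for each remaining input is -(dF/d.)^T A^{-T} g.
    A = jax.jacobian(residual, argnums=0)(z_next, z, theta)
    w = jnp.linalg.solve(A.T, g)
    _, vjp = jax.vjp(lambda z_, th_: residual(z_next, z_, th_), z, theta)
    return vjp(-w)  # (grad wrt z, grad wrt theta)

implicit_euler_step.defvjp(_fwd, _bwd)

# Usage: gradients flow through one linear solve, not the Newton loop.
z0 = jnp.array([1.0, 2.0, 3.0])
theta = jnp.array([0.5, 1.0, 1.5])
loss = lambda th: jnp.sum(implicit_euler_step(z0, th) ** 2)
print(jax.grad(loss)(theta))
```

Because the backward pass only needs the Jacobian at the converged solution, its cost is a single linear solve regardless of how many iterations the forward solver took, which is what makes implicit-function-theorem backpropagation efficient.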


Related research

07/04/2022 · Variational Neural Networks
Bayesian Neural Networks (BNNs) provide a tool to estimate the uncertain...

02/10/2023 · DNArch: Learning Convolutional Neural Architectures by Backpropagation
We present Differentiable Neural Architectures (DNArch), a method that j...

04/16/2021 · Signed Distance Function Computation from an Implicit Surface
We describe in this short note a technique to convert an implicit surfac...

06/16/2019 · Pi-surfaces: products of implicit surfaces towards constructive composition of 3D objects
Implicit functions provide a fundamental basis to model 3D objects, no m...

06/01/2018 · Backpropagation for Implicit Spectral Densities
Most successful machine intelligence systems rely on gradient-based lear...

07/31/2017 · A note about Euler's inequality and automated reasoning with dynamic geometry
Using implicit loci in GeoGebra, Euler's R ≥ 2r inequality can be investig...

10/29/2021 · FC2T2: The Fast Continuous Convolutional Taylor Transform with Applications in Vision and Graphics
Series expansions have been a cornerstone of applied mathematics and eng...
