Implicitly Defined Layers in Neural Networks

03/03/2020
by   Qianggong Zhang, et al.

In conventional formulations of multilayer feedforward neural networks, the individual layers are customarily defined by explicit functions. In this paper we demonstrate that defining individual layers in a neural network implicitly provides much richer representations than the standard explicit ones, consequently enabling a vastly broader class of end-to-end trainable architectures. We present a general framework of implicitly defined layers, in which much of the theoretical analysis of such layers can be addressed through the implicit function theorem. We also show how implicitly defined layers can be seamlessly incorporated into existing machine learning libraries, in particular with respect to the automatic differentiation techniques used in backpropagation-based training. Finally, we demonstrate the versatility and relevance of our proposed approach on a number of diverse example problems, with promising results.
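To illustrate the idea the abstract describes, here is a minimal sketch (not the paper's implementation) of a scalar implicit layer: the output z is defined not by an explicit formula but by the equation f(z, x) = z - tanh(w*z + x) = 0, and its gradient dz/dx follows from the implicit function theorem rather than from backpropagating through the solver. The fixed-point form, the weight w, and the helper names are illustrative assumptions.

```python
import math

def solve_fixed_point(w, x, tol=1e-12, max_iter=10_000):
    # The layer output z is defined implicitly by f(z, x) = z - tanh(w*z + x) = 0.
    # For |w| < 1 the map z -> tanh(w*z + x) is a contraction, so simple
    # fixed-point iteration converges.
    z = 0.0
    for _ in range(max_iter):
        z_new = math.tanh(w * z + x)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

def implicit_grad(w, x):
    # Implicit function theorem: dz/dx = -(df/dz)^(-1) * df/dx, where
    #   df/dz = 1 - w * sech^2(w*z + x)   and   df/dx = -sech^2(w*z + x).
    z = solve_fixed_point(w, x)
    s = 1.0 - math.tanh(w * z + x) ** 2   # sech^2(w*z + x)
    return z, s / (1.0 - w * s)

# Sanity check against a central finite-difference estimate of dz/dx.
w, x = 0.5, 0.3
z, g = implicit_grad(w, x)
eps = 1e-6
fd = (solve_fixed_point(w, x + eps) - solve_fixed_point(w, x - eps)) / (2 * eps)
print(z, g, fd)
```

The point of the sketch is that the backward pass never differentiates through the iterations of the solver: however z was computed, the gradient comes from one linear solve at the solution, which is what lets such layers plug into standard autodiff frameworks.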

Related research

09/11/2019 · Deep Declarative Networks: A New Hope
We introduce a new class of end-to-end learnable models wherein data pro...

03/01/2017 · OptNet: Differentiable Optimization as a Layer in Neural Networks
This paper presents OptNet, a network architecture that integrates optim...

10/05/2021 · Unifying AI Algorithms with Probabilistic Programming using Implicitly Defined Representations
We introduce Scruff, a new framework for developing AI systems using pro...

05/06/2022 · Beyond backpropagation: implicit gradients for bilevel optimization
This paper reviews gradient-based techniques to solve bilevel optimizati...

08/22/2018 · An Explicit Neural Network Construction for Piecewise Constant Function Approximation
We present an explicit construction for feedforward neural network (FNN)...

08/21/2016 · Neural Networks and Chaos: Construction, Evaluation of Chaotic Networks, and Prediction of Chaos with Multilayer Feedforward Networks
Many research works deal with chaotic neural networks for various fields...

10/28/2019 · On approximating ∇f with neural networks
Consider a feedforward neural network ψ: R^d→R^d such that ψ≈∇f, where ...
