Deep Equilibrium Models

09/03/2019
by Shaojie Bai et al.

We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by the observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach, which directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite-depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements as existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to-88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq .
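The two ingredients described above can be illustrated with a small numpy sketch: a single weight-tied map `f(z, x) = tanh(Wz + x)` (an illustrative layer, not the paper's architecture), a forward pass that iterates to the fixed point `z* = f(z*, x)` (the paper uses quasi-Newton root-finding such as Broyden's method rather than plain iteration), and a backward pass that obtains `dz*/dx` from the implicit function theorem, `dz*/dx = (I - ∂f/∂z)^{-1} ∂f/∂x` evaluated at `z*`, so no iteration history needs to be stored.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Small weight norm keeps f a contraction, so fixed-point iteration converges.
W = rng.normal(scale=0.4 / np.sqrt(n), size=(n, n))
x = rng.normal(size=n)

def f(z, x):
    return np.tanh(W @ z + x)

# Forward: iterate z <- f(z, x) to the equilibrium z* = f(z*, x).
z = np.zeros(n)
for _ in range(200):
    z = f(z, x)

# Backward via implicit differentiation at the equilibrium:
#   dz*/dx = (I - df/dz)^{-1} df/dx, evaluated at z = z*.
s = 1.0 - np.tanh(W @ z + x) ** 2   # elementwise tanh derivative
J_z = s[:, None] * W                # df/dz at z*
J_x = np.diag(s)                    # df/dx at z*
dz_dx = np.linalg.solve(np.eye(n) - J_z, J_x)

# Sanity check against finite differences through the full iteration.
eps = 1e-6
fd = np.zeros((n, n))
for j in range(n):
    xp = x.copy()
    xp[j] += eps
    zp = np.zeros(n)
    for _ in range(200):
        zp = f(zp, xp)
    fd[:, j] = (zp - z) / eps

print(np.max(np.abs(dz_dx - fd)))   # agrees to finite-difference accuracy
```

Note that the backward pass touches only `z*` and a single linear solve; this is the source of the constant-memory property, since no intermediate activations from the solver need to be retained for backpropagation.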


Related research:

- Stabilizing Equilibrium Models by Jacobian Regularization (06/28/2021)
- Trellis Networks for Sequence Modeling (10/15/2018)
- Deep Equilibrium Optical Flow Estimation (04/18/2022)
- Multiscale Deep Equilibrium Models (06/15/2020)
- Stable Invariant Models via Koopman Spectra (07/15/2022)
- State-driven Implicit Modeling for Sparsity and Robustness in Neural Networks (09/19/2022)
- Rockmate: an Efficient, Fast, Automatic and Generic Tool for Re-materialization in PyTorch (07/03/2023)
