Joint inference and input optimization in equilibrium networks

11/25/2021
by Swaminathan Gurumurthy, et al.

Many tasks in deep learning involve optimizing over the inputs to a network to minimize or maximize some objective; examples include optimization over latent spaces in a generative model to match a target image, or adversarially perturbing an input to degrade classifier performance. Performing such optimization is traditionally quite costly, however, as it requires a complete forward and backward pass through the network for each gradient step. Separately, a recent thread of research has developed the deep equilibrium (DEQ) model, a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although naively using DEQs for these optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can leverage the fact that gradient-based optimization can itself be cast as a fixed-point iteration to substantially improve the overall speed. That is, we simultaneously solve for the DEQ fixed point and optimize over the network inputs, all within a single “augmented” DEQ model that jointly encodes both the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently train DEQ models for tasks that traditionally rely on an “inner” optimization loop. We demonstrate this strategy on a variety of tasks: training generative models while optimizing over latent codes, training models for inverse problems such as denoising and inpainting, adversarial training, and gradient-based meta-learning.
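
To make the contrast concrete, here is a minimal PyTorch sketch of both regimes on a toy problem. Everything in it is an assumption made for illustration: the layer f, its random weights W, U, b, the quadratic objective and target, the step size alpha, and the iteration counts do not come from the paper, and the joint update approximates the gradient with a single differentiated layer step rather than the full implicit gradient that the authors fold into their augmented DEQ.

    import torch

    # Toy DEQ layer: the equilibrium z* satisfies z* = f(z*, x).
    # W, U, b are fixed random weights here (hypothetical); in a real DEQ
    # they are learned. W is scaled down so the iteration contracts.
    torch.manual_seed(0)
    d = 8
    W = 0.5 * torch.randn(d, d) / d ** 0.5
    U = torch.randn(d, d) / d ** 0.5
    b = torch.randn(d)

    def f(z, x):
        return torch.tanh(z @ W.T + x @ U.T + b)

    target = torch.randn(d)  # hypothetical target for the input objective

    def objective(z):
        return 0.5 * ((z - target) ** 2).sum()

    # Naive scheme: every gradient step on x pays for a full inner
    # fixed-point solve, plus a backward pass through the unrolled solve.
    def naive_step(x, alpha=0.1, n_fp=50):
        x = x.detach().requires_grad_(True)
        z = torch.zeros(d)
        for _ in range(n_fp):
            z = f(z, x)
        loss = objective(z)
        loss.backward()
        return (x - alpha * x.grad).detach(), loss.item()

    # Joint scheme: treat (z, x) as one augmented state and apply a single
    # cheap update to each per iteration; z relaxes toward its fixed point
    # while x descends the objective at the current iterate.
    def joint_step(z, x, alpha=0.1):
        x = x.detach().requires_grad_(True)
        z_new = f(z.detach(), x)      # one fixed-point update on z
        loss = objective(z_new)
        loss.backward()               # gradient w.r.t. x through this one step
        return z_new.detach(), (x - alpha * x.grad).detach(), loss.item()

    x_naive = torch.zeros(d)
    for _ in range(20):               # 20 expensive outer steps
        x_naive, loss_naive = naive_step(x_naive)

    z, x = torch.zeros(d), torch.zeros(d)
    for _ in range(200):              # 200 cheap augmented updates
        z, x, loss_joint = joint_step(z, x)

    print(f"naive: {loss_naive:.4f}   joint: {loss_joint:.4f}")

Per iteration, the joint loop pays for one layer evaluation and one short backward pass, while each naive step pays for an entire inner fixed-point solve; the paper's augmented-DEQ formulation carries this idea further by encoding the optimization process itself inside the same fixed-point iteration.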

Related research

06/28/2021 · Stabilizing Equilibrium Models by Jacobian Regularization
Deep equilibrium networks (DEQs) are a new class of models that eschews ...

02/07/2020 · Differentiable Fixed-Point Iteration Layer
Recently, several studies proposed methods to utilize some restricted cl...

07/16/2019 · Learning Multimodal Fixed-Point Weights using Gradient Descent
Due to their high computational complexity, deep neural networks are sti...

10/03/2022 · WaveFit: An Iterative and Non-autoregressive Neural Vocoder based on Fixed-Point Iteration
Denoising diffusion probabilistic models (DDPMs) and generative adversar...

06/20/2023 · Accelerating Generalized Random Forests with Fixed-Point Trees
Generalized random forests (arXiv:1610.01271) build upon the well-establis...

10/23/2022 · Deep Equilibrium Approaches to Diffusion Models
Diffusion-based generative models are extremely effective in generating ...

10/30/2021 · Neural Network based on Automatic Differentiation Transformation of Numeric Iterate-to-Fixedpoint
This work proposes a Neural Network model that can control its depth usi...
