Robust Implicit Networks via Non-Euclidean Contractions

06/06/2021
by   Saber Jafarpour, et al.

Implicit neural networks, a.k.a. deep equilibrium networks, are a class of implicit-depth learning models in which function evaluation is performed by solving a fixed-point equation. They generalize classic feedforward models and are equivalent to infinite-depth weight-tied feedforward networks. While implicit models show improved accuracy and significant reductions in memory consumption, they can suffer from ill-posedness and convergence instability. This paper provides a new framework to design well-posed and robust implicit neural networks based upon contraction theory for the non-Euclidean norm ℓ_∞. Our framework includes (i) a novel condition for well-posedness based on one-sided Lipschitz constants, (ii) an average iteration for computing fixed points, and (iii) explicit estimates on input-output Lipschitz constants. Additionally, we design a training problem with the well-posedness condition and the average iteration as constraints and, to achieve robust models, with the input-output Lipschitz constant as a regularizer. Our ℓ_∞ well-posedness condition leads to a larger polytopic training search space than existing conditions, and our average iteration enjoys accelerated convergence. Finally, we perform several numerical experiments for function estimation and digit classification on the MNIST dataset. Our numerical results demonstrate improved accuracy and robustness of the implicit models with smaller input-output Lipschitz bounds.
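The sketch below illustrates the two computational ingredients the abstract names: checking an ℓ_∞ one-sided Lipschitz (logarithmic norm) well-posedness condition and computing the fixed point with a damped average iteration. It assumes an implicit layer of the form x = φ(Ax + Bu + b) with a 1-Lipschitz activation; the ReLU choice, the step size alpha, and all function names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the paper's implementation) of an implicit layer
#     x = phi(A x + B u + b),  phi = ReLU (1-Lipschitz, entrywise),
# with an ell_inf well-posedness check and a damped fixed-point iteration.
import numpy as np

def mu_inf(A):
    """ell_inf logarithmic norm of A: max_i ( A[i,i] + sum_{j != i} |A[i,j]| )."""
    off_diag = np.abs(A) - np.diag(np.abs(np.diag(A)))   # zero the diagonal of |A|
    return np.max(np.diag(A) + off_diag.sum(axis=1))

def well_posed(A, tol=1e-9):
    """Hedged reading of the ell_inf condition: fixed point is well defined if mu_inf(A) < 1."""
    return mu_inf(A) < 1.0 - tol

def average_iteration(A, B, b, u, alpha=0.5, max_iter=500, tol=1e-8):
    """Damped (average) iteration  x_{k+1} = (1 - alpha) x_k + alpha * phi(A x_k + B u + b),
    which is expected to converge for suitable alpha when mu_inf(A) < 1."""
    phi = lambda z: np.maximum(z, 0.0)                    # ReLU activation (illustrative)
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x_next = (1.0 - alpha) * x + alpha * phi(A @ x + B @ u + b)
        if np.max(np.abs(x_next - x)) < tol:              # ell_inf stopping criterion
            return x_next
        x = x_next
    return x

# Toy usage: rescale a random A so that mu_inf(A) <= 0.9, then solve for the fixed point.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)); A *= 0.9 / max(mu_inf(A), 1.0)
B = rng.normal(size=(8, 4)); b = rng.normal(size=8); u = rng.normal(size=4)
assert well_posed(A)
x_star = average_iteration(A, B, b, u)
```

In this reading, the same μ_∞(A) < 1 condition that guarantees a unique fixed point also drives the convergence of the damped iteration, which is why the paper can use it both as a training constraint and to justify the solver.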


Related research

Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach (08/08/2022)
This paper proposes a theoretical and computational framework for traini...

Implicit Deep Learning (08/17/2019)
We define a new class of "implicit" deep learning prediction rules that ...

Fixed Point Networks: Implicit Depth Models with Jacobian-Free Backprop (03/23/2021)
A growing trend in deep learning replaces fixed depth models by approxim...

Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach (12/10/2021)
Implicit neural networks are a general class of learning models that rep...

Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks (04/01/2022)
We use interval reachability analysis to obtain robustness guarantees fo...

Lipschitz Bounded Equilibrium Networks (10/05/2020)
This paper introduces new parameterizations of equilibrium neural networ...

ModelGuard: Runtime Validation of Lipschitz-continuous Models (04/30/2021)
This paper presents ModelGuard, a sampling-based approach to runtime mod...
