Certified Invertibility in Neural Networks via Mixed-Integer Programming

01/27/2023
by Tianqi Cui, et al.

Neural networks are notoriously vulnerable to adversarial attacks: small, imperceptible perturbations that can change a network's output drastically. In the reverse direction, there may exist large, meaningful perturbations that leave the network's decision unchanged (excessive invariance, or noninvertibility). We study the latter phenomenon in two contexts: (a) discrete-time dynamical system identification and (b) calibration of the output of one neural network to the output of another (neural network matching). For ReLU networks and L_p norms (p = 1, 2, ∞), we formulate these optimization problems as mixed-integer programs (MIPs), which we apply to neural-network approximators of dynamical systems. We also discuss the applicability of our results to invertibility certification for transformations between neural networks, e.g. between different levels of pruning.
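As a rough illustration of the kind of formulation involved (not the paper's exact one), the sketch below encodes a tiny ReLU network into a MIP twice, constrains the two copies to produce the same output, and maximizes the separation between the two inputs along one coordinate; a large optimum certifies noninvertibility along that direction. The network weights, sizes, big-M constant, box bounds, and the PuLP solver choice are all illustrative assumptions.

import pulp

# Toy 2-4-1 ReLU network with assumed, fixed weights.
W1 = [[1.0, -1.0], [0.5, 2.0], [-1.5, 1.0], [2.0, 0.5]]
b1 = [0.1, -0.2, 0.3, 0.0]
W2 = [1.0, -0.5, 2.0, 1.5]
b2 = 0.05
M = 100.0  # big-M; in practice derived from interval bounds on each unit

prob = pulp.LpProblem("noninvertibility", pulp.LpMaximize)

def encode(tag):
    """Add variables/constraints encoding f on one copy of the input."""
    x = [pulp.LpVariable(f"x{tag}_{i}", -1, 1) for i in range(2)]
    z = []
    for j in range(4):
        pre = pulp.lpSum(W1[j][i] * x[i] for i in range(2)) + b1[j]
        zj = pulp.LpVariable(f"z{tag}_{j}", 0, M)
        aj = pulp.LpVariable(f"a{tag}_{j}", cat="Binary")
        # Standard big-M encoding of zj = max(0, pre):
        prob += zj >= pre
        prob += zj <= pre + M * (1 - aj)
        prob += zj <= M * aj
        z.append(zj)
    y = pulp.lpSum(W2[j] * z[j] for j in range(4)) + b2
    return x, y

xA, yA = encode("A")
xB, yB = encode("B")
prob += yA == yB                     # identical network outputs
prob += xA[0] - xB[0], "separation"  # maximize gap in the first coordinate
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("certified input gap at equal output:", pulp.value(prob.objective))

Maximizing a full L_1 or L_∞ distance (rather than one coordinate) requires additional binary variables to linearize the absolute values, but follows the same big-M pattern.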

Related research

07/06/2019 · ReLU Networks as Surrogate Models in Mixed-Integer Linear Programs
We consider the embedding of piecewise-linear deep neural networks (ReLU...

08/19/2020 · ReLU activated Multi-Layer Neural Networks trained with Mixed Integer Linear Programs
This paper is a case study to demonstrate that, in principle, multi-laye...

03/26/2022 · Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding
The robustness of deep neural networks has received significant interest...

06/09/2021 · ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs
Deep neural networks often lack the safety and robustness guarantees nee...

11/20/2021 · Modeling Design and Control Problems Involving Neural Network Surrogates
We consider nonlinear optimization problems that involve surrogate model...

11/28/2018 · A randomized gradient-free attack on ReLU networks
It has recently been shown that neural networks but also other classifie...

07/18/2020 · Abstraction based Output Range Analysis for Neural Networks
In this paper, we consider the problem of output range analysis for feed...
