Certified Invertibility in Neural Networks via Mixed-Integer Programming

01/27/2023
by   Tianqi Cui, et al.

Neural networks are notoriously vulnerable to adversarial attacks: small, imperceptible perturbations that can change the network's output drastically. In the reverse direction, there may exist large, meaningful perturbations that leave the network's decision unchanged (excessive invariance, noninvertibility). We study the latter phenomenon in two contexts: (a) discrete-time dynamical system identification, and (b) calibration of the output of one neural network to the output of another (neural network matching). For ReLU networks and L_p norms (p=1,2,∞), we formulate these optimization problems as mixed-integer programs (MIPs) that apply to neural network approximators of dynamical systems. We also discuss the applicability of our results to invertibility certification in transformations between neural networks (e.g. at different levels of pruning).
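To make the idea concrete, here is a minimal sketch of the kind of MIP the abstract describes, using the standard big-M encoding of a ReLU unit (this toy one-neuron network, the variable names, and the use of `scipy.optimize.milp` are illustrative assumptions, not the paper's actual formulation). The program searches for the largest input shift that leaves the network's output exactly unchanged, i.e. a certificate of noninvertibility:

```python
# Hypothetical sketch: certify excessive invariance of a one-neuron ReLU
# network y = ReLU(w*x + b) by solving a big-M mixed-integer program that
# finds the largest input perturbation leaving the output unchanged.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

w, b = 1.0, 0.0          # toy network parameters (assumed for illustration)
x0 = -1.0                # reference input; ReLU(w*x0 + b) = 0
y0 = max(0.0, w * x0 + b)
M = 10.0                 # big-M constant, valid for the box x in [-5, 5]

# Decision variables: [x, z, h, a], where z = w*x + b (pre-activation),
# h = ReLU(z) (post-activation), and a is a binary indicator for z > 0.
c = np.array([-1.0, 0.0, 0.0, 0.0])   # maximize x  <=>  minimize -x

constraints = [
    LinearConstraint([[-w, 1.0, 0.0, 0.0]], b, b),         # z - w*x = b
    LinearConstraint([[0.0, -1.0, 1.0, 0.0]], 0, np.inf),  # h >= z
    LinearConstraint([[0.0, -1.0, 1.0, M]], -np.inf, M),   # h <= z + M(1-a)
    LinearConstraint([[0.0, 0.0, 1.0, -M]], -np.inf, 0),   # h <= M*a
    LinearConstraint([[0.0, 0.0, 1.0, 0.0]], y0, y0),      # output unchanged
]
bounds = Bounds([-5, -10, 0, 0], [5, 10, M, 1])
integrality = np.array([0, 0, 0, 1])   # only the indicator a is binary

res = milp(c=c, constraints=constraints,
           integrality=integrality, bounds=bounds)
# res.x[0] - x0 is the largest shift certified to preserve the output
print(res.x[0] - x0)
```

Because every x ≤ 0 maps to the same output 0 here, the solver finds the full invariant interval; the paper's formulations play the same game for multi-layer ReLU networks under L_p norms, with one binary variable per neuron.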
