Towards Fast Computation of Certified Robustness for ReLU Networks

04/25/2018
by   Tsui-Wei Weng, et al.

Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer, CAV 2017]. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound on the minimum distortion is possible. Currently available methods for computing such a bound are either time-consuming or deliver bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms, Fast-Lin and Fast-Lip, that certify non-trivial lower bounds on the minimum distortion, either by bounding each ReLU unit with appropriate linear functions (Fast-Lin) or by bounding the local Lipschitz constant (Fast-Lip). Experiments show that (1) our proposed methods deliver bounds close to (within a 2-3X gap of) the exact minimum distortion found by Reluplex on small MNIST networks, while our algorithms are more than 10,000 times faster; (2) our methods deliver bounds of similar quality (the gap is within 35%) on larger networks compared to methods based on solving linear programming problems, while our algorithms are 33-14,000 times faster; (3) our methods scale to large MNIST and CIFAR networks with up to 7 layers and more than 10,000 neurons, finishing within tens of seconds on a single CPU core. In addition, we show that there is in fact no polynomial-time algorithm that can approximately find the minimum ℓ_1 adversarial distortion of a ReLU network with a 0.99 ln n approximation ratio unless NP=P, where n is the number of neurons in the network.
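To make the Fast-Lin idea concrete, here is a minimal NumPy sketch of the linear relaxation it applies to each ReLU unit, assuming the pre-activation bounds l ≤ x ≤ u for that unit have already been computed by propagating bounds through the earlier layers. The function name and the example values are illustrative, not taken from the authors' code.

```python
# A minimal sketch (not the authors' implementation) of the linear ReLU
# relaxation underlying Fast-Lin: given pre-activation bounds l <= x <= u,
# ReLU(x) = max(0, x) is sandwiched between two parallel linear functions.
import numpy as np

def relu_linear_bounds(l, u):
    """Return (slope, upper_intercept, lower_intercept) such that
    slope * x + lower_intercept <= ReLU(x) <= slope * x + upper_intercept
    holds for every x in [l, u] (elementwise over arrays of neurons)."""
    l = np.asarray(l, dtype=float)
    u = np.asarray(u, dtype=float)
    slope = np.zeros_like(l)
    upper_intercept = np.zeros_like(l)
    lower_intercept = np.zeros_like(l)

    # Case 1: neuron always active (l >= 0): ReLU(x) = x exactly.
    active = l >= 0
    slope[active] = 1.0

    # Case 2: neuron always inactive (u <= 0): ReLU(x) = 0 exactly,
    # so slope and intercepts stay 0.

    # Case 3: uncertain neuron (l < 0 < u): the chord u/(u-l) * (x - l) is an
    # upper bound, and Fast-Lin uses the parallel line u/(u-l) * x as the
    # lower bound, so both bounds share the same slope.
    unc = (l < 0) & (u > 0)
    s = u[unc] / (u[unc] - l[unc])
    slope[unc] = s
    upper_intercept[unc] = -s * l[unc]  # chord through (l, 0) and (u, u)

    return slope, upper_intercept, lower_intercept

# Example: bounds [-1, 3] give slope 0.75, so
# 0.75 * x <= ReLU(x) <= 0.75 * x + 0.75 on that interval.
print(relu_linear_bounds(np.array([-1.0]), np.array([3.0])))
```

Because the upper and lower bounds share the same slope, the relaxed network remains a linear function of the input, which is what allows Fast-Lin to propagate explicit linear bounds layer by layer and certify a lower bound on the minimum distortion with only a few matrix multiplications per layer.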



Related research:

11/29/2018 · CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Verifying robustness of neural network classifiers has attracted great i...

02/15/2021 · Scaling Up Exact Neural Network Compression by ReLU Stability
We can compress a neural network while exactly preserving its underlying...

11/02/2018 · Efficient Neural Network Robustness Certification with General Activation Functions
Finding minimum distortion of adversarial examples and thus certifying r...

08/14/2020 · Analytical bounds on the local Lipschitz constants of affine-ReLU functions
In this paper, we determine analytical bounds on the local Lipschitz con...

02/01/2019 · Robustness Certificates Against Adversarial Examples for ReLU Networks
While neural networks have achieved high performance in different learni...

02/10/2020 · Polynomial Optimization for Bounding Lipschitz Constants of Deep Networks
The Lipschitz constant of a network plays an important role in many appl...

05/27/2019 · A Rate-Distortion Framework for Explaining Neural Network Decisions
We formalise the widespread idea of interpreting neural network decision...
