Hierarchical Verification for Adversarial Robustness

07/23/2020
by Cong Han Lim, et al.

We introduce a new framework for the exact point-wise ℓ_p robustness verification problem that exploits the layer-wise geometric structure of deep feed-forward networks with rectified linear activations (ReLU networks). The activation regions of the network partition the input space, and one can verify the ℓ_p robustness around a point by checking all the activation regions within the desired radius. The GeoCert algorithm (Jordan et al., NeurIPS 2019) treats this partition as a generic polyhedral complex in order to detect which region to check next. In contrast, our LayerCert framework considers the nested hyperplane arrangement structure induced by the layers of the ReLU network and explores regions in a hierarchical manner. We show that, under certain conditions on the algorithm parameters, LayerCert provably reduces the number and size of the convex programs that one needs to solve compared to GeoCert. Furthermore, our LayerCert framework allows the incorporation of lower bounding routines based on convex relaxations to further improve performance. Experimental results demonstrate that LayerCert can significantly reduce both the number of convex programs solved and the running time over the state-of-the-art.
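To make the activation-region view concrete, here is a minimal, illustrative sketch (not the authors' implementation, and the function and variable names such as forward_with_pattern, weights, and biases are hypothetical). It shows how the on/off pattern of ReLUs at a point identifies the polyhedral activation region containing that point, which is the basic object that region-based verifiers like GeoCert and LayerCert enumerate.

```python
# Illustrative sketch only: computing the ReLU activation pattern of a point.
# Within a fixed pattern the network is affine, so each pattern corresponds
# to one polyhedral activation region of the input space.
import numpy as np

def forward_with_pattern(x, weights, biases):
    """Run a feed-forward ReLU network and record which hidden neurons
    are active at each layer (the activation pattern of x)."""
    pattern = []
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        pre = W @ h + b
        on = pre > 0                  # which ReLUs fire at this layer
        pattern.append(on)
        h = pre * on                  # ReLU applied to the pre-activations
    logits = weights[-1] @ h + biases[-1]
    return logits, pattern

# Toy 2-layer network on 2-d inputs (random weights, illustration only).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 2)), rng.standard_normal((3, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(3)]

x0 = np.array([0.3, -0.1])
logits, pattern = forward_with_pattern(x0, weights, biases)
print("predicted class:", logits.argmax())
print("activation pattern of x0:", [p.astype(int) for p in pattern])
```

Exact verification in the sense described above would enumerate all activation regions that intersect the ℓ_p ball of the chosen radius and solve a convex program per region; GeoCert orders this search over a generic polyhedral complex, while LayerCert exploits the nested, layer-wise hyperplane-arrangement structure to prune it.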

