Efficient Neural Network Robustness Certification with General Activation Functions

11/02/2018
by   Huan Zhang, et al.

Finding the minimum distortion of adversarial examples, and thus certifying robustness of neural network classifiers at given data points, is known to be a challenging problem. Nevertheless, recent work has shown that it is possible to give a non-trivial certified lower bound on the minimum adversarial distortion, and progress in this direction has been made by exploiting the piece-wise linear nature of ReLU activations. However, robustness certification for general activation functions remains largely unexplored. To address this issue, we introduce CROWN, a general framework for certifying the robustness of neural networks with general activation functions at given input data points. The novelty of our algorithm lies in bounding a given activation function with linear and quadratic functions, allowing it to handle general activations including, but not limited to, four popular choices: ReLU, tanh, sigmoid, and arctan. In addition, we facilitate the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation. Experimental results show that CROWN notably improves the certified lower bounds on ReLU networks compared to the current state-of-the-art algorithm, Fast-Lin, while maintaining comparable computational efficiency. Furthermore, CROWN demonstrates its effectiveness and flexibility on networks with general activation functions, including tanh, sigmoid, and arctan.
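To make the core idea concrete, below is a minimal sketch, assuming a ReLU network, of the per-neuron linear relaxation that CROWN-style certification builds on: given pre-activation bounds [l, u] for a neuron, the activation is sandwiched between two linear functions, and such bounds are propagated through the layers to certify a lower bound on the minimum distortion. This is not code from the paper; the function name relu_linear_bounds and the adaptive flag are hypothetical placeholders used only for illustration.

```python
# Illustrative sketch (not the authors' code): linear relaxation of a single
# ReLU neuron given pre-activation bounds [l, u], in the spirit of CROWN/Fast-Lin.

def relu_linear_bounds(l, u, adaptive=True):
    """Return (a_L, b_L, a_U, b_U) such that
       a_L*y + b_L <= relu(y) <= a_U*y + b_U  for all y in [l, u]."""
    if u <= 0:                      # neuron is always inactive on [l, u]
        return 0.0, 0.0, 0.0, 0.0
    if l >= 0:                      # neuron is always active on [l, u]
        return 1.0, 0.0, 1.0, 0.0
    # Unstable neuron (l < 0 < u): the chord through (l, 0) and (u, u) is a
    # valid upper bound; any line a*y with slope a in [0, 1] is a valid lower
    # bound.  An adaptive choice of the lower slope tightens the relaxation.
    a_U = u / (u - l)
    b_U = -a_U * l
    a_L = (1.0 if u >= -l else 0.0) if adaptive else a_U
    return a_L, 0.0, a_U, b_U


if __name__ == "__main__":
    # Example: an unstable neuron with pre-activation range [-1, 2].
    print(relu_linear_bounds(-1.0, 2.0))   # (1.0, 0.0, 0.666..., 0.666...)
```

For smooth activations such as tanh, sigmoid, or arctan, the same interface applies, but the bounding lines are chosen among chords and tangent lines depending on where [l, u] lies relative to the function's inflection point, which is what lets the framework handle general activations.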

Related research

11/29/2018
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Verifying robustness of neural network classifiers has attracted great i...

09/09/2021
ErfAct and PSerf: Non-monotonic smooth trainable Activation Functions
An activation function is a crucial component of a neural network that i...

01/31/2022
LinSyn: Synthesizing Tight Linear Bounds for Arbitrary Neural Network Activation Functions
The most scalable approaches to certifying neural network robustness dep...

09/09/2022
Fast Neural Kernel Embeddings for General Activations
Infinite width limit has shed light on generalization and optimization a...

04/25/2018
Towards Fast Computation of Certified Robustness for ReLU Networks
Verifying the robustness property of a general Rectified Linear Unit (Re...

07/23/2020
Hierarchical Verification for Adversarial Robustness
We introduce a new framework for the exact point-wise ℓ_p robustness ver...

05/22/2023
DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using Bernstein Polynomial Activations and Precise Bound Propagation
Formal certification of Neural Networks (NNs) is crucial for ensuring th...
