Towards Certifying ℓ_∞ Robustness using Neural Networks with ℓ_∞-dist Neurons

02/10/2021
by   Bohang Zhang, et al.

It is well known that standard neural networks, even those with high classification accuracy, are vulnerable to small ℓ_∞-norm bounded adversarial perturbations. Although many attempts have been made, most previous works either provide only empirical verification of a defense against particular attack methods, or develop certified robustness guarantees only in limited scenarios. In this paper, we seek a new approach to developing a theoretically principled neural network that inherently resists ℓ_∞ perturbations. In particular, we design a novel neuron that uses the ℓ_∞-distance as its basic operation (which we call the ℓ_∞-dist neuron), and show that any neural network constructed from ℓ_∞-dist neurons (called an ℓ_∞-dist net) is naturally a 1-Lipschitz function with respect to the ℓ_∞-norm. This directly provides a rigorous guarantee of certified robustness based on the margin of the prediction outputs. We also prove that such networks have enough expressive power to approximate any 1-Lipschitz function, with a robust generalization guarantee. Our experimental results show that the proposed network is promising. Using ℓ_∞-dist nets as the basic building blocks, we consistently achieve state-of-the-art performance on commonly used datasets: 93.09 (ϵ=0.1) and 35.10 …
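
To make the construction concrete, here is a minimal NumPy sketch, not the authors' implementation, of an ℓ_∞-dist layer and the margin-based certificate described above. The layer shapes, the toy two-layer net, and the argmax prediction rule are assumptions made for this illustration; the paper's actual final-layer construction and training procedure differ.

    import numpy as np

    def linf_dist_layer(x, W, b):
        # One l_inf-dist layer: unit i computes u_i(x) = ||x - w_i||_inf + b_i.
        # By the triangle inequality, |u_i(x) - u_i(x')| <= ||x - x'||_inf,
        # so the layer (and any stack of such layers) is 1-Lipschitz w.r.t. l_inf.
        return np.max(np.abs(x[None, :] - W), axis=1) + b

    def certified_radius(logits):
        # If every output coordinate is 1-Lipschitz in l_inf, a perturbation of
        # l_inf-norm eps moves each logit by at most eps, so the argmax prediction
        # is provably unchanged whenever eps is below half the margin between
        # the two largest logits.
        sorted_logits = np.sort(logits)
        return (sorted_logits[-1] - sorted_logits[-2]) / 2.0

    # Toy usage with arbitrary shapes and random weights (hypothetical example).
    rng = np.random.default_rng(0)
    x = rng.random(784)
    W1, b1 = rng.random((128, 784)), rng.random(128)
    W2, b2 = rng.random((10, 128)), rng.random(10)
    logits = linf_dist_layer(linf_dist_layer(x, W1, b1), W2, b2)
    print("predicted class:", int(np.argmax(logits)),
          "certified l_inf radius:", float(certified_radius(logits)))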

Related research

10/04/2022 · Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness
Designing neural networks with bounded Lipschitz constant is a promising...

03/24/2020 · Adversarial Perturbations Fool Deepfake Detectors
This work uses adversarial perturbations to enhance deepfake images and ...

09/30/2020 · A law of robustness for two-layers neural networks
We initiate the study of the inherent tradeoffs between the size of a ne...

08/18/2019 · Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation
Deep neural networks are known to be fragile to small adversarial pertur...

02/16/2021 · A Law of Robustness for Weight-bounded Neural Networks
Robustness of deep neural networks against adversarial perturbations is ...

08/20/2020 · On ℓ_p-norm Robustness of Ensemble Stumps and Trees
Recent papers have demonstrated that ensemble stumps and trees could be ...

02/10/2022 · Controlling the Complexity and Lipschitz Constant improves polynomial nets
While the class of Polynomial Nets demonstrates comparable performance t...
