Adversarial robustness guarantees for random deep neural networks

04/13/2020
by Giacomo De Palma, et al.

The reliability of most deep learning algorithms is fundamentally challenged by the existence of adversarial examples: incorrectly classified inputs that lie extremely close to a correctly classified input. We study adversarial examples for deep neural networks with random weights and biases and prove that the ℓ^1 distance of any given input from the classification boundary scales at least as √n, where n is the dimension of the input. We also extend the proof to cover all the ℓ^p norms. Our results constitute a fundamental advance in the study of adversarial examples and encompass a wide variety of architectures, including any combination of convolutional or fully connected layers with skip connections and pooling. We validate our results with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. Given the results of these experiments, we conjecture that the proof of our adversarial robustness guarantee can be extended to trained deep neural networks. Such an extension would open the way to a thorough theoretical study of neural network robustness by clarifying the relation between network architecture and adversarial distance.
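As a rough illustration of the kind of experiment described above (a minimal sketch, not the authors' code), the following Python snippet estimates the ℓ^1 distance of a generic input from the classification boundary of a random ReLU network. It uses a first-order approximation: the boundary of the binary classifier sign(f) is the level set f = 0, so the linearized ℓ^1 distance of x from the boundary is |f(x)| / ‖∇f(x)‖_∞, the ℓ^∞ norm being dual to the ℓ^1 norm. The widths, depth, He-style initialization, and the helper names random_relu_net and output_and_gradient are illustrative assumptions, not details taken from the paper.

```python
# Sketch: estimate the l1 distance from the classification boundary of a
# random fully connected ReLU network, and check how it grows with the
# input dimension n. All sizes below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(n, width=512, depth=4):
    # He-style initialization keeps activation magnitudes O(1) across layers.
    dims = [n] + [width] * (depth - 1) + [1]
    return [(rng.normal(0.0, np.sqrt(2.0 / d_in), (d_out, d_in)),
             rng.normal(0.0, 0.1, d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def output_and_gradient(params, x):
    # Forward pass, storing the ReLU masks for the backward pass.
    h, masks = x, []
    for W, b in params[:-1]:
        z = W @ h + b
        masks.append(z > 0)
        h = np.maximum(z, 0.0)
    W_out, b_out = params[-1]
    f = (W_out @ h + b_out).item()
    # Backward pass: chain rule through the ReLU masks gives grad_x f.
    g = W_out.ravel()
    for (W, _), m in zip(reversed(params[:-1]), reversed(masks)):
        g = (g * m) @ W
    return f, g

for n in [64, 256, 1024, 4096]:
    dists = []
    for _ in range(20):
        params = random_relu_net(n)
        x = rng.normal(0.0, 1.0, n)             # generic input, O(1) entries
        f, g = output_and_gradient(params, x)
        dists.append(abs(f) / np.abs(g).max())  # linearized l1 distance
    med = np.median(dists)
    print(f"n={n:5d}  median l1 distance {med:9.2f}  "
          f"ratio to sqrt(n) {med / np.sqrt(n):7.3f}")
```

The linearized estimate is only a proxy for the true boundary distance, but printing the ratio to √n makes it easy to check whether the measured distances keep pace with the √n lower bound as the input dimension grows.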


Related research

02/09/2020 · Input Validation for Neural Networks via Runtime Local Robustness Verification
Local robustness verification can verify that a neural network is robust...

12/04/2020 · Towards Natural Robustness Against Adversarial Examples
Recent studies have shown that deep neural networks are vulnerable to ad...

12/17/2018 · Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings
Deep learning models are vulnerable to adversarial examples which are in...

09/08/2017 · Towards Proving the Adversarial Robustness of Deep Neural Networks
Autonomous vehicles are highly complex systems, required to function rel...

10/29/2021 · ε-weakened Robustness of Deep Neural Networks
This paper introduces a notion of ε-weakened robustness for analyzing ...

12/25/2018 · Deep neural networks are biased towards simple functions
We prove that the binary classifiers of bit strings generated by random ...

08/21/2018 · zoNNscan: a boundary-entropy index for zone inspection of neural models
The training of deep neural network classifiers results in decision boun...
