Adversarial VC-dimension and Sample Complexity of Neural Networks

12/18/2019
by Zetong Qi, et al.

Adversarial attacks during the testing phase of neural networks pose a challenge for deploying neural networks in security-critical settings. These attacks can be performed by adding noise that is imperceptible to humans on top of the original data; in doing so, an attacker creates an adversarial sample that causes the network to misclassify. In this paper, we seek to understand the theoretical limits of what neural networks can learn in the presence of an adversary. We first define the hypothesis space of a neural network and show the relationship between the growth number of the entire network and the growth number of each neuron. Combining this with the adversarial Vapnik-Chervonenkis (VC) dimension of halfspace classifiers, we derive the adversarial VC-dimension of neural networks with sign activation functions.
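As a rough illustration of the attack model in the abstract, here is a minimal NumPy sketch (not taken from the paper; the weights, threshold, and perturbation budget are hypothetical) showing how a small L-infinity-bounded perturbation can flip the output of a single halfspace classifier with a sign activation, the building block whose adversarial VC-dimension the paper builds on:

```python
import numpy as np

# Illustrative sketch, not the authors' construction: a halfspace
# classifier with a sign activation, f(x) = sign(w.x + b), and an
# L-infinity-bounded perturbation that flips its output.
rng = np.random.default_rng(0)
d = 10                       # input dimension (arbitrary choice)
w = rng.normal(size=d)       # hypothetical weights
b = 0.0                      # hypothetical threshold
x = rng.normal(size=d)       # a clean sample

def classify(v):
    return np.sign(w @ v + b)

# Moving every coordinate by eps in the direction -y * sign(w_i) shifts
# the pre-activation by -y * eps * ||w||_1, so the smallest flipping
# radius is the L1-normalized margin |w.x + b| / ||w||_1.
margin = abs(w @ x + b) / np.abs(w).sum()
eps = 1.01 * margin          # step just past the flipping threshold

x_adv = x - classify(x) * eps * np.sign(w)

print("clean label:       ", classify(x))
print("adversarial label: ", classify(x_adv))
print("L-inf perturbation:", np.max(np.abs(x_adv - x)))
```

For a network of such neurons, the abstract's growth-number argument bounds how many labelings the whole network can realize in terms of the labelings each individual neuron can realize.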


Related research

06/05/2018 · PAC-learning in the presence of evasion adversaries
The existence of evasion attacks during the test phase of machine learni...

06/13/2023 · Finite Gaussian Neurons: Defending against adversarial attacks by making neural networks say "I don't know"
Since 2014, artificial neural networks have been known to be vulnerable ...

01/25/2019 · When Can Neural Networks Learn Connected Decision Regions?
Previous work has questioned the conditions under which the decision reg...

03/18/2022 · Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike
We propose to generate adversarial samples by modifying activations of u...

02/25/2019 · Adversarial attacks hidden in plain sight
Convolutional neural networks have been used to achieve a string of succ...

12/07/2019 · Principal Component Properties of Adversarial Samples
Deep Neural Networks for image classification have been found to be vuln...

09/23/2021 · FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
Neural network implementations are known to be vulnerable to physical at...
