How Sampling Impacts the Robustness of Stochastic Neural Networks

04/22/2022
by Sina Däubener, et al.

Stochastic neural networks (SNNs) are random functions whose predictions are obtained by averaging over multiple realizations of this random function. Consequently, an adversarial attack is calculated based on one set of samples and applied to the prediction defined by another set of samples. In this paper we analyze robustness in this setting by deriving a sufficient condition for the given prediction process to be robust against the calculated attack. This allows us to identify the factors that lead to increased robustness of SNNs and helps to explain the impact of the variance and the number of samples. Among other things, our theoretical analysis gives insights into (i) why increasing the number of samples drawn for the estimation of adversarial examples increases the attack's strength, (ii) why decreasing the sample size during inference hardly influences the robustness, and (iii) why a higher prediction variance between realizations relates to higher robustness. We verify the validity of our theoretical findings with an extensive empirical analysis.
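To make the sampling setup concrete, the following minimal sketch (our own illustration, not the paper's code) builds a toy stochastic network, forms its prediction by averaging the outputs of several sampled realizations, and computes an FGSM attack against the average over one set of samples before evaluating it on a prediction formed from a fresh set of samples. The network NoisyLinearNet, the noise scale, the sample counts, and the FGSM step size are all hypothetical choices made only for illustration.

# Minimal sketch (assumptions noted above): MC-averaged prediction of a
# stochastic network, attacked with FGSM computed on a separate sample set.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinearNet(nn.Module):
    """Toy SNN: Gaussian noise is added to the hidden activations, so every
    forward pass is one realization of the random function."""
    def __init__(self, in_dim=20, hidden=64, classes=2, sigma=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)
        self.sigma = sigma

    def forward(self, x):
        h = F.relu(self.fc1(x))
        h = h + self.sigma * torch.randn_like(h)  # stochastic layer
        return self.fc2(h)

def mc_prediction(model, x, n_samples):
    """Prediction = average softmax output over n_samples realizations."""
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0)

def fgsm_on_samples(model, x, y, eps, n_attack_samples):
    """FGSM step computed against the average over n_attack_samples
    realizations, i.e. a different sample set than the one used at inference."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.nll_loss(torch.log(mc_prediction(model, x_adv, n_attack_samples) + 1e-12), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = NoisyLinearNet()
    x = torch.randn(8, 20)
    y = torch.randint(0, 2, (8,))
    x_adv = fgsm_on_samples(model, x, y, eps=0.3, n_attack_samples=5)
    # At inference the prediction is formed from a *fresh* set of realizations.
    clean_acc = (mc_prediction(model, x, 20).argmax(-1) == y).float().mean()
    adv_acc = (mc_prediction(model, x_adv, 20).argmax(-1) == y).float().mean()
    print(f"clean acc {clean_acc:.2f}  adversarial acc {adv_acc:.2f}")

In this sketch, raising n_attack_samples gives the attacker a lower-variance gradient estimate (a stronger attack), while the number of inference samples and the noise scale sigma control the variance between the realizations the defender averages over, mirroring the quantities analyzed in the paper.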

