Network Moments: Extensions and Sparse-Smooth Attacks

06/21/2020
by Modar Alfadly, et al.

The impressive performance of deep neural networks (DNNs) has immensely strengthened the line of research that aims at theoretically analyzing their effectiveness. In particular, it has spurred work on how DNNs react to noisy input, namely the development of adversarial input attacks and of strategies for training DNNs that are robust to such attacks. To that end, in this paper, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input. In particular, we generalize the second-moment expression of Bibi et al. to arbitrary input Gaussian distributions, dropping the zero-mean assumption. We show that the new variance expression can be efficiently approximated, leading to much tighter variance estimates than the preliminary results of Bibi et al. Moreover, we experimentally show that these expressions remain tight under simple linearizations of deeper PL-DNNs, and we investigate how the sensitivity of the linearization affects the accuracy of the moment estimates. Lastly, we show that the derived expressions can be used to construct sparse and smooth Gaussian adversarial attacks (targeted and non-targeted) that tend to produce perceptually feasible input attacks.


