Deterministic Gaussian Averaged Neural Networks

06/10/2020
by Ryan Campbell, et al.

We present a deterministic method to compute the Gaussian average of neural networks used in regression and classification. Our method is based on an equivalence between training with a particular regularized loss, and the expected values of Gaussian averages. We use this equivalence to certify models which perform well on clean data but are not robust to adversarial perturbations. In terms of certified accuracy and adversarial robustness, our method is comparable to known stochastic methods such as randomized smoothing, but requires only a single model evaluation during inference.
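For context, the Gaussian average of a network f at an input x is the expectation of f(x + ε) over Gaussian noise ε ~ N(0, σ²I). Stochastic certification methods such as randomized smoothing approximate this average by Monte Carlo sampling, which costs many forward passes per input, whereas the method described above needs only one model evaluation. The sketch below is illustrative only: it uses a toy numpy function in place of a trained network and shows the Monte Carlo estimate that a deterministic evaluation would replace. The function names, the noise level sigma, and the sample count are assumptions, not taken from the paper, and the paper's regularized training loss is not reproduced here.

```python
import numpy as np

def f(x):
    # Toy stand-in for a trained network's scalar output on input x.
    return np.sin(3.0 * x) + 0.5 * x

def gaussian_average_mc(f, x, sigma=0.25, n_samples=10_000, seed=0):
    """Monte Carlo estimate of E_{eps ~ N(0, sigma^2)}[f(x + eps)].

    This is the quantity that stochastic methods such as randomized
    smoothing approximate with many forward passes per input; a
    deterministic approach would return it from a single evaluation.
    """
    rng = np.random.default_rng(seed)
    eps = rng.normal(scale=sigma, size=n_samples)
    return f(x + eps).mean()

x0 = 1.0
print("plain evaluation f(x0):", f(x0))
print("Gaussian average (MC): ", gaussian_average_mc(f, x0))
```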

Related research

- Adversarial Boot Camp: label free certified robustness in one epoch (10/05/2020)
  Machine learning models are vulnerable to adversarial attacks. One appro...
- Stochastic Perturbations of Tabular Features for Non-Deterministic Inference with Automunge (02/18/2022)
  Injecting Gaussian noise into training features is well known to have re...
- Smoothed Inference for Adversarially-Trained Models (11/17/2019)
  Deep neural networks are known to be vulnerable to inputs with malicious...
- Frequency Regularization for Improving Adversarial Robustness (12/24/2022)
  Deep neural networks are incredibly vulnerable to crafted, human-imperce...
- Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness (02/17/2020)
  Recently smoothing deep neural network based classifiers via isotropic G...
- Training Certifiably Robust Neural Networks Against Semantic Perturbations (07/22/2022)
  Semantic image perturbations, such as scaling and rotation, have been sh...
- The Gaussian equivalence of generative models for learning with shallow neural networks (06/01/2021)
  Understanding the impact of data structure on the computational tractabi...
