Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network

10/01/2018
by Xuanqing Liu, et al.

We present a new algorithm to train a robust neural network against adversarial attacks. Our algorithm is motivated by the following two ideas. First, although recent work has demonstrated that injecting randomness can improve the robustness of neural networks (Liu 2017), we observe that adding noise blindly to all the layers is not the optimal way to incorporate randomness. Instead, we model randomness under the framework of Bayesian Neural Networks (BNN) to formally learn the posterior distribution of models in a scalable way. Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarially trained Bayesian neural net. Experimental results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks. On CIFAR-10 with a VGG network, our model yields a 14% accuracy improvement over adversarial training (Madry 2017) and random self-ensemble (Liu 2017) under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet.
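As a rough illustration of the two ideas in the abstract, the sketch below pairs a Bayesian layer with a learned factorized Gaussian weight posterior (trained with the usual KL/ELBO regularizer) with an inner PGD maximization, assuming PyTorch and image inputs scaled to [0, 1]. Names such as BayesLinear, pgd_attack, and adv_bnn_loss, and the hyperparameters shown, are illustrative assumptions and do not come from the paper's released code.

```python
# Minimal sketch of an adversarially trained Bayesian layer (not the authors'
# implementation); eps=0.035, 10 PGD steps, and kl_weight are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer whose weights follow a factorized Gaussian posterior q(w)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.empty(out_features, in_features))
        self.rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        nn.init.kaiming_normal_(self.mu)

    def forward(self, x):
        sigma = F.softplus(self.rho)                     # positive std via softplus
        w = self.mu + sigma * torch.randn_like(sigma)    # reparameterization trick
        return F.linear(x, w)

    def kl(self):
        # KL(q(w) || N(0, I)) for a factorized Gaussian posterior
        sigma = F.softplus(self.rho)
        return 0.5 * (sigma.pow(2) + self.mu.pow(2) - 1.0 - 2.0 * sigma.log()).sum()

def pgd_attack(model, x, y, eps=0.035, alpha=0.007, steps=10):
    """Inner maximization: L_inf PGD against the (stochastic) model."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        # project back into the eps-ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_bnn_loss(model, x, y, kl_weight=1e-4):
    """Outer minimization: classification loss on adversarial inputs plus the KL term."""
    x_adv = pgd_attack(model, x, y)
    ce = F.cross_entropy(model(x_adv), y)
    kl = sum(m.kl() for m in model.modules() if isinstance(m, BayesLinear))
    return ce + kl_weight * kl

# Toy usage on CIFAR-10-shaped inputs.
model = nn.Sequential(nn.Flatten(), BayesLinear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = adv_bnn_loss(model, x, y)
loss.backward()
```

At test time, predictions would be averaged over several posterior samples (an ensemble of sampled weights), which is where the robustness gain of the Bayesian formulation comes from.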


Related research

07/01/2019
Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network"
A recent paper by Liu et al. combines the topics of adversarial training...

11/16/2021
Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks
Bayesian Neural Networks (BNNs), unlike Traditional Neural Networks (TNN...

10/17/2020
A Stochastic Neural Network for Attack-Agnostic Adversarial Robustness
Stochastic Neural Networks (SNNs) that inject noise into their hidden la...

12/02/2017
Towards Robust Neural Networks via Random Self-ensemble
Recent studies have revealed the vulnerability of deep neural networks -...

05/04/2020
Guarantees on learning depth-2 neural networks under a data-poisoning attack
In recent times many state-of-the-art machine learning models have been ...

06/17/2021
Evaluating the Robustness of Bayesian Neural Networks Against Different Types of Attacks
To evaluate the robustness gain of Bayesian neural networks on image cla...

11/10/2021
Robust Learning via Ensemble Density Propagation in Deep Neural Networks
Learning in uncertain, noisy, or adversarial environments is a challengi...
