Efficient and Transferable Adversarial Examples from Bayesian Neural Networks

11/10/2020
by   Martin Gubri, et al.

Deep neural networks are vulnerable to evasion attacks, i.e., carefully crafted examples designed to fool a model at test time. Attacks that successfully evade an ensemble of models can transfer to other independently trained models, which proves useful in black-box settings. Unfortunately, these methods incur heavy computation costs to train the models forming the ensemble. To overcome this, we propose a new method to generate transferable adversarial examples efficiently. Inspired by Bayesian deep learning, our method builds such ensembles by sampling from the posterior distribution of neural network weights during a single training process. Experiments on CIFAR-10 show that our approach improves the transfer rates significantly at equal or even lower computation costs. The intra-architecture transfer rate is increased by 23% with 4 times fewer training epochs. In the inter-architecture case, we show that we can combine our method with ensemble-based attacks to increase their transfer rate by up to 15%.
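The core idea — collect weight samples along a single noisy training run, then attack the resulting "ensemble" with an averaged gradient — can be sketched as follows. This is a minimal toy illustration, not the paper's actual setup: the model is a 2-D logistic regression, the sampler is a crude SGLD-style loop (gradient step plus injected Gaussian noise), and the attack is a one-step FGSM-like perturbation; all hyperparameters (burn-in, thinning interval, epsilon) are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_w(w, X, y):
    # Gradient of the logistic loss w.r.t. the weights.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def grad_loss_x(w, x, label):
    # Gradient of the logistic loss w.r.t. a single input x.
    p = sigmoid(x @ w)
    return (p - label) * w

# One training run with Langevin-style noise; keep periodic weight
# samples after a burn-in phase as cheap posterior "ensemble" members.
w = np.zeros(2)
lr, noise_scale = 0.1, 0.01
posterior_samples = []
for step in range(500):
    w -= lr * grad_loss_w(w, X, y)
    w += noise_scale * rng.normal(size=2)   # injected gradient noise
    if step >= 300 and step % 50 == 0:      # burn-in, then thin
        posterior_samples.append(w.copy())

# One-step attack: ascend the loss gradient averaged over all samples,
# so the perturbation is not overfitted to any single weight vector.
x0, label = X[0], y[0]                      # a class-0 point
eps = 0.5
avg_grad = np.mean([grad_loss_x(ws, x0, label) for ws in posterior_samples],
                   axis=0)
x_adv = x0 + eps * np.sign(avg_grad)
```

The key cost saving the abstract describes is visible here: every ensemble member comes from the same training trajectory, so the total training budget is that of one model, not of N independently trained ones.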


