Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks

11/16/2020
by MaungMaung AprilPyone, et al.

We propose a voting ensemble of models trained by using block-wise transformed images with secret keys for an adversarially robust defense. Key-based adversarial defenses have been demonstrated to outperform state-of-the-art defenses against gradient-based (white-box) attacks. However, key-based defenses are not effective enough against gradient-free (black-box) attacks, which do not require any secret keys. Accordingly, we aim to enhance robustness against black-box attacks by using a voting ensemble of models. In the proposed ensemble, a number of models are trained by using images transformed with different keys and block sizes, and a voting ensemble is then applied over the models' predictions. In image classification experiments, the proposed defense is demonstrated to defend against state-of-the-art attacks. The proposed defense achieves a clean accuracy of 95.56% while withstanding black-box attacks with a noise distance of 8/255 on the CIFAR-10 dataset.
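The abstract describes two components: a block-wise transformation controlled by a secret key, and a majority vote over models trained with different keys and block sizes. The sketch below illustrates one plausible reading, assuming the transform is a key-seeded pixel shuffle applied within each block (a common choice in key-based defenses); the DummyModel class, key values, and block sizes are placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch: key-based block-wise pixel shuffle + majority-vote ensemble.
# Assumptions: the transform is a per-block pixel shuffle seeded by the key;
# models, keys, and block sizes below are illustrative placeholders only.
import numpy as np

def blockwise_shuffle(img: np.ndarray, key: int, block_size: int) -> np.ndarray:
    """Shuffle pixels inside every (block_size x block_size) block of an
    HxWxC image, using a fixed permutation derived from the secret key."""
    h, w, c = img.shape
    assert h % block_size == 0 and w % block_size == 0
    rng = np.random.default_rng(key)                      # key -> fixed permutation
    perm = rng.permutation(block_size * block_size * c)   # same permutation for every block
    out = img.copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = out[y:y + block_size, x:x + block_size, :]
            out[y:y + block_size, x:x + block_size, :] = (
                block.reshape(-1)[perm].reshape(block.shape)
            )
    return out

def vote_ensemble(models, keys, block_sizes, img: np.ndarray) -> int:
    """Each model sees the image transformed with its own key and block size;
    the final label is the majority vote over the models' predicted classes."""
    votes = []
    for model, key, bs in zip(models, keys, block_sizes):
        x = blockwise_shuffle(img, key, bs)
        logits = model(x[None, ...])                      # shape (1, num_classes)
        votes.append(int(np.argmax(logits)))
    return int(np.bincount(votes).argmax())

# Toy usage with stand-in "models" that return random logits.
class DummyModel:
    def __init__(self, seed: int):
        self.rng = np.random.default_rng(seed)
    def __call__(self, batch: np.ndarray) -> np.ndarray:
        return self.rng.normal(size=(batch.shape[0], 10))

img = np.random.rand(32, 32, 3).astype(np.float32)        # CIFAR-10-sized input
models = [DummyModel(i) for i in range(3)]
label = vote_ensemble(models, keys=[111, 222, 333], block_sizes=[2, 4, 8], img=img)
```

In this reading, each member model would be trained on images transformed with its own fixed key and block size, and the matching transform is applied again at inference; the vote then aggregates predictions across the differently keyed models.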


