A New Ensemble Method for Concessively Targeted Multi-model Attack

12/19/2019
by Ziwen He, et al.

It is well known that deep learning models are vulnerable to adversarial examples crafted by maliciously adding perturbations to original inputs. There are two types of attacks, targeted and non-targeted, and researchers tend to pay more attention to targeted adversarial examples. However, targeted attacks have a low success rate, especially against robust models or under a black-box attack protocol. In such cases, a non-targeted attack is the last resort for disabling an AI system. In this paper, we therefore propose a new attack mechanism that performs a non-targeted attack when the targeted attack fails. In addition, we aim to generate a single adversarial sample that works across different deployed models of the same task, e.g., image classification. For this practical setting, we focus on attacking an ensemble of models by dividing them into two groups, easy-to-attack models and robust models, and alternately attacking the two groups in the targeted and non-targeted manner. We call this the bagging and stacking ensemble (BAST) attack. The BAST attack can generate a single adversarial sample that fails multiple models simultaneously: some models classify the sample as the target label, while the models that resist the targeted attack still output wrong labels. Experimental results show that the proposed BAST attack outperforms state-of-the-art attack methods on a newly defined criterion that considers both targeted and non-targeted attack performance.
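
As a rough sketch of the alternating idea described in the abstract (not the authors' released implementation), the PyTorch snippet below runs a PGD-style loop that alternates targeted gradient steps on an easy-to-attack group with non-targeted steps on a robust group. The function name, the even/odd alternation schedule, the step sizes, and the assignment of targeted steps to the easy group are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bast_sketch(x, y_true, y_target, easy_models, robust_models,
                eps=8 / 255, alpha=2 / 255, steps=40):
    """Illustrative alternating targeted / non-targeted ensemble attack.

    Assumes inputs in [0, 1] and all models in eval() mode; this is a
    sketch of the general idea, not the paper's reference code.
    """
    x_adv = x.clone().detach()
    for step in range(steps):
        x_adv.requires_grad_(True)
        if step % 2 == 0:
            # Targeted step on the easy group: descend the loss w.r.t.
            # the target label, pulling predictions toward y_target.
            loss = sum(F.cross_entropy(m(x_adv), y_target) for m in easy_models)
            direction = -torch.autograd.grad(loss, x_adv)[0].sign()
        else:
            # Non-targeted step on the robust group: ascend the loss w.r.t.
            # the true label, pushing predictions away from y_true.
            loss = sum(F.cross_entropy(m(x_adv), y_true) for m in robust_models)
            direction = torch.autograd.grad(loss, x_adv)[0].sign()
        # Take a signed gradient step, then project back into the
        # L-infinity ball of radius eps around the original input.
        x_adv = x_adv.detach() + alpha * direction
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

A fuller implementation of the fallback behavior would presumably also check, per iteration, which models are already fooled and shift effort between the two objectives accordingly; the fixed alternation above is the simplest version of that schedule.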
