Art-Attack: Black-Box Adversarial Attack via Evolutionary Art

03/07/2022
by Phoenix Williams, et al.

Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerability to adversarial examples. Many existing works assume a white-box setting with total access to the targeted model, including its architecture and gradients. A more realistic assumption is the black-box scenario, where an attacker can only query the targeted model with an input and observe its predicted class probabilities. Unlike most prevalent black-box attacks, which rely on substitute models or gradient estimation, this paper proposes a gradient-free attack that borrows a concept from evolutionary art: adversarial examples are generated by iteratively evolving a set of overlapping transparent shapes. To evaluate the effectiveness of the proposed method, we attack three state-of-the-art image classification models trained on the CIFAR-10 dataset in a targeted manner. We conduct a parameter study outlining the impact that the number and type of shapes have on the attack's performance. In comparison to state-of-the-art black-box attacks, our attack is more effective at generating adversarial examples and achieves a higher success rate on all three baseline models.
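The abstract does not specify the attack's exact evolutionary operators, but the core idea (overlaying evolved transparent shapes on a clean image and scoring candidates with the model's predicted class probabilities) can be sketched. The following is a minimal illustration, not the paper's implementation: model_predict is an assumed black-box callable mapping a 32x32x3 image in [0, 1] to a vector of class probabilities, the shapes are hard-coded as semi-transparent circles, and a simple (1+1) evolution strategy stands in for whatever evolutionary-art algorithm the authors actually use.

import numpy as np
from PIL import Image, ImageDraw

H = W = 32      # CIFAR-10 image size
N_SHAPES = 20   # number of transparent shapes to evolve

def random_shapes(n, rng):
    # Each row encodes one circle: centre (x, y), radius, RGB colour, alpha.
    return rng.uniform(0.0, 1.0, size=(n, 7))

def overlay(image, shapes):
    # Alpha-composite the evolved shapes over the clean image (all in [0, 1]).
    canvas = Image.new("RGBA", (W, H), (0, 0, 0, 0))
    draw = ImageDraw.Draw(canvas, "RGBA")
    for x, y, r, cr, cg, cb, a in shapes:
        cx, cy, rad = x * W, y * H, 1.0 + r * 6.0
        fill = (int(cr * 255), int(cg * 255), int(cb * 255), int(a * 128))
        draw.ellipse([cx - rad, cy - rad, cx + rad, cy + rad], fill=fill)
    art = np.asarray(canvas, dtype=np.float32) / 255.0
    alpha = art[..., 3:4]
    return np.clip((1.0 - alpha) * image + alpha * art[..., :3], 0.0, 1.0)

def attack(image, target, model_predict, max_queries=10_000, sigma=0.1, seed=0):
    # (1+1)-style evolution: keep a mutated set of shapes whenever it raises
    # the target-class probability reported by the black-box model.
    rng = np.random.default_rng(seed)
    best = random_shapes(N_SHAPES, rng)
    best_fit = model_predict(overlay(image, best))[target]  # one query
    for _ in range(max_queries):
        cand = np.clip(best + sigma * rng.standard_normal(best.shape), 0.0, 1.0)
        probs = model_predict(overlay(image, cand))          # one query
        if probs[target] >= best_fit:
            best, best_fit = cand, probs[target]
            if int(np.argmax(probs)) == target:
                break  # targeted misclassification achieved
    return overlay(image, best)

The fitness here is simply the model-reported probability of the target class, so each evolutionary step costs one query; in a real evaluation the query budget and the population-based operators would follow the paper rather than this (1+1) stand-in.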


