Generate (non-software) Bugs to Fool Classifiers

11/20/2019
by Hiromu Yakura, et al.

Most studies of adversarial attacks against deep learning models have focused on limiting the magnitude of the modification so that humans do not notice the attack. In an attack against autonomous cars, however, most drivers would not find it strange if a small insect image appeared on a stop sign, or they might simply overlook it. In this paper, we present a systematic approach to generating natural adversarial examples against classification models by employing natural-appearing perturbations that imitate a certain object or signal. We first demonstrate the feasibility of this approach in an attack against an image classifier, using generative adversarial networks to produce image patches that have the appearance of a natural object while fooling the target model. We also introduce an algorithm that optimizes the placement of the perturbation according to the input image, which makes generating adversarial examples fast and likely to succeed. Finally, we show experimentally that the proposed approach extends to the audio domain, for example, by generating perturbations that sound like the chirping of birds to fool a speech classifier.
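
As a rough illustration of the image-domain pipeline the abstract describes, the sketch below optimizes a GAN latent code so that the generated patch, overlaid at a candidate position, flips a classifier's prediction. This is a simplified stand-in, not the authors' implementation: the `generator`, `target_model`, the `latent_dim` attribute, the candidate-position grid, and all hyperparameters are assumptions, and the paper's placement-optimization algorithm is reduced here to a plain search over candidate positions.

```python
# Minimal sketch (not the authors' code): search a pretrained GAN's latent
# space for a natural-looking patch that fools a target image classifier.
import torch
import torch.nn.functional as F

def overlay_patch(image, patch, top, left):
    """Paste a patch (C x h x w) onto an image (C x H x W) at (top, left)."""
    out = image.clone()
    _, h, w = patch.shape
    out[:, top:top + h, left:left + w] = patch
    return out

def natural_patch_attack(image, true_label, generator, target_model,
                         candidate_positions, steps=200, lr=0.05):
    """Try each candidate position; at each one, optimize the latent code
    of a hypothetical `generator` until the prediction flips."""
    for top, left in candidate_positions:
        # Optimize the latent code rather than raw pixels, so every
        # intermediate patch stays on the GAN's manifold of natural objects.
        z = torch.randn(1, generator.latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            patch = generator(z)[0]                     # e.g. a moth-like image
            adv = overlay_patch(image, patch, top, left)
            logits = target_model(adv.unsqueeze(0))
            # Untargeted attack: push the prediction away from the true label.
            loss = -F.cross_entropy(logits, torch.tensor([true_label]))
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            adv = overlay_patch(image, generator(z)[0], top, left)
            if target_model(adv.unsqueeze(0)).argmax(1).item() != true_label:
                return adv   # first (position, patch) pair that fools the model
    return None              # no successful combination found
```

In principle, the same latent-space search carries over to the audio extension mentioned in the abstract: a generator trained on bird chirps would replace the image GAN, and the spatial overlay would become additive mixing of the generated sound into the input waveform.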

Related research

Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates (03/19/2020)
To deflect adversarial attacks, a range of "certified" classifiers have ...

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers (04/17/2019)
Deep neural networks have been shown to exhibit an intriguing vulnerabil...

Kidnapping Deep Learning-based Multirotors using Optimized Flying Adversarial Patches (08/01/2023)
Autonomous flying robots, such as multirotors, often rely on deep learni...

Flying Adversarial Patches: Manipulating the Behavior of Deep Learning-based Autonomous Multirotors (05/22/2023)
Autonomous flying robots, e.g. multirotors, often rely on a neural netwo...

GAP++: Learning to generate target-conditioned adversarial examples (06/09/2020)
Adversarial examples are perturbed inputs which can cause a serious thre...

On Trace of PGD-Like Adversarial Attacks (05/19/2022)
Adversarial attacks pose safety and security concerns for deep learning ...

A study of the effect of JPG compression on adversarial images (08/02/2016)
Neural network image classifiers are known to be vulnerable to adversari...
