Adversarial Patch

by Tom B. Brown, et al.

We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class.




Enhancing Real-World Adversarial Patches with 3D Modeling Techniques

Although many studies have examined adversarial examples in the real wor...

Random Position Adversarial Patch for Vision Transformers

Previous studies have shown the vulnerability of vision transformers to ...

Towards Hiding Adversarial Examples from Network Interpretation

Deep networks have been shown to be fooled rather easily using adversari...

Rain structure transfer using an exemplar rain image for synthetic rain image generation

This letter proposes a simple method of transferring rain structures of ...

A Little Robustness Goes a Long Way: Leveraging Universal Features for Targeted Transfer Attacks

Adversarial examples for neural network image classifiers are known to b...

Adversarial Training against Location-Optimized Adversarial Patches

Deep neural networks have been shown to be susceptible to adversarial ex...

UPSET and ANGRI: Breaking High Performance Image Classifiers

In this paper, targeted fooling of high performance image classifiers is...
