Eigenpatches – Adversarial Patches from Principal Components

06/19/2023
by Jens Bayer, et al.

Adversarial patches remain a simple yet powerful white-box attack for fooling object detectors by suppressing possible detections. The patches used in these so-called evasion attacks are computationally expensive to produce and require full access to the attacked detector. This paper addresses the computational expense by analyzing 375 generated patches, calculating their principal components, and showing that linear combinations of the resulting "eigenpatches" can successfully fool object detectors.
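The pipeline described in the abstract lends itself to a short sketch: flatten the pre-generated patches into vectors, fit PCA over them, and rebuild candidate patches as linear combinations of the leading components. The following is a minimal sketch only, assuming scikit-learn's PCA, a 64x64 RGB patch size, and 32 retained components; apart from the patch count of 375, none of these specifics come from the paper.

```python
# Minimal sketch of the eigenpatch idea: PCA over a set of
# pre-generated adversarial patches, then new patches as linear
# combinations of the principal components ("eigenpatches").
import numpy as np
from sklearn.decomposition import PCA

N_PATCHES, H, W, C = 375, 64, 64, 3   # 64x64 RGB is an assumed size

# Stand-in data: in practice these would be 375 patches optimized
# via a white-box evasion attack against the target detector.
patches = np.random.rand(N_PATCHES, H, W, C).astype(np.float32)

# Flatten each patch to a vector and fit PCA.
X = patches.reshape(N_PATCHES, -1)
pca = PCA(n_components=32)            # number of eigenpatches kept (assumed)
coeffs = pca.fit_transform(X)         # per-patch coefficients

# The principal components reshaped back to image form.
eigenpatches = pca.components_.reshape(-1, H, W, C)

# A new candidate patch: mean patch plus a linear combination of
# eigenpatches, here with coefficients drawn at random for illustration.
w = np.random.randn(pca.n_components_) * coeffs.std(axis=0)
candidate = (pca.mean_ + w @ pca.components_).reshape(H, W, C)
candidate = np.clip(candidate, 0.0, 1.0)  # keep a valid pixel range
print(candidate.shape, eigenpatches.shape)
```

In an actual attack, the coefficients would presumably be chosen or optimized against the target detector rather than drawn at random; searching this low-dimensional coefficient space is what would make eigenpatches cheaper than optimizing a full-resolution patch from scratch.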

Related research

09/30/2021
You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors
Blind spots or outright deceit can bedevil and deceive machine learning ...

10/25/2020
Dynamic Adversarial Patch for Evading Object Detection Models
Recent research shows that neural network models used for computer visi...

12/17/2019
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection
Physical adversarial attacks threaten to fool object detection systems, ...

09/21/2020
Generating Adversarial yet Inconspicuous Patches with a Single Image
Deep neural networks have been shown vulnerable to adversarial patches, w...

09/09/2021
Energy Attack: On Transferring Adversarial Examples
In this work we propose Energy Attack, a transfer-based black-box L_∞-ad...

07/06/2022
The Weaknesses of Adversarial Camouflage in Overhead Imagery
Machine learning is increasingly critical for analysis of the ever-growi...

12/06/2018
Towards Hiding Adversarial Examples from Network Interpretation
Deep networks have been shown to be fooled rather easily using adversari...
