LaVAN: Localized and Visible Adversarial Noise

01/08/2018
by Danny Karmon, et al.

Most works on adversarial examples for deep-learning-based image classifiers use noise that, while small in magnitude, covers the entire image. We explore the case where the noise is allowed to be visible but is confined to a small, localized patch of the image, without covering any of the main object(s) in the image. We show that it is possible to generate localized adversarial noise that covers only 2% of the pixels in the image, none of them over the main object, that is transferable across images and locations, and that fools a state-of-the-art Inception v3 model with very high success rates.
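The attack can be viewed as a constrained optimization: only the pixels inside a small mask are trainable, and they are pushed by gradient descent toward a chosen target class while the rest of the image is left untouched. The PyTorch sketch below illustrates this idea under stated assumptions; the pretrained torchvision Inception v3, the 42x42 patch location, the target label, and the plain cross-entropy loss are illustrative choices, not the paper's exact training recipe.

```python
# Minimal sketch of optimizing a localized adversarial patch, assuming a
# torchvision Inception v3 classifier and a single 299x299 input image.
# Patch size, location, target class, and learning rate are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval().to(device)
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is optimized, not the network

image = torch.rand(1, 3, 299, 299, device=device)  # stand-in for a real, normalized image
target_class = 859                                  # e.g. "toaster"; arbitrary target label

# Binary mask selecting a small corner patch (42x42 ~ 2% of a 299x299 image),
# kept away from the image center so the main object is not covered.
mask = torch.zeros_like(image)
y0, x0, size = 10, 10, 42
mask[:, :, y0:y0 + size, x0:x0 + size] = 1.0

patch = torch.rand_like(image, requires_grad=True)  # visible noise to optimize
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(500):
    # Replace (not blend) the masked region with the current patch values.
    adv = image * (1 - mask) + patch * mask
    logits = model(adv)
    # Drive the prediction toward the chosen target class.
    loss = F.cross_entropy(logits, torch.tensor([target_class], device=device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep patch values in the valid image range

pred = model(image * (1 - mask) + patch * mask).argmax(dim=1).item()
print("predicted class after attack:", pred)
```

Because the patch replaces pixels rather than being added to them, there is no smallness constraint on the noise itself; only its spatial extent is limited, which is exactly the visible-but-localized regime the abstract describes.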


Related research:

09/10/2019 – Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification
Today's state-of-the-art image classifiers fail to correctly classify ca...

02/10/2021 – Detecting Localized Adversarial Examples: A Generic Approach using Critical Region Analysis
Deep neural networks (DNNs) have been applied in a wide range of applica...

12/05/2019 – Scratch that! An Evolution-based Adversarial Attack against Neural Networks
Recent research has shown that Deep Neural Networks (DNNs) for image cla...

11/26/2020 – Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect
Physical adversarial examples for camera-based computer vision have so f...

12/23/2018 – Countermeasures Against L0 Adversarial Examples Using Image Processing and Siamese Networks
Despite the great achievements made by neural networks on tasks such as ...

11/14/2017 – Capturing Localized Image Artifacts through a CNN-based Hyper-image Representation
Training deep CNNs to capture localized image artifacts on a relatively ...