Adversarial Examples for Semantic Segmentation and Object Detection

03/24/2017
by   Cihang Xie, et al.

It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, generally exist and cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection, which are much more difficult tasks. Our observation is that both segmentation and detection are based on classifying multiple targets in an image (e.g., the basic target is a pixel or a receptive field in segmentation, and an object proposal in detection), which inspires us to optimize a loss function over a set of pixels/proposals to generate adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can transfer across networks trained on different data, based on different architectures, and even targeting different recognition tasks. In particular, transferability between networks with the same architecture is more significant than in other cases. Moreover, summing heterogeneous perturbations often leads to better transfer performance, which provides an effective method for black-box adversarial attacks.
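As a rough illustration of the dense per-target optimization described above, the following is a minimal PyTorch-style sketch of a DAG-like attack on a semantic segmentation network. It assumes a model mapping a (1, 3, H, W) image to per-pixel logits of shape (1, C, H, W); the function name, arguments, and hyper-parameters (`max_iters`, `gamma`) are illustrative stand-ins, not the exact settings used in the paper.

```python
import torch

def dag_attack(model, image, true_labels, adv_labels, max_iters=150, gamma=0.5):
    """Sketch of a DAG-style dense attack: at each step, push every pixel
    that is still classified correctly toward a chosen incorrect label.

    image:       (1, 3, H, W) float tensor
    true_labels: (1, H, W) long tensor of ground-truth classes
    adv_labels:  (1, H, W) long tensor of target (incorrect) classes
    """
    x = image.clone()
    perturbation = torch.zeros_like(image)
    for _ in range(max_iters):
        x = x.detach().requires_grad_(True)
        logits = model(x)                       # (1, C, H, W)
        preds = logits.argmax(dim=1)            # (1, H, W)
        # Keep only the targets (pixels) that are still correctly classified.
        active = (preds == true_labels)
        if not active.any():
            break                               # every target already fooled
        # Per-target loss: adversarial-class score minus true-class score,
        # summed over the remaining active targets.
        true_score = logits.gather(1, true_labels.unsqueeze(1)).squeeze(1)
        adv_score = logits.gather(1, adv_labels.unsqueeze(1)).squeeze(1)
        loss = (adv_score - true_score)[active].sum()
        grad, = torch.autograd.grad(loss, x)
        # Normalize the step so each iteration adds a small, bounded update.
        step = gamma * grad / (grad.abs().max() + 1e-12)
        perturbation = perturbation + step
        x = image + perturbation
    return perturbation.detach()
```

In the same spirit as the transfer observation above, a black-box attack could sum the perturbations obtained from several surrogate networks, e.g. dag_attack(model_a, ...) + dag_attack(model_b, ...), before applying the combined perturbation to an unseen target model.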


