Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation

10/11/2018
by Chaowei Xiao, et al.

Deep Neural Networks (DNNs) have been widely applied in various recognition tasks. Recently, however, DNNs have been shown to be vulnerable to adversarial examples, which can mislead them into making arbitrary incorrect predictions. While adversarial examples are well studied for classification tasks, other learning problems may have different properties. For instance, semantic segmentation requires additional components such as dilated convolutions and multi-scale processing. In this paper, we aim to characterize adversarial examples in semantic segmentation based on spatial context information. We observe that spatial consistency information can be leveraged to detect adversarial examples robustly, even when a strong adaptive attacker has access to both the model and the detection strategy. We also show that adversarial examples generated by the attacks considered in this paper barely transfer across models, even though transferability is common in classification. Our observations shed new light on developing adversarial attacks and defenses, and on better understanding the vulnerabilities of DNNs.
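To make the spatial consistency idea concrete: one can segment two randomly selected, overlapping patches of an image and compare the predicted label maps on their overlap. Benign inputs tend to yield highly consistent predictions there, while adversarial perturbations break this consistency. Below is a minimal Python sketch of such a check; the `segment` callable is a placeholder for any semantic segmentation model mapping an HxWx3 image to an HxW label map, and the patch size, number of patch pairs, and detection threshold are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of a spatial-consistency check for semantic segmentation.
# Assumptions: `segment` is a placeholder segmentation model, the image is
# larger than `patch` in both dimensions, and all parameters are illustrative.

import numpy as np

def miou(a, b, num_classes):
    """Mean intersection-over-union between two label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(a == c, b == c).sum()
        union = np.logical_or(a == c, b == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 1.0

def spatial_consistency(image, segment, num_classes,
                        patch=256, pairs=10, rng=None):
    """Average mIoU of predictions on the overlap of random patch pairs."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    scores = []
    for _ in range(pairs):
        # Sample a patch, then a second one offset by at most half a patch,
        # so the two patches always overlap.
        y1 = int(rng.integers(0, h - patch)); x1 = int(rng.integers(0, w - patch))
        y2 = int(np.clip(y1 + rng.integers(-patch // 2, patch // 2), 0, h - patch))
        x2 = int(np.clip(x1 + rng.integers(-patch // 2, patch // 2), 0, w - patch))
        p1 = segment(image[y1:y1 + patch, x1:x1 + patch])
        p2 = segment(image[y2:y2 + patch, x2:x2 + patch])
        # Overlap of the two patches in image coordinates.
        ty, by = max(y1, y2), min(y1, y2) + patch
        tx, bx = max(x1, x2), min(x1, x2) + patch
        if by <= ty or bx <= tx:
            continue  # defensive: no overlap, skip this pair
        o1 = p1[ty - y1:by - y1, tx - x1:bx - x1]
        o2 = p2[ty - y2:by - y2, tx - x2:bx - x2]
        scores.append(miou(o1, o2, num_classes))
    return float(np.mean(scores)) if scores else 1.0

# A low consistency score suggests the input may be adversarial, e.g.:
# is_adversarial = spatial_consistency(img, segment, 19) < 0.5  # threshold is illustrative
```

Because the patches are sampled at random per query, an adaptive attacker cannot know in advance which overlapping regions will be compared, which is what makes a check of this form hard to bypass even with full knowledge of the model and detection strategy.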

Related research:

- On the Robustness of Semantic Segmentation Models to Adversarial Attacks (11/27/2017)
- SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing (06/19/2019)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation (05/22/2023)
- Practical No-box Adversarial Attacks against DNNs (12/04/2020)
- ADC: Adversarial attacks against object Detection that evade Context consistency checks (10/24/2021)
- Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality (01/08/2018)
- Adversarial Examples for Edge Detection: They Exist, and They Transfer (06/02/2019)
