Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation

05/22/2023
by Kira Maag, et al.

State-of-the-art deep neural networks have proven to be highly powerful in a broad range of tasks, including semantic image segmentation. However, these networks are vulnerable to adversarial attacks, i.e., imperceptible perturbations added to the input image that cause incorrect predictions, which is hazardous in safety-critical applications like automated driving. While adversarial examples and defense strategies are well studied for the image classification task, research in the context of semantic segmentation remains limited. First works, however, show that the segmentation outcome can be severely distorted by adversarial attacks. In this work, we introduce an uncertainty-based method for the detection of adversarial attacks in semantic segmentation. We observe that uncertainty, as captured for example by the entropy of the output distribution, behaves differently on clean and perturbed images, and we use this property to distinguish between the two cases. Our method works in a light-weight and post-processing manner, i.e., we neither modify the model nor need knowledge of the process used for generating adversarial examples. In a thorough empirical analysis, we demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
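The core idea described in the abstract, namely that an entropy-based uncertainty score computed from the segmentation network's softmax output can separate clean from perturbed images, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' exact detector: it assumes NumPy softmax outputs of shape (C, H, W), uses the image-wise mean of the per-pixel entropy as the score, and flags an input when the score exceeds a threshold calibrated on clean validation images. The function names and the aggregation choice are illustrative assumptions.

```python
import numpy as np

def pixelwise_entropy(probs, eps=1e-12):
    """Shannon entropy per pixel for a softmax output `probs` of shape (C, H, W)."""
    return -np.sum(probs * np.log(probs + eps), axis=0)  # shape (H, W)

def detect_adversarial(probs, threshold):
    """Flag an input as potentially adversarial when its mean pixel entropy
    exceeds a threshold calibrated on clean validation images.

    The mean-entropy aggregation and the simple thresholding rule are
    illustrative assumptions; the paper may combine several uncertainty
    features in its post-processing detector.
    """
    score = float(pixelwise_entropy(probs).mean())
    return score > threshold, score

# Hypothetical usage with a random "softmax" map (19 classes, e.g. Cityscapes):
if __name__ == "__main__":
    logits = np.random.randn(19, 128, 256)
    probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    is_adv, score = detect_adversarial(probs, threshold=2.0)
    print(f"entropy score = {score:.3f}, flagged as adversarial: {is_adv}")
```

Because the score is computed only from the model's output distribution, this kind of detector is post-processing in the sense stated above: it requires no retraining, no access to gradients, and no knowledge of the attack used to generate the perturbation.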
