On the Robustness of Semantic Segmentation Models to Adversarial Attacks

11/27/2017
by Anurag Arnab, et al.

Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention, but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation, which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what is, to our knowledge, the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.
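Adversarial attacks on segmentation differ from their classification counterparts mainly in the loss: the perturbation is driven by the cross-entropy summed over every pixel's label rather than a single class score. The sketch below illustrates a single-step FGSM perturbation against a segmentation network; the torchvision fcn_resnet50 model and the epsilon value are stand-ins chosen for illustration, not the specific architectures or attack settings evaluated in the paper.

```python
import torch
import torch.nn.functional as F
import torchvision

# Stand-in segmentation network (an assumption for this sketch); any model
# producing per-pixel class logits of shape (N, C, H, W) works the same way.
model = torchvision.models.segmentation.fcn_resnet50(weights="DEFAULT").eval()


def fgsm_attack(image, labels, epsilon=4 / 255):
    """Single-step FGSM adapted to segmentation.

    image:  (N, 3, H, W) float tensor in [0, 1]
    labels: (N, H, W) long tensor of ground-truth class indices
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)["out"]            # per-pixel class scores
    loss = F.cross_entropy(logits, labels)  # averaged over all pixels
    loss.backward()
    # Move each pixel in the direction that increases the segmentation loss,
    # keeping the perturbation inside an l_inf ball of radius epsilon.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```

Iterative variants repeat the same gradient step with a smaller step size and re-project the perturbation onto the epsilon-ball after each update.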


Related research

10/11/2018 | Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Deep Neural Networks (DNNs) have been widely applied in various recognit...

05/23/2021 | Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation
Recent studies imply that deep neural networks are vulnerable to adversa...

05/22/2023 | Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation
State-of-the-art deep neural networks have proven to be highly powerful ...

11/22/2021 | Adversarial Examples on Segmentation Models Can be Easy to Transfer
Deep neural network-based image classification can be misled by adversar...

05/18/2023 | How Deep Learning Sees the World: A Survey on Adversarial Attacks and Defenses
Deep Learning is currently used to perform multiple tasks, such as objec...

07/17/2017 | Houdini: Fooling Deep Structured Prediction Models
Generating adversarial examples is a critical step for evaluating and im...

08/23/2021 | SegMix: Co-occurrence Driven Mixup for Semantic Segmentation and Adversarial Robustness
In this paper, we present a strategy for training convolutional neural n...
