Universal Adversarial Perturbations Against Semantic Image Segmentation

04/19/2017
by Jan-Hendrik Metzen, et al.

While deep learning is remarkably successful on perceptual tasks, it has also been shown to be vulnerable to adversarial perturbations of the input. Such perturbations are noise added to the input, generated specifically to fool the system while remaining quasi-imperceptible to humans. Worse, there even exist universal perturbations that are input-agnostic yet fool the network on the majority of inputs. While recent work has focused on image classification, this work proposes attacks against semantic image segmentation: we present an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output. We show empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs. Furthermore, we also show the existence of universal noise which removes a target class (e.g., all pedestrians) from the segmentation while leaving the segmentation mostly unchanged otherwise.
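The attack described above can be sketched as an iterated, input-agnostic gradient step toward a fixed target segmentation, with the accumulated perturbation clipped to an L∞ ball so it stays quasi-imperceptible. Below is a minimal numpy sketch, not the paper's exact procedure: the function `universal_perturbation` and its `grad_fn` interface are hypothetical stand-ins for backpropagating a target-segmentation loss (e.g., cross-entropy to the target labels) through a real segmentation network.

```python
import numpy as np

def universal_perturbation(images, target, grad_fn, eps=0.1, step=0.01, epochs=5):
    """Build one input-agnostic perturbation xi (same shape as a single image).

    For every training image, take an FGSM-style signed gradient step that
    *decreases* the loss toward the fixed target segmentation, then project
    xi back onto the L-infinity ball of radius eps.

    grad_fn(x, target) is a hypothetical callback returning dLoss/dx for a
    perturbed input x; in the real attack this is a backward pass through
    the segmentation network.
    """
    xi = np.zeros_like(images[0])
    for _ in range(epochs):
        for x in images:
            # Evaluate the gradient at the currently perturbed (and valid-range-clipped) input.
            g = grad_fn(np.clip(x + xi, 0.0, 1.0), target)
            # Signed descent step toward the target, then L-inf projection.
            xi = np.clip(xi - step * np.sign(g), -eps, eps)
    return xi
```

As a toy stand-in for a network, a squared-error "segmentation" loss 0.5·||x + ξ − target||² has gradient `x + xi - target`, so `grad_fn = lambda x, t: x - t` lets the sketch run end to end; the returned `xi` is always bounded by `eps`, matching the universal, norm-constrained perturbations the abstract describes.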


