Unrestricted Adversarial Attacks for Semantic Segmentation

10/06/2019
by Guangyu Shen, et al.

Semantic segmentation is one of the most impactful applications of machine learning; however, the robustness of segmentation models under adversarial attack is not well studied. In this paper, we focus on generating unrestricted adversarial examples for semantic segmentation models. We demonstrate a simple yet effective method for generating unrestricted adversarial examples using conditional generative adversarial networks (CGAN) without any hand-crafted metric. A naïve CGAN implementation, however, yields inferior image quality and a low attack success rate. Instead, we leverage the SPADE (spatially-adaptive denormalization) structure with an additional loss term, which is able to generate effective adversarial attacks in a single step. We validate our approach on the well-studied Cityscapes and ADE20K datasets, and demonstrate that our synthetic adversarial examples are not only realistic, but also improve the attack success rate by up to 41.0% compared with state-of-the-art adversarial attack methods, including the PGD attack.
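The core idea described in the abstract is to train a generator whose objective combines the usual GAN realism term with an additional loss that penalizes the target segmentation model for predicting the correct labels. The paper does not spell out the exact formulation here, so the following is a minimal sketch of one plausible combined objective: the generator minimizes its GAN loss while maximizing the victim model's per-pixel cross-entropy (all function names, the `lam` weight, and the flattened per-pixel representation are illustrative assumptions, not the authors' implementation).

```python
import math

def cross_entropy(probs, label):
    """Cross-entropy of one pixel's predicted class distribution vs. its true label."""
    return -math.log(probs[label])

def generator_loss(gan_loss, seg_probs, labels, lam=1.0):
    """Hypothetical combined adversarial objective.

    gan_loss:  the generator's usual realism loss (lower = more realistic)
    seg_probs: per-pixel class distributions predicted by the victim
               segmentation model on the generated image (flattened)
    labels:    the corresponding ground-truth class index per pixel
    lam:       weight of the attack term (assumed hyperparameter)

    Subtracting the mean cross-entropy means the generator is rewarded
    for images the segmentation model misclassifies, while the GAN term
    keeps them realistic.
    """
    ce = sum(cross_entropy(p, y) for p, y in zip(seg_probs, labels)) / len(labels)
    return gan_loss - lam * ce

# Toy 2-class, 2-pixel example: the same image realism (gan_loss=1.0),
# but one prediction set is correct and one is badly fooled.
correct = [[0.9, 0.1], [0.8, 0.2]]   # victim confident in true class 0
fooled  = [[0.1, 0.9], [0.2, 0.8]]   # victim confident in the wrong class
labels  = [0, 0]

loss_correct = generator_loss(1.0, correct, labels)
loss_fooled  = generator_loss(1.0, fooled, labels)
```

As expected under this formulation, the fooled predictions yield a lower generator loss than the correct ones, so gradient descent on this objective pushes the generator toward images that defeat the segmentation model.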


Related research

07/09/2018  Adaptive Adversarial Attack on Scene Text Recognition
Recent studies have shown that state-of-the-art deep learning models are...

07/17/2017  Houdini: Fooling Deep Structured Prediction Models
Generating adversarial examples is a critical step for evaluating and im...

06/14/2022  Proximal Splitting Adversarial Attacks for Semantic Segmentation
Classification has been the focal point of research on adversarial attac...

08/14/2019  DAPAS: Denoising Autoencoder to Prevent Adversarial Attack in Semantic Segmentation
Nowadays, deep learning techniques show dramatic performance on computer...

10/13/2021  Identification of Attack-Specific Signatures in Adversarial Examples
The adversarial attack literature contains a myriad of algorithms for cr...

05/26/2019  Generalizable Adversarial Attacks Using Generative Models
Adversarial attacks on deep neural networks traditionally rely on a cons...

08/03/2021  Triggering Failures: Out-Of-Distribution Detection by Learning from Local Adversarial Attacks in Semantic Segmentation
In this paper, we tackle the detection of out-of-distribution (OOD) obje...
