Fashion-Guided Adversarial Attack on Person Segmentation

04/17/2021
by Marc Treu, et al.

This paper presents the first adversarial-example-based method for attacking human instance segmentation networks (person segmentation networks for short), which are harder to fool than classification networks. We propose a novel Fashion-Guided Adversarial Attack (FashionAdv) framework that automatically identifies attackable regions in the target image in order to minimize the effect on image quality. It generates adversarial textures learned from fashion style images and overlays them on the clothing regions of the original image, making every person in the image invisible to person segmentation networks. The synthesized adversarial textures are inconspicuous and look natural to the human eye. The effectiveness of the attack is further enhanced by robustness training and by jointly attacking multiple components of the target network. Extensive experiments demonstrate that FashionAdv is robust both to image manipulations and to storage in cyberspace, while the attacked images remain natural in appearance. The code and data are publicly released on our project page: https://github.com/nii-yamagishilab/fashion_adv


