Adversarial Examples on Segmentation Models Can be Easy to Transfer

11/22/2021
by   Jindong Gu, et al.

Deep neural network-based image classifiers can be misled by adversarial examples with small, quasi-imperceptible perturbations. Moreover, adversarial examples created on one classification model can often fool a different model. This transferability of adversarial examples has recently attracted growing interest, since it makes black-box attacks on classification models feasible. As an extension of classification, semantic segmentation has also received much attention with respect to its adversarial robustness. However, the transferability of adversarial examples on segmentation models has not been systematically studied. In this work, we study this topic intensively. First, we explore the overfitting phenomenon of adversarial examples on classification and segmentation models. In contrast to the observation on classification models that transferability is limited by overfitting to the source model, we find that adversarial examples on segmentation models do not always overfit the source models. Even when no overfitting is present, the transferability of adversarial examples remains limited. We attribute this limitation to an architectural trait of segmentation models, namely multi-scale object recognition. We then propose a simple and effective method, dubbed dynamic scaling, to overcome this limitation. The high transferability achieved by our method shows that, in contrast to observations in previous work, adversarial examples created on a segmentation model can be easy to transfer to other segmentation models. Our analysis and proposals are supported by extensive experiments.
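The abstract does not spell out how dynamic scaling is implemented. As a rough, hedged illustration of the underlying idea (not the authors' exact algorithm), the sketch below runs a standard PGD-style untargeted attack on a segmentation model, but resizes the adversarial input to a randomly chosen scale at each step, so the perturbation must remain effective across the multiple scales at which segmentation models recognize objects. The function name, the scale set, and all hyperparameters are placeholders of our own.

```python
# Minimal sketch, assuming a PyTorch segmentation model that accepts
# variable input sizes and returns per-pixel logits (N, num_classes, H, W).
import random
import torch
import torch.nn.functional as F

def dynamic_scaling_attack(model, image, label, eps=8/255, alpha=2/255,
                           steps=20, scales=(0.75, 1.0, 1.25)):
    """image: (N, C, H, W) in [0, 1]; label: (N, H, W) class indices."""
    adv = image.clone().detach()
    h, w = image.shape[-2:]
    for _ in range(steps):
        adv.requires_grad_(True)
        # Resize the adversarial input to a random scale for this step.
        s = random.choice(scales)
        scaled = F.interpolate(adv, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(scaled)
        # Upsample logits back to label resolution before the loss.
        logits = F.interpolate(logits, size=(h, w), mode="bilinear",
                               align_corners=False)
        loss = F.cross_entropy(logits, label)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss; project back onto the L_inf ball of radius eps.
        adv = adv.detach() + alpha * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)
        adv = adv.clamp(0, 1)
    return adv.detach()
```

Sampling a fresh scale per iteration, rather than attacking at a single fixed resolution, is one plausible reading of "dynamic scaling": it discourages the perturbation from specializing to the source model's preferred scale, which the paper identifies as a cause of poor transfer.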
