SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness

07/25/2022
by Jindong Gu et al.

Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can be fooled by adding small, artificial, and imperceptible perturbations to input images. As one of the most effective defense strategies, adversarial training was proposed to address this vulnerability; adversarial examples are created and injected into the training data during training. The attack and defense of classification models have been studied intensively in past years. Semantic segmentation, as an extension of classification, has also received great attention recently. Recent work shows that a large number of attack iterations are required to create adversarial examples effective enough to fool segmentation models. This observation makes both robustness evaluation and adversarial training of segmentation models challenging. In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD. We also provide a convergence analysis showing that SegPGD creates more effective adversarial examples than PGD under the same number of attack iterations. Furthermore, we propose to apply SegPGD as the underlying attack method for segmentation adversarial training. Since SegPGD creates more effective adversarial examples, adversarial training with SegPGD boosts the robustness of segmentation models. Our proposals are verified with experiments on popular segmentation model architectures and standard segmentation datasets.
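To make the setting concrete, below is a minimal sketch of a PGD-style L_inf attack against a segmentation model in PyTorch. It is a generic baseline, not the authors' SegPGD implementation: it maximizes the mean per-pixel cross-entropy, whereas SegPGD additionally re-weights the loss of correctly versus wrongly classified pixels across iterations (noted only in the comments). All names, the epsilon/step-size values, and the assumed model interface (image batch in, per-pixel logits out) are illustrative assumptions.

```python
# Sketch of a generic PGD attack on a semantic segmentation model.
# Assumes: model maps images (N, C, H, W) -> per-pixel logits (N, K, H, W),
# labels are long tensors of shape (N, H, W), pixel values lie in [0, 1].
import torch
import torch.nn.functional as F

def pgd_segmentation_attack(model, images, labels, eps=8/255, alpha=2/255, iters=20):
    """L_inf PGD using the mean per-pixel cross-entropy as the attack objective."""
    adv = images.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        logits = model(adv)  # (N, K, H, W) per-pixel class scores
        # Vanilla PGD maximizes the cross-entropy averaged over all pixels.
        # SegPGD (per the paper's description) instead weights correctly and
        # wrongly classified pixels differently, shifting the weight over
        # iterations so the attack needs fewer iterations to be effective.
        loss = F.cross_entropy(logits, labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # gradient ascent step
            adv = images + (adv - images).clamp(-eps, eps)  # project into eps-ball
            adv = adv.clamp(0, 1)                           # keep valid pixel range
        adv = adv.detach()
    return adv
```

In adversarial training, examples produced by such an attack would be generated on the fly for each mini-batch and used as (part of) the training input; the abstract's argument is that a stronger attack within the same iteration budget yields more useful training examples and thus a more robust segmentation model.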

