Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation

05/23/2021
by Jinyu Yang, et al.

Recent studies imply that deep neural networks are vulnerable to adversarial examples: inputs with slight but intentional perturbations that are incorrectly classified by the network. Such vulnerability makes these models risky to deploy in security-sensitive applications (e.g., semantic segmentation in autonomous cars) and raises serious concerns about model reliability. For the first time, we comprehensively evaluate the robustness of existing unsupervised domain adaptation (UDA) methods and propose a robust UDA approach. Our work is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation. These observations motivate us to propose adversarial self-supervision UDA (ASSUDA), which maximizes the agreement between clean images and their adversarial examples with a contrastive loss in the output space. Extensive empirical studies on commonly used benchmarks demonstrate that ASSUDA is resistant to adversarial attacks.
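The core idea, enforcing agreement between a clean image and its adversarial counterpart via a contrastive loss over the segmentation outputs, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released implementation: the single-step FGSM attack, the pixel subsampling, and the exact InfoNCE form are assumptions made for brevity, and the loss used in ASSUDA may differ in detail.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.01):
    """Craft adversarial examples with one FGSM step (an illustrative attack choice)."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)  # (B, C, H, W) segmentation logits
    F.cross_entropy(logits, labels, ignore_index=255).backward()
    # Perturb each pixel along the sign of the input gradient.
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()

def output_space_contrastive_loss(out_clean, out_adv, tau=0.07, num_pixels=256):
    """InfoNCE-style agreement loss between clean and adversarial outputs.

    Each sampled pixel in the clean output is pulled toward the same pixel in
    the adversarial output (positive pair) and pushed away from the other
    sampled pixels (negatives). This is a hedged approximation of the paper's
    output-space contrastive loss.
    """
    B, C, H, W = out_clean.shape
    num_pixels = min(num_pixels, H * W)
    # Flatten spatial dims so each pixel becomes a C-dimensional vector.
    zc = out_clean.permute(0, 2, 3, 1).reshape(B, H * W, C)
    za = out_adv.permute(0, 2, 3, 1).reshape(B, H * W, C)
    # Subsample pixels to keep the similarity matrix small.
    idx = torch.randperm(H * W, device=out_clean.device)[:num_pixels]
    zc = F.normalize(zc[:, idx], dim=-1)
    za = F.normalize(za[:, idx], dim=-1)
    logits = torch.einsum("bnc,bmc->bnm", zc, za) / tau  # pairwise similarities (B, N, N)
    targets = torch.arange(num_pixels, device=logits.device).expand(B, -1)
    return F.cross_entropy(logits.reshape(-1, num_pixels), targets.reshape(-1))

# Sketch of one training step: enforce agreement between clean and attacked views.
# adv = fgsm_attack(model, x, y)
# loss = output_space_contrastive_loss(model(x), model(adv))
```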


Related research

- On the Robustness of Semantic Segmentation Models to Adversarial Attacks (11/27/2017)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation (05/22/2023)
- Exploring Adversarially Robust Training for Unsupervised Domain Adaptation (02/18/2022)
- Robust Prototypical Few-Shot Organ Segmentation with Regularized Neural-ODEs (08/26/2022)
- Houdini: Fooling Deep Structured Prediction Models (07/17/2017)
- Learning a Domain-Agnostic Visual Representation for Autonomous Driving via Contrastive Loss (03/10/2021)
- Robust Perception through Equivariance (12/12/2022)
