Detecting Adversarial Perturbations in Multi-Task Perception

03/02/2022
by   Marvin Klingner, et al.

While deep neural networks (DNNs) achieve impressive performance on environment perception tasks, their sensitivity to adversarial perturbations limits their use in practical applications. In this paper, we (i) propose a novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks (i.e., depth estimation and semantic segmentation). Specifically, adversarial perturbations are detected by inconsistencies between extracted edges of the input image, the depth output, and the segmentation output. To further improve this technique, we (ii) develop a novel edge consistency loss between all three modalities, thereby improving their initial consistency, which in turn supports our detection scheme. We verify our detection scheme's effectiveness by employing various known attacks and image noises. In addition, we (iii) develop a multi-task adversarial attack aiming at fooling both tasks as well as our detection scheme. Experimental evaluation on the Cityscapes and KITTI datasets shows that, under an assumption of a 5% false positive rate, up to 100% of images are correctly detected as adversarially perturbed, depending on the strength of the perturbation. Code will be available on GitHub. A short video at https://youtu.be/KKa6gOyWmH4 provides qualitative results.
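The detection idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes grayscale 2D arrays for the image, depth map, and segmentation map, uses a simple gradient-magnitude operator as a stand-in for the paper's edge extraction, and a cosine-style similarity with a hypothetical threshold as the consistency score.

```python
import numpy as np

def edge_map(x):
    # Gradient-magnitude edge map (stand-in for the paper's edge extraction),
    # normalized to [0, 1].
    gx = np.abs(np.diff(x, axis=1, prepend=x[:, :1]))
    gy = np.abs(np.diff(x, axis=0, prepend=x[:1, :]))
    e = gx + gy
    return e / (e.max() + 1e-8)

def edge_consistency(a, b):
    # Cosine-style similarity between two edge maps; 1.0 means identical edges.
    num = float((a * b).sum())
    den = float(np.sqrt((a * a).sum() * (b * b).sum())) + 1e-8
    return num / den

def detect_adversarial(image, depth, seg, threshold=0.5):
    # Flag the input as adversarial when the mean pairwise edge consistency
    # across image, depth, and segmentation drops below the threshold.
    # The threshold value here is illustrative, not from the paper.
    e_img, e_depth, e_seg = map(edge_map, (image, depth, seg))
    score = float(np.mean([edge_consistency(e_img, e_depth),
                           edge_consistency(e_img, e_seg),
                           edge_consistency(e_depth, e_seg)]))
    return score < threshold, score
```

On clean inputs, object boundaries tend to appear in all three modalities, so the score stays high; a perturbation that corrupts the depth or segmentation output breaks this agreement and lowers the score.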


