
Stereoscopic Universal Perturbations across Different Architectures and Datasets

12/12/2021
by   Zachary Berger, et al.

We study the effect of adversarial perturbations of images on deep stereo matching networks for the disparity estimation task. We present a method to craft a single set of perturbations that, when added to any stereo image pair in a dataset, can fool a stereo network into significantly altering the perceived scene geometry. Our perturbation images are "universal" in that they not only corrupt estimates of the network on the dataset they are optimized for, but also generalize to stereo networks with different architectures across different datasets. We evaluate our approach on multiple public benchmark datasets and show that our perturbations can increase the D1-error (akin to fooling rate) of state-of-the-art stereo networks from a baseline of roughly 1% to substantially higher values. We further investigate the effect of perturbations on the estimated scene geometry and identify the object classes that are most vulnerable. Our analysis of the activations of registered points between left and right images led us to find that certain architectural components, i.e., deformable convolution and explicit matching, can increase robustness against adversaries. We demonstrate that simply by designing networks with such components, one can reduce the effect of adversaries by up to 60.5%, a level of robustness comparable to that obtained with costly adversarial data augmentation.
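To make two of the quantities above concrete, here is a minimal NumPy sketch of (a) the D1-error metric, assuming the standard KITTI convention (a pixel is counted as erroneous when its disparity error exceeds both 3 px and 5% of the ground-truth disparity), and (b) applying a single "universal" perturbation pair to any stereo image pair. The function names `d1_error` and `perturb_pair` are illustrative, not from the paper's code.

```python
import numpy as np

def d1_error(pred, gt):
    """D1-error (KITTI convention): fraction of pixels whose disparity
    error exceeds both 3 px and 5% of the ground-truth disparity."""
    err = np.abs(pred - gt)
    bad = (err > 3.0) & (err > 0.05 * gt)
    return bad.mean()

def perturb_pair(left, right, noise_left, noise_right):
    """Add one fixed (universal) perturbation pair to a stereo pair,
    clipping the result back to the valid [0, 1] image range."""
    return (np.clip(left + noise_left, 0.0, 1.0),
            np.clip(right + noise_right, 0.0, 1.0))
```

For example, with a ground-truth disparity of 100 px everywhere, a 4 px error is not counted (it is above 3 px but below 5% of 100), while a 10 px error is; the same perturbation arrays can be reused across every image pair in the dataset, which is what makes the attack universal.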


Related research:

Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations (09/21/2020)
2T-UNET: A Two-Tower UNet with Depth Clues for Robust Stereo Depth Estimation (10/27/2022)
Targeted Adversarial Perturbations for Monocular Depth Prediction (06/12/2020)
Transferable Universal Adversarial Perturbations Using Generative Models (10/28/2020)
A study of the effect of JPG compression on adversarial images (08/02/2016)
Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels (02/03/2023)