Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations

09/21/2020
by Alex Wong, et al.

We study the effect of adversarial perturbations of images on the disparity estimates of deep learning models trained for stereo. We show that imperceptible additive perturbations can significantly alter the disparity map, and correspondingly the perceived geometry of the scene. These perturbations not only affect the specific model they are crafted for, but also transfer to models with different architectures trained with different loss functions. We show that, when used for adversarial data augmentation, our perturbations yield trained models that are more robust without sacrificing overall accuracy. This is unlike what has been observed in image classification, where adding perturbed images to the training set makes the model less vulnerable to adversarial perturbations, but at the cost of overall accuracy. We test our method on the most recent stereo networks and evaluate their performance on public benchmark datasets.
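The core attack is a gradient-based additive perturbation crafted against a trained stereo network. Below is a minimal one-step (FGSM-style) sketch of the idea in PyTorch; the stereo model stereo_net, the L1 disparity loss, and the step size eps are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def fgsm_stereo_perturbation(stereo_net, left, right, disparity_gt, eps=0.02):
    """Craft a small additive perturbation for a stereo image pair.

    One-step signed-gradient sketch: nudge both images in the direction
    that increases the disparity error, keeping the change bounded by
    eps so it stays visually imperceptible.
    """
    left = left.clone().detach().requires_grad_(True)
    right = right.clone().detach().requires_grad_(True)

    disparity = stereo_net(left, right)        # predicted disparity map
    loss = F.l1_loss(disparity, disparity_gt)  # error the attack increases
    loss.backward()

    # Bounded signed-gradient step on both images of the pair.
    adv_left = (left + eps * left.grad.sign()).clamp(0.0, 1.0).detach()
    adv_right = (right + eps * right.grad.sign()).clamp(0.0, 1.0).detach()
    return adv_left, adv_right
```

For the adversarial data augmentation described above, the same routine can be called inside the training loop, mixing the perturbed pairs with clean pairs in each batch so the model learns to resist the attack.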


Related research

12/12/2021  Stereoscopic Universal Perturbations across Different Architectures and Datasets
We study the effect of adversarial perturbations of images on deep stere...

01/17/2022  Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations
A limited amount of studies investigates the role of model-agnostic adve...

11/18/2018  DeepConsensus: using the consensus of features from multiple layers to attain robust image classification
We consider a classifier whose test set is exposed to various perturbati...

03/03/2019  A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
The linear and non-flexible nature of deep convolutional models makes th...

06/19/2018  Built-in Vulnerabilities to Imperceptible Adversarial Perturbations
Designing models that are robust to small adversarial perturbations of t...

12/04/2018  Adversarial Example Decomposition
Research has shown that widely used deep neural networks are vulnerable ...

07/28/2020  Cassandra: Detecting Trojaned Networks from Adversarial Perturbations
Deep neural networks are being widely deployed for many critical tasks d...
