Robust Perception through Analysis by Synthesis

05/23/2018
by Lukas Schott et al.

The intriguing susceptibility of deep neural networks to minimal input perturbations suggests that the gap between human and machine perception is still large. We argue that, despite much effort, even on MNIST the most successful defenses remain far from the robustness of human perception. We therefore reconsider MNIST and establish a novel defense inspired by the abundant feedback connections present in the human visual cortex. We suggest that this feedback plays a role in estimating the likelihood of a sensory stimulus with respect to the hidden causes inferred by the cortex and allows the brain to mute distracting patterns. We implement this analysis-by-synthesis idea in the form of a discriminatively fine-tuned Bayesian classifier built from a set of class-conditional variational autoencoders (VAEs). To evaluate model robustness we go to great lengths to find maximally effective adversarial attacks, including decision-based, score-based and gradient-based attacks. The results suggest that this ansatz yields state-of-the-art robustness on MNIST against L0, L2 and L-infinity perturbations, and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.
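The core idea can be sketched compactly: train one VAE per class, score an input by the evidence lower bound (ELBO) each class-conditional VAE assigns to it, and predict the class whose generative model explains the input best. The sketch below is an illustrative PyTorch rendering of that idea, not the authors' implementation; the class names, layer sizes and latent dimension are assumptions, and the discriminative fine-tuning and likelihood optimization described in the paper are omitted.

```python
# Minimal sketch of an analysis-by-synthesis classifier built from
# class-conditional VAEs (assumes PyTorch and MNIST inputs scaled to [0, 1]).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassVAE(nn.Module):
    """One VAE modelling p(x | y = c) for a single class c (illustrative)."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784))

    def elbo(self, x):
        # Encode, sample a latent with the reparameterization trick, decode.
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        logits = self.dec(z)
        # Bernoulli reconstruction log-likelihood minus KL to the unit Gaussian
        # prior: a per-sample lower bound on log p(x | y = c).
        rec = -F.binary_cross_entropy_with_logits(
            logits, x.flatten(1), reduction="none").sum(dim=1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
        return rec - kl

class ABSClassifier(nn.Module):
    """Bayesian classifier: pick the class whose VAE explains the input best."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.vaes = nn.ModuleList([ClassVAE() for _ in range(n_classes)])

    def forward(self, x):
        # One likelihood bound per class; under a uniform class prior the
        # argmax (or a softmax for scores) gives the class posterior.
        return torch.stack([vae.elbo(x) for vae in self.vaes], dim=1)
```

Usage would then be `pred = ABSClassifier()(x).argmax(dim=1)` after each per-class VAE has been trained on the images of its own class only.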

Related Research

02/28/2019 · Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN
Deep neural networks have been widely deployed in various machine learni...

11/12/2017 · Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples
Recently, researchers have discovered that the state-of-the-art object c...

01/09/2023 · 3D Shape Perception Integrates Intuitive Physics and Analysis-by-Synthesis
Many surface cues support three-dimensional shape perception, but people...

08/09/2019 · On the Adversarial Robustness of Neural Networks without Weight Transport
Neural networks trained with backpropagation, the standard algorithm of ...

03/27/2017 · Biologically inspired protection of deep networks from adversarial attacks
Inspired by biophysical principles underlying nonlinear dendritic comput...

10/08/2021 · Robustness Evaluation of Transformer-based Form Field Extractors via Form Attacks
We propose a novel framework to evaluate the robustness of transformer-b...

09/02/2020 · Perceptual Deep Neural Networks: Adversarial Robustness through Input Recreation
Adversarial examples have shown that albeit highly accurate, models lear...
