Neural Networks with Recurrent Generative Feedback

07/17/2020
by Yujia Huang, et al.

Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of the internal generative model and the external environment. Inspired by this, we enforce self-consistency in neural networks by incorporating generative recurrent feedback, and instantiate the idea on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces generative feedback with latent variables into existing CNN architectures, making consistent predictions via alternating MAP inference under a Bayesian framework. CNN-F shows considerably better adversarial robustness than regular feedforward CNNs on standard benchmarks. In addition, with higher V4 and IT neural predictivity, CNN-F produces object representations closer to primate vision than conventional CNNs.
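
The alternating feedforward/feedback loop can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption, not the authors' implementation: the layer shapes, iteration count, and the `CNNFSketch` name are made up for the example, and the paper's CNN-F ties the feedforward and feedback weights and derives the updates as MAP inference in a deconvolutional generative model, which this untied toy loop only approximates.

```python
import torch
import torch.nn as nn

class CNNFSketch(nn.Module):
    """Toy sketch of CNN-F-style recurrent generative feedback.

    Assumes CIFAR-sized 3x32x32 inputs; layer sizes and the iteration
    count are illustrative, not the paper's architecture.
    """

    def __init__(self, num_classes=10, iterations=5):
        super().__init__()
        self.iterations = iterations
        # Feedforward (recognition) path: input -> latent features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Generative (feedback) path: latent features -> reconstructed input.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):
        x_hat = x  # initialize the internal estimate with the raw input
        for _ in range(self.iterations):
            h = self.encoder(x_hat)   # feedforward pass (approximate MAP step)
            x_hat = self.decoder(h)   # generative feedback reconstructs the input
        return self.classifier(h.flatten(1)), x_hat

# Usage: logits from the final pass plus the last generated reconstruction.
model = CNNFSketch()
logits, recon = model(torch.randn(8, 3, 32, 32))
```

The point of the loop is that the classifier only ever sees features of the model's own reconstruction, so the prediction and the generative model are pushed toward agreement; in the paper this self-consistency is what the alternating MAP inference enforces.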
