Bottom-Up and Top-Down Reasoning with Hierarchical Rectified Gaussians

07/21/2015
by Peiyun Hu, et al.

Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent years. Such approaches tend to work in a unidirectional, bottom-up, feed-forward fashion. However, practical experience and biological evidence tell us that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work explores bidirectional architectures that also reason with top-down feedback: neural units are influenced by both lower- and higher-level units. We do so by treating units as rectified latent variables in a quadratic energy function, which can be seen as a hierarchical rectified Gaussian model (RGs). We show that RGs can be optimized with a quadratic program (QP), which can in turn be solved with a recurrent neural network (with rectified linear units). This allows RGs to be trained with GPU-optimized gradient descent. From a theoretical perspective, RGs help establish a connection between CNNs and hierarchical probabilistic models. From a practical perspective, RGs are well suited for detailed spatial tasks that can benefit from top-down reasoning. We illustrate them on the challenging task of keypoint localization under occlusion, where local bottom-up evidence may be misleading. We demonstrate state-of-the-art results on challenging benchmarks.
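
The computational idea summarized in the abstract is that inference in an RG reduces to a QP whose coordinate updates are rectified linear, so unrolling a few update sweeps gives a recurrent network in which each layer mixes bottom-up and top-down signals. The sketch below is a minimal, hypothetical illustration of that scheme in plain NumPy with fully connected layers; the paper itself works with convolutional architectures, and the names used here (bidirectional_pass, n_iters, and the toy dimensions) are illustrative, not taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def bidirectional_pass(x, W, b, n_iters=3):
    """Unrolled bottom-up/top-down inference sketch for rectified latent layers.

    Each hidden layer z[i] is updated from a bottom-up term (the layer below,
    through W[i]) plus a top-down term (the layer above, through W[i+1].T),
    followed by rectification. Unrolling a fixed number of such sweeps yields
    a recurrent network built entirely from ReLUs.
    """
    L = len(W)                      # number of hidden layers
    z = [None] * L

    # Initial bottom-up (feed-forward) pass, as in a standard CNN/MLP.
    h = x
    for i in range(L):
        h = relu(W[i] @ h + b[i])
        z[i] = h

    # Bottom-up + top-down refinement sweeps.
    for _ in range(n_iters):
        for i in range(L):
            bottom_up = W[i] @ (x if i == 0 else z[i - 1])
            top_down = W[i + 1].T @ z[i + 1] if i + 1 < L else 0.0
            z[i] = relu(bottom_up + top_down + b[i])
    return z

# Toy usage with random weights; dimensions are illustrative only.
rng = np.random.default_rng(0)
dims = [16, 32, 24, 8]              # input size followed by three hidden layers
W = [0.1 * rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
b = [np.zeros(dims[i + 1]) for i in range(3)]
z = bidirectional_pass(rng.standard_normal(dims[0]), W, b)
print([zi.shape for zi in z])       # [(32,), (24,), (8,)]
```

In this reading, each refinement sweep lets higher-level units push back on lower-level ones, which is the mechanism the abstract credits for handling cases where local bottom-up evidence (e.g., an occluded keypoint) is misleading.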

Related research

11/21/2017 · Deep Sparse Coding for Invariant Multimodal Halle Berry Neurons
Deep feed-forward convolutional neural networks (CNNs) have become ubiqu...

06/20/2018 · Task-Driven Convolutional Recurrent Models of the Visual System
Feed-forward convolutional neural networks (CNNs) are currently state-of...

12/24/2019 · FHDR: HDR Image Reconstruction from a Single LDR Image using Feedback Network
High dynamic range (HDR) image generation from a single exposure low dyn...

11/29/2021 · Recurrent Vision Transformer for Solving Visual Reasoning Problems
Although convolutional neural networks (CNNs) showed remarkable results ...

06/21/2020 · Hierarchical Reinforcement Learning for Deep Goal Reasoning: An Expressiveness Analysis
Hierarchical DQN (h-DQN) is a two-level architecture of feedforward neur...

01/25/2019 · A Neurally-Inspired Hierarchical Prediction Network for Spatiotemporal Sequence Learning and Prediction
In this paper we developed a hierarchical network model, called Hierarch...

10/23/2017 · Feedback-prop: Convolutional Neural Network Inference under Partial Evidence
In this paper, we propose an inference procedure for deep convolutional ...
