Learning to Adapt by Minimizing Discrepancy

11/30/2017
by Alexander G. Ororbia II, et al.

We explore whether useful temporal neural generative models can be learned from sequential data without back-propagation through time. We investigate the viability of a more neurocognitively grounded approach in the context of unsupervised generative modeling of sequences. Specifically, we build on predictive coding, a concept that has gained influence in cognitive science, and embed it in a neural framework. To do so, we develop a novel architecture, the Temporal Neural Coding Network, and its learning algorithm, Discrepancy Reduction. The underlying directed generative model is fully recurrent: it employs both structural feedback connections and temporal feedback connections, yielding information-propagation cycles that create local learning signals. This facilitates a unified bottom-up and top-down approach to information transfer within the architecture. Our proposed algorithm shows promise on the bouncing balls generative modeling problem, and further experiments are needed to probe the strengths and weaknesses of our approach.
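To make the local-learning idea concrete, below is a minimal, hypothetical sketch of a predictive-coding-style update: a latent state produces a top-down prediction of the input, the discrepancy (prediction error) drives local updates to both the state and the generative weights, and the state is carried forward across time steps instead of unrolling the network for back-propagation through time. The layer sizes, tanh nonlinearity, learning rate, and Hebbian-style update rule are illustrative assumptions, not the paper's actual equations.

```python
# A minimal predictive-coding-style sketch (illustrative assumptions
# throughout; this is not the paper's Temporal Neural Coding Network).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16                            # assumed toy layer sizes
W = rng.normal(scale=0.1, size=(n_in, n_hid))  # top-down generative weights

def step(x, z, W, lr=0.01):
    """One local inference/learning step on input x with latent state z."""
    x_hat = np.tanh(W @ z)       # top-down prediction of the input
    e = x - x_hat                # local discrepancy (prediction error)
    z = z + lr * (W.T @ e)       # move the state to reduce the discrepancy
    W = W + lr * np.outer(e, z)  # Hebbian-style local weight update
    return z, W, float(np.mean(e ** 2))

# Toy sequence standing in for video frames; z carries temporal context
# forward, so no unrolling/back-propagation through time is required.
xs = rng.normal(size=(20, n_in))
z = np.zeros(n_hid)
for x in xs:
    z, W, err = step(x, z, W)
```

In the full model, multiple such layers interact through structural (inter-layer) and temporal feedback connections; the single layer above only illustrates how a discrepancy signal can yield a purely local learning update.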
