Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference

by Louis Annabi, et al.

In this work, we build upon the Active Inference (AIF) and Predictive Coding (PC) frameworks to propose a neural architecture comprising a generative model for sensory prediction and a distinct generative model for motor trajectories. We highlight how sequences of sensory predictions can act as rails guiding the learning, control, and online adaptation of motor trajectories. We furthermore investigate the effects of bidirectional interactions between the motor and visual modules. The architecture is tested on the control of a simulated robotic arm learning to reproduce handwritten letters.
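To make the predictive-coding idea concrete, here is a minimal toy sketch of the core mechanism the abstract refers to: a latent state generates a top-down sensory prediction, and the state is updated by descending the prediction-error gradient. The linear generative model and all names (`W`, `mu`, `lr`) are illustrative assumptions, not taken from the paper's architecture.

```python
import numpy as np

# Toy predictive-coding loop (illustrative, not the paper's model):
# a latent state mu produces a sensory prediction through a fixed
# linear generative model W, and is refined by gradient descent on
# the squared prediction error.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))     # generative weights: latent -> sensory
x = rng.normal(size=4)          # observed sensory input
mu = np.zeros(2)                # latent state estimate
lr = 0.05                       # inference step size

for _ in range(500):
    pred = W @ mu               # top-down sensory prediction
    err = x - pred              # bottom-up prediction error
    mu += lr * (W.T @ err)      # update latent state to reduce error

residual = np.linalg.norm(x - W @ mu)
```

After inference, the residual error is the part of the input the generative model cannot explain; in the full AIF setting, action would additionally change the input itself so as to reduce this error.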

