End-to-End Pixel-Based Deep Active Inference for Body Perception and Action

by Cansu Sancaktar, et al.

We present a pixel-based deep Active Inference algorithm (PixelAI) inspired by human body perception and successfully validated on robot body perception and action as a use case. Our algorithm combines the free energy principle from neuroscience, rooted in variational inference, with deep convolutional decoders to scale to raw image inputs and provide online adaptive inference. The approach enables the robot to perform 1) dynamic estimation of its arm pose using only raw monocular camera images and 2) autonomous reaching toward "imagined" arm poses in visual space. We statistically analyzed the algorithm's performance on both a simulated and a real Nao robot. Results show how the same algorithm handles both perception and action, modelled as a single inference optimization problem.
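The core idea — perception and action as gradient descent on the same pixel-level free energy — can be illustrated with a minimal sketch. A linear map stands in for the paper's deep convolutional decoder, and all names (`W`, `mu`, `q`, step sizes) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent, n_pixels = 2, 16
W = rng.normal(size=(n_pixels, n_latent))  # toy linear "decoder" weights

def decode(mu):
    """Predicted image from the latent arm-state belief
    (stand-in for the deep convolutional decoder)."""
    return W @ mu

lr = 0.02  # illustrative gradient step size

# --- Perception: infer the arm state behind an observed image ---
true_state = np.array([1.0, -0.5])
observed = decode(true_state)          # toy "camera image"
mu = np.zeros(n_latent)                # belief over arm state
for _ in range(500):
    err = observed - decode(mu)        # pixel prediction error
    # Under a Gaussian likelihood, the free-energy gradient flows back
    # through the decoder Jacobian (here simply W.T).
    mu += lr * W.T @ err

# --- Action: reach an "imagined" arm pose in visual space ---
goal = decode(np.array([-0.8, 0.3]))   # imagined goal image
q = mu.copy()                          # joint configuration
for _ in range(500):
    err = goal - decode(q)
    q += lr * W.T @ err                # move joints down the same error gradient
```

Both loops minimize the identical pixel prediction error; only the "target" image differs (current observation for perception, imagined goal for action), which is the sense in which perception and action reduce to one inference problem.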

