Robotic self-representation improves manipulation skills and transfer learning

by Phuong D. H. Nguyen, et al.

Cognitive science suggests that self-representation is critical for learning and problem-solving. However, computational methods relating this claim to cognitively plausible robots and reinforcement learning are lacking. In this paper, we bridge this gap with a model, named multimodal BidAL, that learns bidirectional action-effect associations to encode representations of the body schema and the peripersonal space from multisensory information. Through three different robotic experiments, we demonstrate that this approach significantly stabilizes learning-based problem-solving under noisy conditions and improves transfer learning of robotic manipulation skills.
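To illustrate the core idea of bidirectional action-effect associations, here is a minimal sketch, assuming a toy linear body model rather than the paper's multimodal BidAL architecture: a forward model predicts the sensory effect of a motor command, and an inverse model recovers the command that produced an observed effect, and the two should be mutually consistent.

```python
# Hypothetical sketch of bidirectional action-effect learning
# (NOT the paper's multimodal BidAL model): a linear forward model
# predicts the sensory effect of an action, and a linear inverse
# model recovers the action from an observed effect.
import numpy as np

rng = np.random.default_rng(0)

# Toy "body": an unknown linear mapping from 2-D motor commands to 3-D effects.
true_map = rng.normal(size=(3, 2))
actions = rng.normal(size=(500, 2))                            # motor commands
effects = actions @ true_map.T + 0.01 * rng.normal(size=(500, 3))

# Forward model: action -> predicted effect (fit by least squares).
fwd, *_ = np.linalg.lstsq(actions, effects, rcond=None)
# Inverse model: effect -> predicted action (fit by least squares).
inv, *_ = np.linalg.lstsq(effects, actions, rcond=None)

# Bidirectional consistency check: imagining the effect of an action
# and then inverting that effect should recover the original action.
a = rng.normal(size=2)
predicted_effect = a @ fwd
recovered_action = predicted_effect @ inv
print(np.allclose(recovered_action, a, atol=0.05))
```

The consistency between the two directions is what makes such a representation useful for grounding a body schema: the agent can both anticipate the sensory consequences of its commands and infer which command would achieve a desired effect.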




End-to-end Reinforcement Learning of Robotic Manipulation with Robust Keypoints Representation

We present an end-to-end Reinforcement Learning (RL) framework for roboti...

Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems

This paper was motivated by the problem of how to make robots fuse and t...

Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance Action Space

Intelligent agents must be able to think fast and slow to perform elabor...

Efficient Bimanual Manipulation Using Learned Task Schemas

We address the problem of effectively composing skills to solve sparse-r...

ELSIM: End-to-end learning of reusable skills through intrinsic motivation

Taking inspiration from developmental learning, we present a novel reinf...

CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning

Despite recent successes of reinforcement learning (RL), it remains a ch...

Invariant Feature Mappings for Generalizing Affordance Understanding Using Regularized Metric Learning

This paper presents an approach for learning invariant features for obje...