SEMI: Self-supervised Exploration via Multisensory Incongruity

by Jianren Wang et al.

Efficient exploration is a long-standing problem in reinforcement learning. In this work, we introduce a self-supervised exploration policy by incentivizing the agent to maximize multisensory incongruity, which can be measured in two aspects: perception incongruity and action incongruity. The former represents the uncertainty in the multisensory fusion model, while the latter represents the uncertainty in the agent's policy. Specifically, an alignment predictor is trained to detect whether multiple sensory inputs are aligned, and its prediction error is used to measure perception incongruity. The policy takes multisensory observations with sensory-wise dropout as input and outputs actions for exploration; the variance of these actions is then used to measure action incongruity. Our formulation allows the agent to learn skills by exploring in a self-supervised manner without any external rewards. In addition, our method enables the agent to learn a compact multimodal representation from hard examples, which further improves the sample efficiency of policy learning. We demonstrate the efficacy of this formulation across a variety of benchmark environments, including object manipulation and audio-visual games.
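The two incongruity signals described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `alignment_predictor` stands in for a learned binary classifier over paired sensory features, perception incongruity is its cross-entropy error, and action incongruity is the variance of policy outputs under randomly dropped sensory streams. All function names, the logistic predictor, and the dropout scheme are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def alignment_predictor(vision_feat, audio_feat, w):
    """Hypothetical alignment predictor: a logistic score for whether
    the two sensory streams are aligned (stand-in for a trained net)."""
    z = np.concatenate([vision_feat, audio_feat]) @ w
    return 1.0 / (1.0 + np.exp(-z))


def perception_incongruity(vision_feat, audio_feat, w, aligned=1.0):
    """Cross-entropy error of the alignment predictor, used as an
    intrinsic reward: hard-to-classify pairs give high incongruity."""
    p = alignment_predictor(vision_feat, audio_feat, w)
    eps = 1e-8
    return -(aligned * np.log(p + eps) + (1.0 - aligned) * np.log(1.0 - p + eps))


def action_incongruity(policy, obs, n_masks=8, drop_prob=0.5):
    """Mean variance of policy outputs under sensory-wise dropout:
    each pass randomly zeroes out entire sensory modalities."""
    actions = []
    for _ in range(n_masks):
        mask = {k: float(rng.random() > drop_prob) for k in obs}
        masked = {k: v * mask[k] for k, v in obs.items()}
        actions.append(policy(masked))
    return np.var(np.stack(actions), axis=0).mean()
```

In a full system, the sum of the two signals would serve as the intrinsic reward driving exploration, with the alignment predictor trained jointly on aligned and misaligned sensory pairs.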




