SEMI: Self-supervised Exploration via Multisensory Incongruity

09/26/2020
by   Jianren Wang, et al.

Efficient exploration is a long-standing problem in reinforcement learning. In this work, we introduce a self-supervised exploration policy that incentivizes the agent to maximize multisensory incongruity, measured in two aspects: perception incongruity and action incongruity. The former captures the uncertainty of the multisensory fusion model, while the latter captures the uncertainty of the agent's policy. Specifically, an alignment predictor is trained to detect whether multiple sensory inputs are aligned, and its prediction error measures perception incongruity. The policy takes multisensory observations with sensory-wise dropout as input and outputs actions for exploration; the variance of these actions measures action incongruity. Our formulation allows the agent to learn skills by exploring in a self-supervised manner, without any external rewards. In addition, our method enables the agent to learn a compact multimodal representation from hard examples, which further improves the sample efficiency of policy learning. We demonstrate the efficacy of this formulation across a variety of benchmark environments, including object manipulation and audio-visual games.
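To make the action-incongruity idea concrete, here is a minimal sketch in NumPy: a toy policy is evaluated several times on the same multisensory observation, each time with a different random sensory-wise dropout mask (whole modalities zeroed out), and the variance of the resulting actions serves as the intrinsic exploration signal. All names (`policy`, `action_incongruity`, the linear policy itself) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(obs, weights):
    """Toy linear policy mapping a fused multisensory observation to an action."""
    return np.tanh(obs @ weights)

def action_incongruity(obs_by_modality, weights, n_samples=8, drop_prob=0.5, rng=rng):
    """Estimate action incongruity: the variance of policy outputs under
    sensory-wise dropout, where each modality is independently zeroed out.
    Hypothetical sketch; the paper's policy is a learned network, not linear."""
    actions = []
    for _ in range(n_samples):
        dropped = []
        for modality in obs_by_modality:
            keep = rng.random() >= drop_prob
            dropped.append(modality if keep else np.zeros_like(modality))
        fused = np.concatenate(dropped)  # naive fusion by concatenation
        actions.append(policy(fused, weights))
    actions = np.stack(actions)
    # Intrinsic reward: mean per-dimension variance across dropout samples.
    return actions.var(axis=0).mean()

# Example: two modalities (e.g., vision features and touch features).
vision = rng.normal(size=4)
touch = rng.normal(size=3)
weights = rng.normal(size=(7, 2))
r_int = action_incongruity([vision, touch], weights)
```

States on which the policy's action changes sharply depending on which senses are available receive a high intrinsic reward, steering exploration toward observations where the modalities disagree. Perception incongruity would be computed analogously, using the alignment predictor's classification error instead of action variance.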

Related research

06/10/2019
Self-Supervised Exploration via Disagreement
Efficient exploration is a long-standing problem in sensorimotor learnin...

12/02/2021
SEAL: Self-supervised Embodied Active Learning using Exploration and 3D Consistency
In this paper, we explore how we can build upon the data and models of I...

03/20/2023
Learning to Explore Informative Trajectories and Samples for Embodied Perception
We are witnessing significant progress on perception models, specificall...

11/29/2021
Exploring Alignment of Representations with Human Perception
We argue that a valuable perspective on when a model learns good represe...

07/15/2020
Active World Model Learning with Progress Curiosity
World models are self-supervised predictive models of how the world evol...

07/28/2019
Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks
Contact-rich manipulation tasks in unstructured environments often requi...

03/13/2023
Fast exploration and learning of latent graphs with aliased observations
Consider this scenario: an agent navigates a latent graph by performing ...
