Touch-based Curiosity for Sparse-Reward Tasks

04/01/2021
by Sai Rajeswar et al.

Robots in many real-world settings have access to force/torque sensors in their grippers, and tactile sensing is often necessary in tasks that involve contact-rich motion. In this work, we leverage surprise from mismatches in touch feedback to guide exploration in hard sparse-reward reinforcement learning tasks. Our approach, Touch-based Curiosity (ToC), learns what visible object interactions are supposed to "feel" like. We encourage exploration by rewarding interactions where the expectation and the experience do not match. In our proposed method, an initial task-independent exploration phase is followed by an on-task learning phase, in which the original interactions are relabeled with on-task rewards. We test our approach on a range of touch-intensive robot-arm tasks (e.g., pushing objects, opening doors), which we also release as part of this work. Across multiple experiments in a simulated setting, we demonstrate that our method is able to learn these difficult tasks through sparse reward and curiosity alone. We compare our cross-modal approach to single-modality (touch-only or vision-only) approaches as well as to other curiosity-based methods, and find that our method performs better and is more sample-efficient.
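The core idea, rewarding the agent when touch feedback differs from what a vision-conditioned model expects, can be sketched roughly as follows. This is a minimal illustration rather than the paper's implementation: the network shape, feature dimensions, and the use of a plain squared prediction error are assumptions made for the sketch.

```python
import torch
import torch.nn as nn


class CrossModalCuriosity(nn.Module):
    """Touch-from-vision predictor whose error serves as an intrinsic reward (sketch)."""

    def __init__(self, vision_dim=128, touch_dim=6, hidden_dim=256):
        super().__init__()
        # Predicts the force/torque reading the agent expects to "feel" given the
        # current visual features; all sizes here are illustrative assumptions.
        self.predictor = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, touch_dim),
        )

    def forward(self, vision_feat, touch_obs):
        # vision_feat: (batch, vision_dim) encoded camera observation
        # touch_obs:   (batch, touch_dim) measured force/torque signal
        predicted_touch = self.predictor(vision_feat)
        # Mismatch between expected and experienced touch.
        return ((predicted_touch - touch_obs) ** 2).mean(dim=-1)


# Usage sketch: the same error acts as the predictor's training loss and as the
# curiosity bonus combined with the environment's sparse reward.
model = CrossModalCuriosity()
vision_feat = torch.randn(32, 128)
touch_obs = torch.randn(32, 6)
error = model(vision_feat, touch_obs)
intrinsic_reward = error.detach()   # reward surprising touch interactions
loss = error.mean()                 # train the predictor to reduce surprise
loss.backward()
```

In the two-phase setup described above, a bonus of this kind would drive the initial task-independent exploration phase, and the collected interactions would later be relabeled with the sparse on-task reward.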


