Learning Object-conditioned Exploration using Distributed Soft Actor Critic

07/29/2020
by Ayzaan Wahid, et al.
Object navigation is the task of navigating to an object of a given label in a complex, unexplored environment. In its general form, this problem poses several challenges for robotics: semantic exploration of unknown environments in search of an object, and low-level control. In this work we study object-guided exploration and low-level control, and present an end-to-end trained navigation policy that achieves a success rate of 0.68 and an SPL of 0.58 on unseen, visually complex scans of real homes. We propose a highly scalable implementation of an off-policy reinforcement learning algorithm, distributed Soft Actor-Critic, which allows the system to consume 98M experience steps in 24 hours on 8 GPUs. Our system learns to control a differential-drive mobile base in simulation from a stack of high-dimensional observations commonly available on robotic platforms. The learned policy exhibits object-guided exploratory behavior and low-level control learned purely from experience in realistic environments.
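The abstract does not spell out the Soft Actor-Critic update the paper builds on. As background, a minimal sketch of the standard soft Bellman target used to train SAC's twin critics is shown below; this is a generic illustration of the algorithm family, not the paper's distributed implementation, and the helper name `soft_critic_target` is hypothetical:

```python
import numpy as np

def soft_critic_target(rewards, next_q1, next_q2, next_log_pi,
                       gamma=0.99, alpha=0.2, dones=None):
    """Soft Bellman backup used by SAC critics (hypothetical helper).

    Computes y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s')),
    where the min over twin critics reduces overestimation bias and the
    -alpha * log pi term adds the entropy bonus that makes the backup "soft".
    All arguments are NumPy arrays over a batch of transitions.
    """
    if dones is None:
        dones = np.zeros_like(rewards)
    # Entropy-regularized value of the next state under the current policy.
    soft_value = np.minimum(next_q1, next_q2) - alpha * next_log_pi
    return rewards + gamma * (1.0 - dones) * soft_value
```

In a distributed setup like the one the abstract describes, many simulation actors would fill a shared replay buffer while GPU learners repeatedly sample batches and regress the critics toward this target; the target itself is unchanged by the distribution strategy.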


1 Introduction