A Critical Investigation of Deep Reinforcement Learning for Navigation

02/07/2018
by Vikas Dhiman, et al.

The navigation problem is classically approached in two steps: an exploration step, where map-information about the environment is gathered, and an exploitation step, where this information is used to navigate efficiently. Deep reinforcement learning (DRL) algorithms, alternatively, approach the problem of navigation in an end-to-end fashion. Inspired by the classical approach, we ask whether DRL algorithms are able to inherently explore, gather and exploit map-information over the course of navigation. We build upon the work of Mirowski et al. [2017] and introduce a systematic suite of experiments that vary three parameters: the agent's starting location, the agent's target location, and the maze structure. We choose evaluation metrics that explicitly measure the algorithm's ability to gather and exploit map-information. Our experiments show that when trained and tested on the same maps, the algorithm successfully gathers and exploits map-information. However, when trained and tested on different sets of maps, the algorithm fails to transfer the ability to gather and exploit map-information to unseen maps. Furthermore, we find that when the goal location is randomized and the map is kept static, the algorithm is able to gather and exploit map-information, but the exploitation is far from optimal. We open-source our experimental suite in the hope that it serves as a framework for the comparison of future algorithms and leads to the discovery of robust alternatives to classical navigation methods.
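To make the experimental design concrete, the sketch below shows one way such a suite could be enumerated over the three varied parameters (starting location, target location, maze structure) and the train/test map split. It is a minimal illustration, not the authors' released code; the names `NavExperiment` and `build_experiment_suite` are hypothetical.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class NavExperiment:
    """One experimental condition in the (hypothetical) suite."""
    maze_id: int                 # which maze layout is used
    random_spawn: bool           # is the agent's starting location randomized?
    random_goal: bool            # is the target location randomized?
    test_on_unseen_maps: bool    # evaluate on held-out mazes instead of training mazes

def build_experiment_suite(n_mazes: int = 10) -> list[NavExperiment]:
    """Enumerate the full grid of conditions over the three varied parameters
    plus the same-maps vs. unseen-maps evaluation split."""
    return [
        NavExperiment(maze_id, random_spawn, random_goal, unseen)
        for maze_id, random_spawn, random_goal, unseen in product(
            range(n_mazes), (False, True), (False, True), (False, True)
        )
    ]

if __name__ == "__main__":
    # Print a few conditions to show the structure of the grid.
    for exp in build_experiment_suite(n_mazes=2)[:4]:
        print(exp)
```

Enumerating conditions as a grid like this makes it straightforward to attribute a success or failure (e.g., poor transfer to unseen maps) to a specific varied parameter rather than to an uncontrolled change in the setup.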

