ARiADNE: A Reinforcement learning approach using Attention-based Deep Networks for Exploration

01/27/2023
by   Yuhong Cao, et al.

In autonomous robot exploration tasks, a mobile robot needs to actively explore and map an unknown environment as fast as possible. Since the environment is revealed only gradually during exploration, the robot must frequently re-plan its path online as new information is acquired by onboard sensors and used to update its partial map. While state-of-the-art exploration planners are frontier- and sampling-based, encouraged by recent developments in deep reinforcement learning (DRL) we propose ARiADNE, an attention-based neural approach for real-time, non-myopic path planning in autonomous exploration. ARiADNE learns dependencies at multiple spatial scales between areas of the agent's partial map, and implicitly predicts the potential gains associated with exploring those areas. This allows the agent to sequence movement actions that balance the natural trade-off between exploitation/refinement of the map in known areas and exploration of new areas. We experimentally demonstrate that our method outperforms both learning-based and non-learning state-of-the-art baselines in terms of average trajectory length to complete exploration across hundreds of simplified 2D indoor scenarios. We further validate our approach in high-fidelity Robot Operating System (ROS) simulations, where we consider a real sensor model and a realistic low-level motion controller, toward deployment on real robots.
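As a rough illustration of the kind of architecture the abstract describes, the PyTorch sketch below shows one way an attention-based exploration policy could be structured: candidate viewpoints on the partial map are treated as nodes with small feature vectors, a self-attention encoder captures dependencies between map areas, and a per-node head scores each node as the next waypoint. This is a minimal sketch under our own assumptions, not the authors' actual network; the class name MapAttentionPolicy, the feature layout, and all hyperparameters are hypothetical.

```python
from typing import Optional

import torch
import torch.nn as nn


class MapAttentionPolicy(nn.Module):
    """Self-attention policy over candidate viewpoints on a partial map.

    Hypothetical sketch: each node carries a small feature vector
    (e.g., normalized position plus local frontier/utility cues).
    Self-attention lets every node attend to every other node, so the
    network can pick up dependencies between distant map areas; a
    linear head then scores each node as the robot's next waypoint.
    """

    def __init__(self, feat_dim: int = 4, embed_dim: int = 64,
                 num_heads: int = 4, num_layers: int = 2) -> None:
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=2 * embed_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.score = nn.Linear(embed_dim, 1)  # one logit per node

    def forward(self, node_feats: torch.Tensor,
                padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # node_feats: (batch, num_nodes, feat_dim); padding_mask: True = padded
        h = self.encoder(self.embed(node_feats),
                         src_key_padding_mask=padding_mask)
        logits = self.score(h).squeeze(-1)  # (batch, num_nodes)
        if padding_mask is not None:
            logits = logits.masked_fill(padding_mask, float("-inf"))
        return logits


if __name__ == "__main__":
    policy = MapAttentionPolicy()
    feats = torch.rand(1, 32, 4)              # 32 candidate viewpoints
    probs = torch.softmax(policy(feats), -1)  # distribution over waypoints
    print(probs.argmax(dim=-1))
```

In an RL training setup such as the one the paper describes, the per-node logits would feed a softmax policy optimized against an exploration reward; the padding mask keeps dummy nodes from being selected when maps with different numbers of candidate viewpoints are batched together.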


Related research

03/11/2023 · Spatio-Temporal Attention Network for Persistent Monitoring of Multiple Mobile Targets
This work focuses on the persistent monitoring problem, where a set of t...

03/09/2023 · Intent-based Deep Reinforcement Learning for Multi-agent Informative Path Planning
In multi-agent informative path planning (MAIPP), agents must collective...

02/28/2022 · Fast and Compute-efficient Sampling-based Local Exploration Planning via Distribution Learning
Exploration is a fundamental problem in robotics. While sampling-based p...

09/13/2023 · Learning to Explore Indoor Environments using Autonomous Micro Aerial Vehicles
In this paper, we address the challenge of exploring unknown indoor aeri...

07/14/2023 · Reinforcement Learning with Frontier-Based Exploration via Autonomous Environment
Active Simultaneous Localisation and Mapping (SLAM) is a critical proble...

10/06/2016 · Towards Cognitive Exploration through Deep Reinforcement Learning for Mobile Robots
Exploration in an unknown environment is the core functionality for mobi...

07/31/2023 · Bi-Level Image-Guided Ergodic Exploration with Applications to Planetary Rovers
We present a method for image-guided exploration for mobile robotic syst...
