Memory-based Deep Reinforcement Learning for Obstacle Avoidance in UAV with Limited Environment Knowledge

11/08/2018
by Abhik Singla, et al.

This paper presents our method for enabling a UAV quadrotor, equipped with a monocular camera, to autonomously avoid collisions with obstacles in unstructured and unknown indoor environments. Compared to obstacle avoidance for ground vehicular robots, UAV navigation brings additional challenges because the UAV's motion is no longer constrained to a well-defined indoor ground or street environment. Horizontal structures in indoor and outdoor environments, such as decorative items, furnishings, ceiling fans, sign-boards, and tree branches, become relevant obstacles, unlike for ground vehicular robots. Thus, obstacle avoidance methods developed for ground robots are clearly inadequate for UAV navigation. Current control methods that use monocular images for UAV obstacle avoidance are heavily dependent on environment information, yet these controllers do not fully retain and utilize the extensively available information about the ambient environment for decision making. We propose a deep reinforcement learning based method for UAV obstacle avoidance (OA) and autonomous exploration that retains and exploits exactly this information. The crucial idea in our method is the concept of partial observability and how a UAV can retain relevant information about the environment structure to make better future navigation decisions. Our OA technique uses recurrent neural networks with temporal attention and provides better results than prior works in terms of distance covered during navigation without collisions. In addition, our technique has a high inference rate (a key factor in robotic applications) and is energy-efficient, as it minimizes oscillatory motion of the UAV and reduces power wastage.
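
To make the described architecture more concrete, below is a minimal sketch of a recurrent Q-network with temporal attention over a short history of monocular frames, in the spirit of the approach above. It is an illustrative reconstruction in PyTorch, not the authors' code: the class name RecurrentAttentionQNet, the layer sizes, the 84x84 grayscale input, and the additive attention scoring are all assumptions made for the example.

```python
# Hedged sketch: a recurrent Q-network with temporal attention over a short
# frame history, as a rough illustration of the kind of model the abstract
# describes. Architecture details here are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentAttentionQNet(nn.Module):
    """CNN encoder -> LSTM over the last T frames -> temporal attention -> Q-values."""

    def __init__(self, num_actions: int, hidden_size: int = 256):
        super().__init__()
        # Convolutional encoder for monocular (grayscale, 84x84) frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=64 * 7 * 7, hidden_size=hidden_size,
                            batch_first=True)
        # Temporal attention: score each LSTM output, softmax over the time axis.
        self.attn_score = nn.Linear(hidden_size, 1)
        self.q_head = nn.Linear(hidden_size, num_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, 84, 84)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.reshape(b * t, *frames.shape[2:]))
        outputs, _ = self.lstm(feats.reshape(b, t, -1))       # (b, t, hidden)
        weights = F.softmax(self.attn_score(outputs), dim=1)  # (b, t, 1)
        context = (weights * outputs).sum(dim=1)              # (b, hidden)
        return self.q_head(context)                           # (b, num_actions)


if __name__ == "__main__":
    net = RecurrentAttentionQNet(num_actions=3)
    dummy = torch.zeros(2, 4, 1, 84, 84)  # batch of 2, 4-frame history
    print(net(dummy).shape)  # torch.Size([2, 3])
```

The attention weights over the LSTM outputs let the Q-values lean on the frames carrying the most obstacle-relevant structure, which is the role temporal attention plays in the partially observable setting discussed above.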


Related research

04/07/2023
UAV Obstacle Avoidance by Human-in-the-Loop Reinforcement in Arbitrary 3D Environment
This paper focuses on the continuous control of the unmanned aerial vehi...

02/10/2020
Autonomous quadrotor obstacle avoidance based on dueling double deep recurrent Q-learning with monocular vision
The rapid development of unmanned aerial vehicles (UAV) puts forward a h...

09/14/2022
Distributed Multi-Robot Obstacle Avoidance via Logarithmic Map-based Deep Reinforcement Learning
Developing a safe, stable, and efficient obstacle avoidance policy in cr...

03/11/2021
A Vision Based Deep Reinforcement Learning Algorithm for UAV Obstacle Avoidance
Integration of reinforcement learning with unmanned aerial vehicles (UAV...

08/26/2022
Robust and Efficient Depth-based Obstacle Avoidance for Autonomous Miniaturized UAVs
Nano-size drones hold enormous potential to explore unknown and complex ...

09/01/2022
Monocular Camera-based Complex Obstacle Avoidance via Efficient Deep Reinforcement Learning
Deep reinforcement learning has achieved great success in laser-based co...
