Learning Perception-Aware Agile Flight in Cluttered Environments

10/04/2022
by Yunlong Song et al.

Recently, neural control policies have outperformed existing model-based planning-and-control methods for autonomously navigating quadrotors through cluttered environments in minimum time. However, they are not perception-aware, a crucial requirement in vision-based navigation due to the camera's limited field of view and the underactuated nature of a quadrotor. We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in cluttered environments. Our method combines imitation learning and reinforcement learning (RL) by leveraging a privileged learning-by-cheating framework. Using RL, we first train a perception-aware teacher policy with full-state information to fly in minimum time through cluttered environments. Then, we use imitation learning to distill its knowledge into a vision-based student policy that perceives the environment only via a camera. Our approach tightly couples perception and control, showing a significant advantage in computation speed (10x faster) and success rate. We demonstrate the closed-loop control performance using a physical quadrotor and hardware-in-the-loop simulation at speeds of up to 50 km/h.
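The teacher-student distillation described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the teacher here is a stand-in linear map rather than an RL-trained policy, the "camera" is a hypothetical linear sensor model, and the student is fit by plain least-squares behaviour cloning of the privileged teacher's actions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-ins (dimensions and maps are illustrative) --------
STATE_DIM, OBS_DIM, ACT_DIM = 12, 32, 4
W_teacher = rng.normal(size=(STATE_DIM, ACT_DIM))

def teacher_policy(state):
    """Privileged teacher: acts on full-state information.
    In the paper this is a perception-aware policy trained with RL."""
    return state @ W_teacher

def render_observation(state):
    """Hypothetical sensor model: lifts the state into a noisy,
    camera-like observation the student is restricted to."""
    W_cam = np.tile(np.eye(STATE_DIM),
                    (1, OBS_DIM // STATE_DIM + 1))[:, :OBS_DIM]
    return state @ W_cam + 0.01 * rng.normal(size=OBS_DIM)

# Collect a rollout dataset: full states, rendered observations,
# and the teacher's actions (the imitation targets).
states = rng.normal(size=(1000, STATE_DIM))
obs = np.stack([render_observation(s) for s in states])
actions = teacher_policy(states)

# Distill: fit the student (observation -> action) by supervised
# regression, i.e. behaviour cloning of the privileged teacher.
W_student, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def student_policy(observation):
    """Vision-based student: acts only on the camera observation."""
    return observation @ W_student

# On held-out states the student should closely track the teacher.
test_state = rng.normal(size=STATE_DIM)
err = np.linalg.norm(student_policy(render_observation(test_state))
                     - teacher_policy(test_state))
```

In the paper the same loop uses a deep network for the student and DAgger-style on-policy data collection, but the structure is the one shown: the teacher exploits information the student can never observe, and only the distilled, observation-only policy is deployed on the real quadrotor.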


