Learning Transferable Policies for Monocular Reactive MAV Control

08/01/2016, by Shreyansh Daftry, et al.

The ability to transfer knowledge gained in previous tasks to new contexts is one of the most important mechanisms of human learning. Despite this, adapting autonomous behavior for reuse in partially similar settings remains an open problem in robotics research. In this paper, we take a small step in this direction and propose a generic framework for learning transferable motion policies. Our goal is to solve a learning problem in a target domain by utilizing training data from a different but related source domain. We present this in the context of autonomous MAV flight with monocular reactive control, and demonstrate the efficacy of our approach through extensive real-world flight experiments in cluttered outdoor environments.

