Task-Driven Data Augmentation for Vision-Based Robotic Control

04/13/2022
by Shubhankar Agarwal, et al.

Today's robots often interface data-driven perception and planning models with classical model-based controllers. For example, drones often use computer vision models to estimate navigation waypoints that are tracked by model predictive control (MPC). Such learned perception/planning models often produce erroneous waypoint predictions on out-of-distribution (OoD) or even adversarial visual inputs, which increase control cost. However, today's methods to train robust perception models are largely task-agnostic: they augment a dataset using random image transformations or adversarial examples targeted at the vision model in isolation. As such, they often introduce pixel perturbations that are ultimately benign for control, while missing the perturbations that are most adversarial for control. In contrast to prior work that synthesizes adversarial examples for single-step vision tasks, our key contribution is to efficiently synthesize adversarial scenarios for multi-step, model-based control. To do so, we leverage differentiable MPC methods to calculate the sensitivity of a model-based controller to errors in state estimation, which in turn guides how we synthesize adversarial inputs. We show that re-training vision models on these adversarial datasets improves control performance on OoD test scenarios by up to 28.2%. Our approach is tested on examples of robotic navigation and vision-based control of an autonomous air vehicle.
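
To make the idea concrete, here is a minimal, illustrative sketch (not the authors' released code) of task-driven adversarial augmentation. It replaces the paper's differentiable MPC layer with a hand-rolled differentiable double-integrator rollout driven by a PD tracking law, so that gradients of the downstream control cost flow back to the image pixels; a PGD-style attack then ascends that gradient instead of a perception loss. All names (WaypointNet, control_cost, task_driven_attack), the toy dynamics, and the hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class WaypointNet(nn.Module):
    """Toy vision model: maps a 3x64x64 image to a 2-D navigation waypoint."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(2),
        )

    def forward(self, img):
        return self.net(img)


def control_cost(pred_wp, true_wp, horizon=20, dt=0.1):
    """Differentiable stand-in for the MPC tracking cost: roll out a 2-D double
    integrator whose PD controller tracks the *predicted* waypoint, and measure
    quadratic deviation from the *true* waypoint plus control effort."""
    pos = torch.zeros_like(pred_wp)
    vel = torch.zeros_like(pred_wp)
    cost = torch.zeros(pred_wp.shape[0], device=pred_wp.device)
    for _ in range(horizon):
        u = 2.0 * (pred_wp - pos) - 1.0 * vel      # controller only sees the prediction
        vel = vel + dt * u
        pos = pos + dt * vel
        cost = cost + (pos - true_wp).pow(2).sum(-1) + 0.01 * u.pow(2).sum(-1)
    return cost.mean()


def task_driven_attack(model, img, true_wp, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style perturbation that ascends the gradient of the downstream
    control cost w.r.t. the pixels, rather than a perception loss in isolation."""
    adv = img.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        cost = control_cost(model(adv), true_wp)
        grad, = torch.autograd.grad(cost, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()              # increase control cost
            adv = img + (adv - img).clamp(-eps, eps)     # stay in the L-inf ball
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()


if __name__ == "__main__":
    model = WaypointNet()
    imgs = torch.rand(8, 3, 64, 64)    # placeholder images in [0, 1]
    true_wps = torch.randn(8, 2)       # ground-truth waypoints
    model(imgs)                        # initialize LazyLinear shapes
    adv_imgs = task_driven_attack(model, imgs, true_wps)
    # (adv_imgs, true_wps) would then be added to the training set and the
    # vision model re-trained alongside the original clean examples.
```

The key design choice mirroring the abstract is that the attack gradient comes from the control cost incurred when the controller tracks the erroneous prediction, measured against the true goal; pixel changes that do not affect the rollout contribute no gradient, so the synthesized perturbations are exactly those that matter for control.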


Related research

02/07/2019 - Modeling and Control of Soft Robots Using the Koopman Operator and Model Predictive Control
Controlling soft robots with precision is a challenge due in large part ...

01/07/2020 - Aggressive Perception-Aware Navigation using Deep Optical Flow Dynamics and PixelMPC
Recently, vision-based control has gained traction by leveraging the pow...

04/17/2020 - Approximate Inverse Reinforcement Learning from Vision-based Imitation Learning
In this work, we present a method for obtaining an implicit objective fu...

01/25/2022 - Tracking and Planning with Spatial World Models
We introduce a method for real-time navigation and tracking with differe...

07/13/2022 - Dynamic Selection of Perception Models for Robotic Control
Robotic perception models, such as Deep Neural Networks (DNNs), are beco...

02/26/2023 - Autonomous Intelligent Navigation for Flexible Endoscopy Using Monocular Depth Guidance and 3-D Shape Planning
Recent advancements toward perception and decision-making of flexible en...

04/26/2022 - Designing Perceptual Puzzles by Differentiating Probabilistic Programs
We design new visual illusions by finding "adversarial examples" for pri...
