RoCUS: Robot Controller Understanding via Sampling

12/25/2020
by Yilun Zhou, et al.

As robots are deployed in complex situations, engineers and end users must develop a holistic understanding of their capabilities and behaviors. Existing research focuses mainly on factors related to task completion, such as success rate, completion time, or total energy consumption. Other factors, like collision avoidance behavior, trajectory smoothness, and motion legibility, are equally or more important for safe and trustworthy deployment. While methods exist to analyze these quality factors for individual trajectories or distributions of trajectories, such statistics may be insufficient for developing a mental model of the controller's behaviors, especially uncommon ones. We present RoCUS: a Bayesian sampling-based method for finding situations that lead to trajectories exhibiting certain behaviors. By analyzing these situations and trajectories, we can gain important insights into the controller that are easily missed in standard task-completion evaluations. On a 2D navigation problem and a 7 degree-of-freedom (DoF) arm reaching problem, we analyze three controllers: a rapidly exploring random tree (RRT) planner, a dynamical system (DS) formulation, and a deep imitation learning (IL) or reinforcement learning (RL) model. We show how RoCUS can uncover insights that further our understanding of these controllers beyond task-completion aspects. The code is available at https://github.com/YilunZhou/RoCUS.
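To make the core idea concrete, the "Bayesian sampling" the abstract describes can be sketched as a Markov chain Monte Carlo loop over environment configurations, where proposals are accepted in proportion to how strongly the controller's resulting trajectory exhibits a target behavior. The sketch below is an illustrative assumption, not the paper's actual implementation: the one-dimensional environment parameter, the toy `behavior` score, and the function names are all hypothetical stand-ins (see the linked repository for the real method).

```python
import math
import random


def behavior(env):
    """Hypothetical behavior measure: imagine rolling out a controller in
    environment `env` (here a single scalar parameter in [0, 1]) and scoring
    the trajectory, e.g. its non-smoothness. As a toy stand-in, we pretend
    the behavior peaks when the environment parameter is near 0.7."""
    return math.exp(-((env - 0.7) ** 2) / 0.01)


def sample_situations(n_steps=5000, sigma=0.1, seed=0):
    """Metropolis-Hastings over environment configurations, with acceptance
    weighted by the behavior measure, so accepted samples concentrate on
    situations that elicit the target behavior."""
    rng = random.Random(seed)
    env = rng.random()  # initial environment parameter
    samples = []
    for _ in range(n_steps):
        # Gaussian random-walk proposal, clipped to the valid range
        proposal = min(1.0, max(0.0, env + rng.gauss(0.0, sigma)))
        # accept with probability min(1, behavior(proposal) / behavior(env))
        if rng.random() < behavior(proposal) / max(behavior(env), 1e-300):
            env = proposal
        samples.append(env)
    return samples
```

Inspecting the accepted samples (after discarding an initial burn-in) then reveals which regions of the environment space drive the controller toward the chosen behavior, which is the kind of analysis the abstract describes.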


