Learning Trajectory Preferences for Manipulators via Iterative Improvement

06/26/2013
by Ashesh Jain, et al.

We consider the problem of learning good trajectories for manipulation tasks. This is challenging because the criterion defining a good trajectory varies with users, tasks, and environments. In this paper, we propose a co-active online learning framework for teaching a robot the preferences of its users for object manipulation tasks. The key novelty of our approach lies in the type of feedback expected from the user: the human user does not need to demonstrate optimal trajectories as training data, but merely needs to iteratively provide trajectories that slightly improve over the trajectory currently proposed by the system. We argue that this co-active preference feedback can be more easily elicited from the user than demonstrations of optimal trajectories, which are often challenging and non-intuitive to provide on high-degree-of-freedom manipulators. Nevertheless, the theoretical regret bounds of our algorithm match the asymptotic rates of algorithms that learn from optimal trajectories. We demonstrate the generalizability of our algorithm on a variety of grocery checkout tasks, for which the preferences were influenced not only by the object being manipulated but also by the surrounding environment. [For more details and a demonstration video, visit: <http://pr.cs.cornell.edu/coactive>]

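The abstract does not spell out the learning algorithm, but the iterative-improvement loop it describes follows the general preference-perceptron pattern from coactive learning. The Python sketch below is an illustrative approximation under that assumption; the feature map, trajectory planner, and user-feedback callables (features, argmax_trajectory, user_improve) are hypothetical placeholders, not the paper's actual interfaces.

```python
import numpy as np

def coactive_learn(contexts, features, argmax_trajectory, user_improve, dim):
    """Sketch of a co-active preference learning loop (preference-perceptron style).

    Hypothetical callables (assumptions, not the paper's API):
      features(x, y)          -> length-`dim` joint feature vector for context x, trajectory y
      argmax_trajectory(x, w) -> trajectory maximizing w . features(x, y) over candidates
      user_improve(x, y)      -> a trajectory the user slightly prefers over y
    """
    w = np.zeros(dim)                      # linear utility weights over trajectory features
    for x in contexts:                     # one interaction round per task/environment
        y_hat = argmax_trajectory(x, w)    # robot proposes its current best trajectory
        y_bar = user_improve(x, y_hat)     # user returns a slightly improved trajectory
        # Perceptron-style update: shift weights toward the user-preferred trajectory
        w += features(x, y_bar) - features(x, y_hat)
    return w
```

In coactive learning analyses, even weak improvement feedback of this kind yields regret that decays at the same asymptotic rate as learning from optimal demonstrations, which is the property the abstract refers to.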