TarGF: Learning Target Gradient Field for Object Rearrangement

09/02/2022
by Mingdong Wu, et al.

Object rearrangement is the task of moving objects from an initial state to a goal state. Here, we focus on a more practical setting of object rearrangement: rearranging objects from shuffled layouts into a normative target distribution without an explicit goal specification. This setting remains challenging for AI agents, since it is hard to describe the target distribution (the goal specification) for reward engineering, or to collect expert trajectories as demonstrations; it is therefore infeasible to directly apply reinforcement learning or imitation learning algorithms to the task. This paper aims to learn a policy from only a set of examples drawn from the target distribution, rather than from a handcrafted reward function. We employ a score-matching objective to train a Target Gradient Field (TarGF), which indicates, for each object, a direction that increases the likelihood of the state under the target distribution. The TarGF can be used for object rearrangement in two ways: 1) for model-based planning, we cast the target gradient into a reference control and output actions with a distributed path planner; 2) for model-free reinforcement learning, the TarGF is used both to estimate the likelihood change as a reward and to provide suggested actions for residual policy learning. Experimental results on ball rearrangement and room rearrangement demonstrate that our method significantly outperforms state-of-the-art methods in the quality of the terminal state, the efficiency of the control process, and scalability. The code and demo videos are available on our project website.
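To make the score-matching objective concrete, the following is a minimal sketch of training a gradient field with denoising score matching on flattened 2-D object positions. The network architecture, noise scale, and data shapes here are illustrative assumptions and are simpler than the paper's actual design.

```python
# Minimal denoising score matching (DSM) sketch for a target gradient
# field over flattened object positions. All sizes and the noise scale
# sigma are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Tiny MLP mapping a state x to an estimated score d log p(x) / dx."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x):
        return self.net(x)

def dsm_loss(score_net, x, sigma=0.1):
    """DSM objective: for x_noisy = x + sigma * eps, the score of the
    Gaussian perturbation kernel at x_noisy is -(noise) / sigma^2,
    which the network is trained to regress."""
    noise = torch.randn_like(x) * sigma
    x_noisy = x + noise
    target = -noise / sigma**2
    pred = score_net(x_noisy)
    return ((pred - target) ** 2).sum(dim=-1).mean()

# Usage: fit the field on examples from the target distribution.
dim = 2 * 10                       # e.g. 10 balls, (x, y) each -- an assumption
score_net = ScoreNet(dim)
opt = torch.optim.Adam(score_net.parameters(), lr=1e-4)
examples = torch.randn(1024, dim)  # placeholder for real target examples
for _ in range(100):
    batch = examples[torch.randint(0, len(examples), (128,))]
    opt.zero_grad()
    loss = dsm_loss(score_net, batch)
    loss.backward()
    opt.step()
```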

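The two downstream uses of the TarGF described above can likewise be sketched, reusing `score_net` and `dim` from the block above: a first-order estimate of the likelihood change serving as a pseudo-reward, the gradient cast into a speed-clipped reference control, and a residual action that adds a learned correction. The first-order reward approximation, the clipping rule, and the function names are hedged illustrations, not the paper's exact formulations.

```python
# Sketch of using a learned gradient field `score_net` for rearrangement,
# under the assumptions stated above.
import torch

def pseudo_reward(score_net, state, next_state):
    """Estimate the likelihood change over a transition to first order:
    log p(s') - log p(s) ~= (s' - s) . grad_s log p(s)."""
    with torch.no_grad():
        grad = score_net(state)
    return torch.dot(next_state - state, grad).item()

def reference_velocity(score_net, state, max_speed=1.0):
    """Cast the target gradient into a reference control: move along the
    gradient, with speed clipped to an assumed actuator limit."""
    with torch.no_grad():
        v = score_net(state)
    speed = v.norm().clamp(min=1e-8)
    return v / speed * min(max_speed, speed.item())

def residual_act(policy, score_net, state):
    """Residual policy learning: final action = suggested action from
    the TarGF plus a learned residual correction from the RL policy."""
    return reference_velocity(score_net, state) + policy(state)

# Usage on placeholder states:
s = torch.randn(dim)
s_next = s + 0.05 * reference_velocity(score_net, s)
r = pseudo_reward(score_net, s, s_next)
```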