Learning to Search in Task and Motion Planning with Streams

by Mohamed Khodeir et al.

Task and motion planning (TAMP) problems in robotics typically combine symbolic planning over discrete task variables with motion optimization over continuous state and action variables, producing trajectories that satisfy the logical constraints imposed on the task variables. Symbolic planning can scale exponentially with the number of task variables, so recent approaches such as PDDLStream focus on optimistic planning with an incrementally growing set of objects and facts until a feasible trajectory is found. However, this set is expanded exhaustively and uniformly in a breadth-first manner, regardless of the geometric structure of the problem at hand, which makes long-horizon reasoning with large numbers of objects prohibitively time-consuming. To address this issue, we propose a geometrically informed symbolic planner that expands the set of objects and facts in a best-first manner, prioritized by a Graph Neural Network (GNN)-based score learned from prior search computations. We evaluate our approach on a diverse set of problems and demonstrate an improved ability to plan in large or difficult scenarios. We also apply our algorithm to a 7-DOF robotic arm in several block-stacking manipulation tasks.
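The core algorithmic change described above, replacing breadth-first expansion of the optimistic fact set with best-first expansion under a learned priority, can be illustrated with a minimal sketch. The names below (`best_first_expand`, `score`, `successors`, `goal_test`) are hypothetical placeholders, not the paper's API: `score` stands in for the GNN-based priority learned from prior searches, and `successors` stands in for the stream-based generation of new objects and facts.

```python
import heapq


def best_first_expand(initial_facts, successors, score, goal_test, max_expansions=1000):
    """Best-first expansion of a growing fact set.

    Instead of expanding all facts level-by-level (breadth-first, as in
    baseline PDDLStream), pop the lowest-scoring fact first. `score` is a
    stand-in for the learned GNN priority; lower means more promising.
    """
    # Priority queue of (score, tie-breaker, fact); the tie-breaker keeps
    # heap ordering well-defined when scores are equal.
    frontier = [(score(f), i, f) for i, f in enumerate(initial_facts)]
    heapq.heapify(frontier)
    known = set(initial_facts)
    counter = len(frontier)

    while frontier and counter < max_expansions:
        _, _, fact = heapq.heappop(frontier)
        if goal_test(fact):
            return fact
        # Expand: generate new optimistic facts and enqueue unseen ones.
        for nxt in successors(fact):
            if nxt not in known:
                known.add(nxt)
                heapq.heappush(frontier, (score(nxt), counter, nxt))
                counter += 1
    return None
```

As a toy stand-in for a planning domain, integers can play the role of facts: `successors(n) = [n + 1, n * 2]`, with the "learned" score being the distance to a goal value. The queue then steers expansion toward the goal instead of enumerating all facts uniformly, which is the intuition behind the geometrically informed prioritization.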




