Learning First-Order Symbolic Planning Representations That Are Grounded

Two main approaches have been developed for learning first-order planning (action) models from unstructured data: combinatorial approaches that yield crisp action schemas from the structure of the state space, and deep learning approaches that produce action schemas from states represented by images. A benefit of the former approach is that the learned action schemas are similar to those that can be written by hand; a benefit of the latter is that the learned representations (predicates) are grounded on the images, so that new instances can be given in terms of images. In this work, we develop a new formulation for learning crisp first-order planning models that are grounded on parsed images, a step toward combining the benefits of the two approaches. Parsed images are assumed to be given in a simple O2D language (objects in 2D) that involves a small number of unary and binary predicates like "left", "above", "shape", etc. After learning, new planning instances can be given in terms of pairs of parsed images, one for the initial situation and the other for the goal. Learning and planning experiments are reported for several domains including Blocks, Sokoban, IPC Grid, and Hanoi.
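To make the input format concrete, the sketch below shows how a parsed image might be encoded as a set of ground O2D atoms, and how an instance is formed from an initial/goal pair of such images. The names (`atom`, `holds`, the specific scene) are illustrative assumptions, not the paper's actual implementation; only the predicate vocabulary ("left", "above", "shape") and the image-pair instance format come from the abstract.

```python
# Hypothetical encoding of O2D parsed images as sets of ground atoms.
# An atom is a (predicate, args) tuple over object names.

def atom(pred, *args):
    return (pred, args)

# Illustrative parse of a Blocks-like scene: block a sits on b, b is left of c.
initial_image = {
    atom("shape", "a", "square"),
    atom("shape", "b", "square"),
    atom("shape", "c", "square"),
    atom("above", "a", "b"),
    atom("left", "b", "c"),
}

# The goal situation is a second parsed image: c stacked on a.
goal_image = {
    atom("above", "c", "a"),
}

# As described in the abstract, a new planning instance is given by the
# pair of parsed images (initial situation, goal situation).
instance = {"init": initial_image, "goal": goal_image}

def holds(image, pred, *args):
    """Check whether a ground atom is true in a parsed image."""
    return (pred, args) in image
```

Under this encoding, checking a spatial relation is a set-membership test, e.g. `holds(initial_image, "above", "a", "b")` is true while `holds(initial_image, "above", "b", "a")` is false.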
