Composable Planning with Attributes

by Amy Zhang, et al.

The tasks that an agent will need to solve are often not known during training. However, if the agent knows which properties of the environment are important, then, after learning how its actions affect those properties, it may be able to use this knowledge to solve complex tasks without training specifically for them. Towards this end, we consider a setup in which an environment is augmented with a set of user-defined attributes that parameterize the features of interest. We propose a method that learns a policy for transitioning between "nearby" sets of attributes, and maintains a graph of possible transitions. Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to obtain a high-level plan, then uses its low-level policy to execute the plan. We show in 3D block stacking, grid-world games, and StarCraft that our model is able to generalize to longer, more complex tasks at test time by composing simpler learned policies.
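The planning loop described above can be illustrated with a minimal sketch: a graph search over attribute sets to produce the high-level plan, whose steps would then be handed to a learned low-level policy. All names here (the toy block-stacking attributes, `plan_attribute_path`, the graph layout) are hypothetical illustrations, not the paper's implementation.

```python
from collections import deque

def plan_attribute_path(graph, start_attrs, goal_attrs):
    """Breadth-first search over an attribute-transition graph.

    `graph` maps a frozenset of attributes to the neighboring attribute
    sets that the low-level policy is believed able to reach in one step.
    Returns the list of attribute sets from start to goal, or None.
    """
    start, goal = frozenset(start_attrs), frozenset(goal_attrs)
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy two-block stacking world (hypothetical attribute names).
graph = {
    frozenset({"A_on_table", "B_on_table"}): [frozenset({"A_on_B"})],
    frozenset({"A_on_B"}): [frozenset({"A_on_table", "B_on_table"})],
}
path = plan_attribute_path(graph, {"A_on_table", "B_on_table"}, {"A_on_B"})
# Each consecutive pair (path[i], path[i+1]) would then be passed as a
# subgoal to the low-level policy for execution.
```

In the full method, both the graph edges and the low-level policy are learned from experience; the search itself is the simple part, which is what lets longer tasks be composed from short learned transitions.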




