Option Discovery for Autonomous Generation of Symbolic Knowledge

06/03/2022
by   Gabriele Sartor, et al.

In this work, we present an empirical study demonstrating the possibility of developing an artificial agent that is capable of autonomously exploring an experimental scenario. During exploration, the agent discovers and learns interesting options that allow it to interact with the environment without any pre-assigned goal, and then abstracts and re-uses the acquired knowledge to solve tasks assigned ex-post. We test the system in the Treasure Game domain described in the recent literature and empirically demonstrate that the discovered options can be abstracted into a probabilistic symbolic planning model (expressed in the PPDDL language), which allows the agent to generate symbolic plans that achieve extrinsic goals.
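As a rough illustration of the kind of abstraction the abstract describes (not taken from the paper itself), a discovered option such as opening a locked door could be lifted into a probabilistic PPDDL action schema along the following lines; the domain name, predicates, and probabilities here are hypothetical placeholders:

    (define (domain treasure-game-sketch)
      (:requirements :probabilistic-effects)
      (:predicates (agent-at-door) (has-key) (door-open))
      ;; Hypothetical abstraction of a learned "open door" option:
      ;; the precondition approximates the option's initiation set and
      ;; the probabilistic effect summarises its observed outcomes.
      ;; With the remaining 0.05 probability the state is left unchanged.
      (:action open-door
        :parameters ()
        :precondition (and (agent-at-door) (has-key))
        :effect (probabilistic
                  0.95 (and (door-open) (not (has-key))))))

Given schemas of this kind, an off-the-shelf probabilistic planner can compose the learned options into a symbolic plan for an extrinsic goal, such as reaching the treasure.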
