Fitted Q-Learning for Relational Domains

06/10/2020
by Srijita Das, et al.

We consider the problem of Approximate Dynamic Programming in relational domains. Inspired by the success of fitted Q-learning methods in propositional settings, we develop the first relational fitted Q-learning algorithms by representing the value function and Bellman residuals relationally. When fitting the Q-functions, we show how the two steps of the Bellman operator, application and projection, can both be performed using a gradient-boosting technique. Our proposed framework performs reasonably well on standard domains without using domain models and with fewer training trajectories.
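To make the two Bellman steps concrete, the sketch below shows a minimal propositional version of fitted Q-iteration using scikit-learn's GradientBoostingRegressor as the function approximator. This is an illustrative assumption, not the paper's method: the paper works with relational representations and functional-gradient-boosted relational regression trees, while the stand-in here only shows the loop structure, where the operator application forms bootstrapped targets and the projection fits a fresh boosted regressor to them. The function name and data layout are hypothetical.

```python
# Minimal fitted Q-iteration sketch (propositional stand-in, not the relational method).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fitted_q_iteration(transitions, n_actions, n_iters=10, gamma=0.99):
    """transitions: list of (state_vector, action_index, reward, next_state_vector, done)."""
    states      = np.array([s  for s, a, r, s2, d in transitions])
    actions     = np.array([a  for s, a, r, s2, d in transitions])
    rewards     = np.array([r  for s, a, r, s2, d in transitions])
    next_states = np.array([s2 for s, a, r, s2, d in transitions])
    dones       = np.array([d  for s, a, r, s2, d in transitions], dtype=float)

    # Features are (state, one-hot action); Q is initialized from immediate rewards.
    def featurize(S, A):
        return np.hstack([S, np.eye(n_actions)[A]])

    X = featurize(states, actions)
    model = GradientBoostingRegressor().fit(X, rewards)

    for _ in range(n_iters):
        # Bellman operator application: bootstrap targets from the current Q estimate.
        next_qs = np.column_stack([
            model.predict(featurize(next_states, np.full(len(next_states), a)))
            for a in range(n_actions)
        ])
        targets = rewards + gamma * (1.0 - dones) * next_qs.max(axis=1)
        # Projection step: fit a fresh boosted regressor to the new targets.
        model = GradientBoostingRegressor().fit(X, targets)
    return model
```

In the relational setting described in the abstract, the regressor learned at the projection step would instead be a set of relational regression trees grown by functional gradient boosting over first-order features; the alternation between applying the Bellman operator and projecting back into the hypothesis space is the same.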


