Fitted Q-Learning for Relational Domains

We consider the problem of Approximate Dynamic Programming in relational domains. Inspired by the success of fitted Q-learning methods in propositional settings, we develop the first relational fitted Q-learning algorithms by representing the value function and the Bellman residuals. When fitting the Q-functions, we show how the two steps of the Bellman operator, the application and projection steps, can be performed using a gradient-boosting technique. Our proposed framework performs reasonably well on standard domains without using domain models and with fewer training trajectories.

1 Introduction

Value function approximation in Reinforcement Learning (RL) has long been viewed through the lens of feature discovery [29]. A classical family of approaches to this problem, based on Approximate Dynamic Programming (ADP), is fitted value iteration [5, 12, 32], a batch-mode approximation scheme that employs function approximators in each iteration to represent the value estimates. Another popular class of methods that address this problem is Bellman error based methods [24, 19, 29]. The key intuition is that the Bellman error has a positive dot product with the true value function and, thus, adding basis functions based upon the Bellman error can lead to a good approximation of the true value function.

Here, we consider relational domains that are typically described using parameterized state-action spaces. While it is conceivable to instantiate each object and construct a grounded MDP, given the variable number of objects, this can yield prohibitively large state-action spaces that render solving the grounded MDP intractable in practice. On the other hand, as noted by Tadepalli et al. [36], typical function approximators do not generalize well when applied to relational domains. Consequently, these worlds are modeled as Relational Markov Decision Processes (RMDPs), and methods that directly learn and reason at the first-order level have been developed in the broad area of Relational Reinforcement Learning (RRL) [30, 33, 39, 36, 41]. While specific methodologies differ, most of these methods operate at a symbolic level and define mathematical operations on these symbols to learn the values of the parameterized states (which can essentially be viewed as "groups" of states in classical RL). Most of these methods, however, are exact. A notable exception is the work of Guestrin et al., who developed approximate learning for RMDPs by representing and learning value functions in a stage-wise manner.

Inspired by the success of approximate value function learning in propositional domains, we propose the first set of approximate Q-value function learning methods for relational domains. We take two specific approaches: the first represents the lifted Q-value functions and the second represents the Bellman residuals, both using a set of relational regression trees (RRTs) [3]. A key aspect of our approach is that it is model-free, whereas most RMDP algorithms assume a domain model. The only exception is the work of Fern et al. [13], who learn directly in policy space. Our work differs from theirs in that we directly learn value functions, and eventually policies from them, and adapt the recently successful relational functional gradient boosting (RFGB) [27], which has been shown to outperform learning relational rules one by one.

Our work can be seen as learning functions over relational domains that permit the efficient combination of these functions without explicitly considering the entire space of models. Wu and Givan [41] approximate the value function using beam search over relational features that are iteratively learned from sampled trajectories. Our approach instead uses gradient boosting, where we learn sets of conjunctive features as paths from the root to a leaf of each RRT, and lifts fitted Q-learning in propositional domains [12, 37] to relational domains. Indeed, if we knew that the target Q-function belongs to a specific class and used this information to model the value function, there would be no need for trees and boosting. However, in relational domains this information is almost never available: without difficult feature engineering, the shape of the value function is almost always unknown, which is one reason why some work operates directly at the policy level [13, 20].

More precisely, we make the following key contributions: (1) We develop a unified framework for handling relational RL problems by exploring the use of relational trees to model the Q-values and Bellman residuals. (2) We outline the connections between the boosted approach and classical RL methods for value function approximation (Bellman error, aggregation, and tile coding methods), and show that our method can be viewed as encompassing these different approximation techniques. (3) Finally, we demonstrate empirically the effectiveness and the generalization ability of our proposed approach, whose results are on par with other RMDP/planning methods without using domain models and with fewer training trajectories. Without extensive feature engineering, it is difficult, if not impossible, to apply standard methods to relational domains in which there is a varying number of objects and relations.

The rest of the paper is organized as follows. After introducing the necessary background on RL, value function approximation and relational learning in the next section, we outline the algorithm and analyze its properties. Before concluding and presenting future work, we present the empirical evaluation of the proposed approach on three classical Relational RL domains.

Figure 1: Proposed framework for GBQL

2 Background

Markov Decision Processes (MDPs): An MDP is described by a tuple ⟨S, A, P, R⟩, where S is the state space, A is the action space, R: S × A → ℝ is the reward function, and P(s' | s, a) is the probability of transitioning to state s' after taking action a in state s. For infinite-horizon problems, a discount factor γ ∈ [0, 1) is specified to trade off current and future reward. In state s, after taking action a, the agent receives a reward R(s, a). A policy π is a mapping from states to a probability distribution over the action space A. The optimal Q-value (expected reward) of a particular state-action pair is given by the Bellman optimality equation:

Q*(s, a) = R(s, a) + γ Σ_{s'} P(s' | s, a) max_{a'} Q*(s', a')   (1)

Bellman Error: For a fixed policy π, the true Q-function Q^π can be complex and might not lie in the subspace of value functions representable by the hypothesis class of the function approximator. A projection operator Π is used to project the true value function onto this representable subspace. The Bellman error is the expected difference between the value estimates of successor states and of the current state:

E_{s'}[ R(s, a) + γ max_{a'} Q̂(s', a') ] − Q̂(s, a)   (2)

The estimate Q̂, which is an approximation of the true Q-function, is the projected parameterized Q-function in the representable function space. The empirical Bellman operator is the first term in the above equation, with sampled next states rather than an explicit expectation; the empirical Bellman error is then the difference between the sampled backup and Q̂(s, a). Applying the Bellman operator to a Q-function in a representable subspace may result in a Q-function in a different subspace, necessitating a projection back to the representable subspace using the projection operator Π.
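
For concreteness, here is a minimal propositional sketch of the sampled (empirical) Bellman backup and the corresponding empirical Bellman error; the function and variable names (q_fn, transition, actions) are illustrative assumptions rather than the paper's implementation.

def empirical_bellman_target(q_fn, transition, actions, gamma=0.99):
    """One-sample Bellman backup: r + gamma * max_a' Q(s', a')."""
    s, a, r, s_next = transition
    return r + gamma * max(q_fn(s_next, a2) for a2 in actions)

def empirical_bellman_error(q_fn, transition, actions, gamma=0.99):
    """Sampled backup minus the current estimate Q(s, a)."""
    s, a, r, s_next = transition
    return empirical_bellman_target(q_fn, transition, actions, gamma) - q_fn(s, a)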

Approximate Dynamic Programming (ADP): For large domains, approximate methods can be used to estimate the Q-function so that the function need not be stored for every state-action pair. Popular approaches for Q-function (or value function) approximation in propositional domains, such as Least Squares Temporal Difference (LSTD) learning [6], Least Squares Policy Evaluation (LSPE) [42], and Least Squares Policy Iteration (LSPI) [22], approximate the Q-function as a linear function of a parameter vector, i.e., Q̂(s, a) = θᵀ φ(s, a), where θ is the parameter vector of the Q-function and φ is the feature vector representing the state space. These features are also called basis functions and are ideally linearly independent of each other. Our work can be seen as extending this line of work by approximating the Q-functions and/or the Bellman residuals using a non-parametric relational representation.
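
As an illustration of the linear case that the relational, non-parametric representation later replaces, here is a small sketch with a hand-coded (hypothetical) feature map; the basis functions and parameters are purely illustrative.

import numpy as np

def linear_q(theta, phi, state, action):
    """Linear value approximation Q(s, a) = theta^T phi(s, a)."""
    return float(theta @ phi(state, action))

# Illustrative basis functions for a toy one-dimensional problem.
def phi(state, action):
    return np.array([1.0, state, action, state * action])

theta = np.zeros(4)          # parameters that LSTD/LSPI-style methods would fit
print(linear_q(theta, phi, state=3, action=1))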

The ADP framework is a collection of RL methods in which values or policies are approximated using information from samples. LSTD, for example, uses a least-squares approximation to the temporal difference learning paradigm. The main component of ADP approaches such as LSTD is the computation of weights for the approximation of the value function. In relational domains, groups of states are typically indistinguishable w.r.t. the values that they take, which allows parameter sharing. For example, in a blocks world, many configurations could have similar ground state values because they exhibit similar properties, such as similar towers. Consequently, it is possible to learn a lifted representation, such as the one by Wu and Givan [41], that can reason over sets of feature values.

Fitted Q-learning: Fitted Q-learning [12] is a form of ADP that approximates the Q-function by breaking the problem down into a series of regression tasks. Each iteration of fitted Q-iteration essentially solves a supervised regression problem: the empirical Bellman operator is applied to the current Q-function approximation, and a regression model then finds the best parameters to minimize the regression error. The Q-functions learned by such regression models lie in the subspace of functions representable by the regression model and hence are an approximation of the true function.
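
The following is a minimal propositional sketch of this loop, using scikit-learn's GradientBoostingRegressor as the per-iteration regressor; the data layout and names are assumptions made for illustration, not the relational algorithm introduced later.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fitted_q_iteration(transitions, actions, n_iters=10, gamma=0.99):
    """Propositional fitted Q-iteration: each iteration is a regression task."""
    model = None

    def q(s, a):
        return 0.0 if model is None else float(model.predict([np.append(s, a)])[0])

    for _ in range(n_iters):
        X = [np.append(s, a) for (s, a, r, s2) in transitions]
        # Empirical Bellman backups provide the regression targets.
        y = [r + gamma * max(q(s2, a2) for a2 in actions)
             for (s, a, r, s2) in transitions]
        model = GradientBoostingRegressor().fit(X, y)
    return model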

We employ functional gradient boosting [14] in relational domains [16, 7] to capture the Bellman error for every Q-learning iteration and obtain lifted basis functions in a non-parametric fashion. In domains that require probabilistic policies, a Gibbs distribution over the values can be used to model probabilistic action choices. While it is conceivable to learn policies directly, as in the policy-gradient approach of Kersting and Driessens [20] or the imitation learning of Natarajan et al. [26], our methods approximate the Q-values or the Bellman error using gradient boosting.

Tree-based RRL: Our work is inspired by Džeroski et al. [11] on learning a single regression tree for RRL. This work was later extended by others [10, 8, 9, 15] with respect to building incremental RRTs, providing guidance to the learner, and regression algorithms using kernels and Gaussian processes. However, our work differs in two key aspects: (1) we are more sample efficient in that we learn only from a small set of sampled trajectories in each iteration, while their methods use all the trajectories when learning the Q-function, and (2) we adapt multiple stage-wise grown, boosted RRTs for Q-function approximation in two different ways, as discussed later.

3 Gradient Boosted Fitted Q-learning

Set up: We consider learning to act in relational, discrete, noisy domains that consist of multiple interacting objects. We adopt the RMDP definition of Fern et al. [13], where the state and action spaces of the original MDP are represented by a set of objects, a set of predicates, and a set of action types. As in their definition, we use facts to denote a predicate grounded with a specific instantiation; for instance, a ground atom such as truckIn(t1, paris) is a grounding of the predicate truckIn. A set of ground facts then defines a state of the MDP, drawn from the set of all possible facts in the domain. We make a closed-world assumption, in that facts that are not observed are assumed to be false. Similarly, an action is a grounded action type; for example, a ground action such as unload(b1, t1, paris) is obtained by grounding the action type unload with specific objects.

Given this problem definition, the goal is to learn the Q-function. However, learning occurs at a symbolic level: we learn Q-values over the objects (or partial instantiations of the objects) and the action types. Consequently, the learned representation is a lifted one, while the training examples/observations are at the propositional level. To represent these lifted functions, as mentioned earlier, we use RFGB [27, 28, 26] and a single tree, TILDE [3], for approximating the Q-value functions.
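
To make the representation concrete, here is a small sketch of a relational state as a set of ground facts under the closed-world assumption; the predicates and constants are illustrative only.

# A relational state as a set of ground facts (closed-world assumption):
# anything not listed is false. Predicate and object names are illustrative.
state = frozenset({
    ("truckIn", "t1", "paris"),
    ("boxOn", "b1", "t1"),
    ("destination", "paris"),
})

def holds(state, fact):
    """Closed-world check: a fact is true iff it appears in the state."""
    return fact in state

action = ("unload", "b1", "t1", "paris")          # a grounded action type
print(holds(state, ("truckIn", "t1", "paris")))   # True
print(holds(state, ("truckIn", "t1", "london")))  # False (closed world)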

The key idea in RFGB is that, as in its propositional counterpart, a functional representation is used to model the Q-values. The gradients are obtained w.r.t. this function and are approximated by regression functions at each step of the gradient computation. In relational domains, these regression functions are relational regression trees (RRTs) [3]. In this work, we consider three types of Q-value function approximation: a single relational tree and two variants of boosted relational trees.

Boosted Regression Trees for Gradients:

During learning, in each iteration of RFGB, gradients are computed for each training example w.r.t. the loss function, and an RRT is learned at each gradient step to fit these gradients from the previous iterations. The intuition is that the direction of these approximately learned gradients is similar to that of the original gradients. Given a set of positive examples and potentially some negative examples (in some cases, these are computed using the closed-world assumption), the goal is to fit a regression function represented as a set of relational regression trees. While this idea had previously been employed in the context of relational classification [28, 21, 31], the use of RFGB for value function approximation requires extending it to regression.

In the context of regression, a typical loss function is the squared error. The squared error with respect to a single example i is (1/2)(y_i − f(x_i))², where y_i is the true (regression) value and f(x_i) is the value estimated by the regression function f. The (functional) gradient of an example at the end of m−1 iterations is Δ_m(i) = y_i − f_{m−1}(x_i), where f_{m−1} denotes the sum of all the gradients computed through the first m−1 iterations of relational functional gradient boosting. Since we are in a functional space, the gradients are calculated with respect to the function and not the parameters. The final regression function after M rounds of boosting is f_M = f_0 + Δ_1 + ⋯ + Δ_M.
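
A propositional analogue of this boosting loop is sketched below, with scikit-learn's DecisionTreeRegressor standing in for the relational regression tree learner (an assumption made purely for illustration).

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_regression(X, y, n_rounds=10, max_depth=3):
    """Functional-gradient boosting with squared loss: each tree fits the
    residuals (point-wise gradients) y - F_{m-1}(x) of the current model."""
    y = np.asarray(y, dtype=float)
    trees, F = [], np.zeros(len(y))
    for _ in range(n_rounds):
        residuals = y - F                      # negative gradient of (1/2)(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        trees.append(tree)
        F += tree.predict(X)                   # additive update: F_m = F_{m-1} + Delta_m
    return trees

def boosted_predict(trees, X):
    """Final regression function: the sum of all fitted gradient trees."""
    return sum(tree.predict(X) for tree in trees)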

In our case, y_i is the Q-value of the current state-action pair (s_i, a_i) after applying the empirical Bellman operator, and f(x_i) is the Q-value of the current state-action pair as approximated by the model. (From here on, we use n to denote the Q-learning iterations, i to denote the example index over state-action pairs, and m to denote the boosting iterations.) Here x_i corresponds to all the relevant features of the current state s_i. Recall that in relational domains, for the same set of predicates, different states can have different numbers of instantiated (grounded) features; for instance, a blocks-world predicate over blocks yields a different number of groundings depending on the number of blocks in the state. RFGB employs RRTs to lift the representation so that it is independent of the number of ground instances. To account for this, we redefine the loss function as

L = Σ_{(s_i, a_i) ∈ T} ( y_i − Q(s_i, a_i) )²   (3)

where y_i is the point-wise regression value to which the new model is fitted and T refers to the set of sampled training trajectories.

1: function GBQL(N, M, p, Sim)
2:     Set Q_0 := 0
3:     for n = 1 to N do                              ▷ N Q-learning iterations
4:         D := ∅                                      ▷ stores ⟨(s, a), y⟩ tuples
5:         for j = 1 to p do                           ▷ generate a mini batch of p trajectories
6:             Choose s_0 from the initial state distribution
7:             Generate trajectory T_j starting from s_0 by accessing the simulator Sim
8:         end for
9:         for each tuple (s_t, a_t, r_t, s_{t+1}) in the sampled trajectories do
10:            y(s_t, a_t) := α [ r_t + γ max_{a'} Q_{n−1}(s_{t+1}, a') ] + (1 − α) Q_{n−1}(s_t, a_t)
11:            D := D ∪ { ⟨(s_t, a_t), y(s_t, a_t)⟩ }
12:        end for
13:        Q_n := TreeBoost(D, M)
14:    end for
15:    return Q_N
16: end function
Algorithm 1 GBQL learning

Thus, the goal is to minimize the difference between the regression value of the current state-action pair (the empirical value) and the current Q-value according to the model. For every lifted state-action pair (s_i, a_i), the value y_i referred to in (3) is obtained by applying the Bellman operator,

y_i = α [ r_i + γ max_{a'} Q^{n−1}(s'_i, a') ] + (1 − α) Q^{n−1}(s_i, a_i)   (4)

where α is a learning rate, Q^{n−1} is the Q-function returned by RFGB through the (n−1)-th Q-iteration, and s'_i refers to the successor state of s_i in the sample. The learning rate is set close to 1 so that the previous estimates of the current state-action values make only a small contribution to the new estimate. At each boosting iteration m, every training example i is assigned a gradient Δ_m(i), and a new RRT is trained to fit these gradients. The gradient is computed as

Δ_m(i) = y_i − Q^n_{m−1}(s_i, a_i)   (5)

where

Q^n_{m−1} = Q^n_0 + Δ_1 + ⋯ + Δ_{m−1}   (6)

is the Q-function after n iterations of Q-learning and m−1 iterations of boosting. The final regression value of a lifted state-action pair is the sum of the regression values after all rounds of boosting:

Q^n_M(s, a) = Q^n_0(s, a) + Σ_{m=1}^{M} Δ_m(s, a)   (7)

The final Q-function after N Q-iterations is Q^N_M, and it defines a value for each lifted state-action pair as a sum of gradients.
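
A small sketch of the target computation in (4), assuming an evaluator q_prev(s, a) for the previous iteration's boosted model; the default values of alpha and gamma are illustrative, not the settings used in the experiments.

def bellman_target(q_prev, transition, actions, alpha=0.9, gamma=0.99):
    """Soft Bellman backup of Eq. (4): a convex combination of the sampled
    one-step backup and the previous estimate, with alpha close to 1."""
    s, a, r, s_next = transition
    backup = r + gamma * max(q_prev(s_next, a2) for a2 in actions)
    return alpha * backup + (1.0 - alpha) * q_prev(s, a)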

1: function TreeBoost(D, M)
2:     Set F_0 := LearnRRT(D)                 ▷ learn a relational regression tree to fit the initial Q-values
3:     for m = 1 to M do                      ▷ iterate through the gradient steps
4:         Δ_m := GenGradients(D, F_{m−1})    ▷ compute gradients for each lifted action type
5:         T_m := LearnRRT(D, Δ_m)            ▷ learn a relational regression tree to fit the gradients
6:         F_m := F_{m−1} + T_m
7:     end for
8:     return F_M
9: end function
Algorithm 2 TreeBoost learning
Figure 2: An example relational regression tree representing the structure of the Q-function for the Unload action in the Logistics domain.

An example of an RRT learned in the logistics domain is shown in Fig. 2, where the inner nodes represent first-order conjunctions (tests). The left branch is followed when the test is satisfied and the right branch when it does not hold. The root node checks whether a truck is in the destination city in the current state; states satisfying this test have the highest Q-value. The leaf values denote the Q-value of the state-action pairs that satisfy the corresponding path. Note that all the states and actions in this figure are lifted, i.e., parameterized. In the general case, they could be partially grounded, for instance TruckIn(A,Paris,E). RFGB can learn at the fully lifted level, at a partially instantiated level, or at the ground level, based on the language bias provided during learning.
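
To illustrate how such a tree is evaluated, here is a minimal sketch that hard-codes one possible tree for an unload action over a fact-set state representation like the sketch earlier in this section; the tests and leaf values are invented for illustration and are not the learned model.

def exists(state, predicate, *args):
    """Check whether some ground fact matches the (possibly wildcarded) arguments."""
    return any(f[0] == predicate and
               all(x == "_" or x == y for x, y in zip(args, f[1:]))
               for f in state)

def q_unload(state, box, truck, city):
    # Inner nodes are first-order tests; the left branch means the test holds.
    if exists(state, "truckIn", truck, city) and exists(state, "destination", city):
        if exists(state, "boxOn", box, truck):
            return 9.2     # leaf: truck at destination and box on truck (illustrative value)
        return 1.5         # leaf: truck at destination, box elsewhere (illustrative value)
    return 0.3             # leaf: truck not at the destination city (illustrative value)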

With RRTs for computing gradients at hand, we now present the algorithm for learning approximate Q-functions of relational MDPs using RFGB. Specifically, Algorithms 1 and 2 outline our proposed approach, called Relational Gradient-Boosted Q-Learning (GBQL). GBQL takes as input the number of Q-learning iterations N, the number of iterations M for basis function computation, the number of training trajectories p for each iteration of Q-learning, and access to the domain simulator Sim (a setting similar to that of Parr et al. [29]). At each iteration, a set of trajectories is sampled from the simulator. The relational simulator takes as input the current state and action at time step t and returns the next relational state; for every such pair, it also returns a reward, which is designed according to the problem being solved. At every Q-learning iteration, mini batches of trajectories are sampled from the simulator. A relational trajectory is a sequence of relational facts and actions taken over a sequence of time steps until a fixed, predefined goal is reached, i.e., ⟨s_0, a_0, s_1, a_1, …, s_g⟩, where s_g is the goal state. The initial state s_0 is chosen from an initial state distribution. For every tuple (s_t, a_t, r_t, s_{t+1}) in a trajectory (all such tuples in each iteration are appended to a set D), the value for the current state-action pair is updated using the Bellman operator (Alg. 1, line 10). Next, every state-action pair and its corresponding Q-value after applying the Bellman operator are added to the training set D for a call to the TreeBoost algorithm (lines 11-13).

For every tuple in the data set D, the goal is to learn a set of RRTs that approximate the Q-values compactly. In our case, this corresponds to finding the combinations of (relational) features that best approximate the value of the current state-action pair. Note that in each iteration of the TreeBoost procedure (indexed by m), a single regression tree is learned and added to the model, starting from the initial tree (Alg. 2, line 2). The LearnRRT function takes the examples and the initial Q-values as input and learns a single RRT; an RRT is learned over relational features by scoring each candidate test node (we use weighted variance as the scoring function) and choosing the best node to split on. In the next boosting iteration, the regression value is updated with the difference between the original value and the value returned by the current set of trees, as shown in (5) (the call to GenGradients in line 4 of Alg. 2). The key point is that TreeBoost essentially performs a series of gradient steps to best fit the Q-value of each state-action pair according to the sampled training set. Each tree corresponds to a single gradient step, and the final value is the sum of the values from the different regression trees, as shown in (7). Hence, each call to TreeBoost returns a set of regression trees for every lifted action type.

Now, in the main GBQL procedure, the set of trees (the Q-function) learned in the previous iteration is used to update the Q-value of each state-action pair at the current iteration when applying the Bellman operator. To generate the set of trajectories for the next iteration from the simulator, we follow an ε-greedy policy: for a given state, we choose a random action with probability ε and the best action with respect to the current Q-function estimate with probability 1 − ε. Figure 1 shows how the simulator is integrated into the GBQL framework.
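
A brief sketch of this ε-greedy choice over grounded candidate actions, again with illustrative names and an illustrative default for epsilon.

import random

def epsilon_greedy(q_fn, state, candidate_actions, epsilon=0.1):
    """Random grounded action with probability epsilon, otherwise the action
    maximizing the current Q-function estimate."""
    if random.random() < epsilon:
        return random.choice(candidate_actions)
    return max(candidate_actions, key=lambda a: q_fn(state, a))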

The set of regression trees learned at any iteration forms the (relational) basis functions for that iteration. Alternatively, one can forgo boosting and directly fit a single RRT to the Q-values. These constitute the two variants of fitted Q-learning shown in Fig. 4.

Figure 3: Boosted RBFQ Q-function representations for fitting Bellman error.
Figure 4: Boosted GBQL and non-boosted (RRT) Q-function representations for fitting Bellman error.

Single Regression Tree for Bellman Residuals: We adapt the work of Tosatto et al. [37] to relational domains and present an alternate method of boosting. Fig. 3 presents an overview of this approach, which we call Relational Boosted Fitted-Q (RBFQ). We explicitly calculate the Bellman residual from samples in every Q-learning iteration as

δ^n(s_i, a_i) = r_i + γ max_{a'} Q^{n−1}(s'_i, a') − Q^{n−1}(s_i, a_i)   (8)

where Q^{n−1}(s_i, a_i) is the Q-value of (s_i, a_i) as returned by RBFQ through the (n−1)-th Q-iteration and is defined additively as

Q^{n−1}(s, a) = Σ_{j=1}^{n−1} δ̂^j(s, a)   (9)

with δ̂^j denoting the RRT fitted to the Bellman residuals in the j-th Q-iteration. The residual δ^n in Equation 8 is equivalent to the Bellman error estimate defined in Equation 2, except that instead of an expectation over all possible successor states drawn from the underlying transition function, it is calculated directly from the sample. We compute the Bellman residual for each state-action pair in the trajectory set and fit a single weak RRT. The representation learned for the Bellman residual is a lifted one: every inner node of the RRT contains a conjunction of first-order logic predicates, and each leaf node contains the approximated value for the set of state-action pairs that satisfies the test conditions along that branch. For Q-learning iteration n, the learned Q-function is thus additive over all the Q-functions approximated so far, as in Equation 9.
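
A propositional sketch of this scheme, with DecisionTreeRegressor again standing in for the weak relational tree learner; the names and hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def rbfq(transitions, actions, n_q_iters=10, gamma=0.99, max_depth=2):
    """Each Q-iteration fits one weak tree to the sampled Bellman residuals;
    the Q-function is the additive model over all trees learned so far."""
    trees = []

    def q(s, a):
        x = np.append(s, a).reshape(1, -1)
        return sum(float(t.predict(x)[0]) for t in trees)

    for _ in range(n_q_iters):
        X, residuals = [], []
        for (s, a, r, s2) in transitions:
            X.append(np.append(s, a))
            residuals.append(r + gamma * max(q(s2, a2) for a2 in actions) - q(s, a))
        trees.append(DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals))
    return trees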

Note that while the RBFQ algorithm is similar in spirit to GBQL, there are several key differences. In GBQL, we apply the Bellman operator and learn a new set of RRTs in every Q-iteration; here, we calculate the Bellman residual δ^n and learn a single RRT per iteration. The Bellman residuals play a role analogous to the gradients in GBQL; however, these residual trees are learned across different Q-learning iterations, unlike in GBQL. Another key difference is that in GBQL, in every Q-learning iteration, the Bellman operator is applied once and the gradients are then fitted by a series of gradient-boosted RRTs within the same iteration, whereas in RBFQ a single RRT is learned for the Bellman residual in every iteration, and in subsequent Q-learning iterations an additive model over all the previously learned Q-functions is used to calculate the residual. We hypothesize that directly fitting the Bellman residuals can lead to better fits in some important states, whereas approximating the Q-values could result in an improvement over the entire state space on average. We verify this hypothesis empirically.

4 Relation to Classical ADP Approaches

Aggregate approaches group (cluster) states and learn values over these clusters. For instance, let S denote the set of states and C the set of clusters. The value of a state s is then calculated as V(s) = Σ_{c ∈ C} w_c 1[s ∈ c], where w_c is the weight on cluster c. In our method, the grouping is obtained naturally: each path from the root to a leaf is a first-order logic clause, which is a lifted representation of a set of ground states, and the leaf values can be viewed as the weights learned for each branch of a tree. Since each state satisfies only one path per tree, each tree yields exactly one weight for a state. A key advantage of GBQL over aggregate features is that, in the latter, the transition function between clusters is not easy to compute and needs to be designed carefully, a problem that is avoided in our model-free setting.

It is possible to view our GBQL procedure as a projection into a space of piecewise-constant value functions, but such value functions have limited representational power. One way to get greater representational power would be to keep trees or entire Q-functions from previous iterations. Previous Q-functions would now become basis vectors and Q-function approximation could be used to combine these basis vectors to form a new Q-function. We will explore this connection in the future. Currently, we discard the trees from the previous iteration for efficiency and scalability to large relational tasks in GBQL.

Indeed, a related ADP method is tile coding [40], a piecewise linear approximator that partitions the state space into (potentially overlapping) regions called tiles, which are essentially axis-parallel hyper-cuboids over the original state space. Our method can be considered as learning such tiles, since each tree defines a tile in the original space. While the original method [1] also used the Bellman error to identify the tiles, it was a heuristic-based method; GBQL, on the other hand, can be viewed as performing gradient descent in the Bellman error space (when viewed in functional form).

Another related direction is the recent work on deep RL. Deep Q-networks (DQN), first introduced by Mnih et al. and later extended by others [17, 38, 25, 2], can also be seen as fitted Q-iteration where the Q-functions are approximated by deep neural networks. Similar to DQN, which uses a non-linear function approximator to estimate the Q-values, we use gradient-boosted trees, which are also non-linear function approximators. The major difference between this line of work and ours is that while they operate at the propositional level and approximate the Q-values of every state-action pair encountered, we lift the Q-functions to the relational setting, where gradient-boosted first-order trees capture the structure of the domain. Recently, Zambaldi et al. proposed using an attention mechanism to capture the interactions between the various entities in a domain. While this work captures the relations existing in structured domains, those relations are captured implicitly by the attention-aware neural network architecture [34] and are therefore not specified before learning. We, on the other hand, employ a symbolic representation that explicitly specifies relationships between entities through expressive first-order logic predicates, which allows domain and common-sense knowledge to be encoded in a meaningful way. Finally, Jiang et al. combined symbolic methods with neural networks through differentiable ILP, which can feed directly into neural networks as a differentiable function. While they use policy gradients and operate in policy space, we use a lifted representation in Q-function space and non-differentiable ILP to capture the symmetries existing in the state space.

5 Experimental Evaluation

Figure 5: Mean absolute Bellman training error for different RRL domains; panels (a)-(d): Stack, Unstack, On, Logistics.
Figure 6: Average cumulative reward as a function of the number of Q-learning instances; panels (a)-(d): Stack, Unstack, On, Logistics. Higher is better.

We evaluated our proposed GBQL and RBFQ algorithms on tasks from the well-known Blocks World and Logistics domains. These domains are rich in structure and are generally considered challenging for many planning/learning algorithms. Unlike several existing algorithms, ours are model-free and can potentially learn from a small number of trajectories/instances. To test generalization, we vary the number of objects between training and testing.

Through the evaluation, we aim to answer the following questions explicitly:

  Q1. How does the training error (Bellman residual) vary w.r.t. the number of iterations?

  Q2. How does the test-set reward change over time?

  Q3. How does the performance vary with a differing number of objects during testing (i.e., generalization performance)?

  Q4. How do GBQL and RBFQ compare against each other?

To this effect, we developed a strong baseline: we replace the gradient boosting of fitted Q-values with a single relational regression tree (RRT). That is, in the GBQL algorithm we replace the call to the boosting procedure (TreeBoost) with a call to a single relational tree learner. Before describing the evaluation methodology, we present the tasks.

  1. Blocks World (Stack task): The blocks world domain [35] consists of blocks stacked on top of each other to form towers. The goal of the agent in this task is to stack all the blocks into a single tower. The initial state consists of a random configuration of the blocks on the floor. The state representation for this task consists of predicates such as clear, On, heightlessthan, and isFloor. The action predicate in this domain is move, and the agent learns to move a block from one tower to another to build a single tower. The reward function assigns a positive reward at the goal and a shaping reward for the intermediate steps based on the height of the highest tower in the state; the intuition is that configurations closer to the goal state receive a higher reward. We trained the agent on problems with a small number of blocks and tested the learned policy on problems containing more blocks to demonstrate generalization ability.

  2. Blocks World (Unstack task): The Unstack task is another subproblem in the blocks world domain, where the goal of the agent is to unstack all the blocks onto the floor. As in the Stack task, the initial state consists of a random configuration of blocks on the floor. The predicates are the same, and the action predicate for this domain is move, where the agent can move a block from one tower to another or onto the floor. The reward function assigns a positive reward at the goal and a shaping reward for the intermediate steps based on the fraction of blocks on the floor in the current state. Reaching the goal configuration is more difficult than in the stacking task, hence the reward for the goal state is set to a higher value. The intuition behind the intermediate reward is that states in which a higher fraction of blocks are on the floor are closer to the goal and hence should be penalized less. For this domain, we train the agent on smaller problems and test the learned policy on problems containing more blocks.

  3. Blocks World (ON task): The goal of the agent is to stack a specific block on top of another specific block. This problem is comparatively difficult because the optimal policy is hierarchical in nature and the order of the subgoals needs to be learned: to move a block on top of another block, (1) both blocks must be clear, and (2) the upper block must then be moved on top of the target block. The state consists of predicates such as clear, on, sametower, isFloor, and goalon. The goalon predicate represents the goal state; goalon(b1,b2) means that the goal is to put block b2 on top of b1. The action predicate is move, where the agent can move a block on top of another block or onto the floor. The reward is +10 for the goal state. For the intermediate states, a small negative reward is given when a block is moved from a tower other than the goal tower, since blocks should be moved from the towers containing the goal blocks in order to make them clear. We train the agent on 4 blocks and test the learned policy on problems containing 5, 6, and 7 blocks.

  4. Logistics: This is another classical RRL domain [4], consisting of entities such as trucks, boxes, and cities. Trucks can move from one city to another. The goal of the agent in this task is to unload at least one box in the destination city. The initial state consists of a random configuration of trucks and boxes in all the cities except the destination city. The state consists of predicates such as boxOn, truckIn, boxIn, and destination. The actions include the following target predicates: load, unload, and move. The reward function assigns a reward for reaching the goal and a smaller reward for the intermediate steps. We train the agent on smaller problem instances and evaluate the learned policy on problems containing more cities, trucks, and boxes.

Evaluation Methodology: In every iteration of GBQL and RBFQ, we sampled trajectories from random initial states and tested the learned Q-function by executing the policy greedily on randomly generated test trajectories. To keep the evaluation fair, we varied the number of objects between training and test trajectories for each task, as mentioned earlier. Each training and test trajectory is initialized with a random initial state sampled from the initial state distribution of the domain. All experiments are averaged over multiple runs. In subsequent Q-learning iterations, we sample trajectories from the simulator using an ε-greedy policy, and ε is decayed over time at a fixed rate. For every domain, we also sample a fraction of the training data from the previous history, typically known as experience replay [23] in the RL literature. For GBQL, based on the performance on the training trajectories, we chose the number of gradient-boosted trees at each iteration.
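
A sketch of these two mechanics, ε decay and mixing in replayed experience, follows; the decay rate, floor, and replay fraction are illustrative placeholders, not the values used in the experiments.

import random

def next_epsilon(epsilon, decay=0.95, floor=0.05):
    """Fixed multiplicative epsilon decay (decay and floor values illustrative)."""
    return max(floor, epsilon * decay)

def build_training_batch(new_trajs, replay_buffer, replay_fraction=0.2):
    """Mix freshly sampled trajectories with a fraction drawn from past
    experience (experience replay); the fraction here is illustrative."""
    k = int(replay_fraction * len(new_trajs))
    replayed = random.sample(replay_buffer, min(k, len(replay_buffer)))
    replay_buffer.extend(new_trajs)
    return new_trajs + replayed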

Baselines: As mentioned earlier, we chose a single relational regression tree (RRT) as a strong baseline. This RRT baseline is similar to the work of Džeroski et al. [11], with the key difference that we approximate the Q-values by sampling trajectories from the current policy rather than learning from the entire experience. Comparison with deep RL methods is out of scope for this work, as the goal is not to demonstrate the usefulness of gradient boosting as a function approximator but to demonstrate the ability to faithfully incorporate symbolic representations during learning. Also, as demonstrated in prior work on Statistical Relational AI models [7], constructing a flat feature vector from a symbolic structure to train a deep model can lead to loss of information, including autocorrelation and linkage [18].

Domain RRT GBQL RBFQ
Stack
Unstack
On
Logistics
Table 1: Percentage of goals reached on test trajectories by RRT, GBQL, and RBFQ after executing the learned Q-function.

Results: To answer Q1, the results in Figure 5 show that the proposed GBQL and RBFQ methods exhibit a decreasing Bellman error as training progresses on the different tasks. For the stacking task, the Bellman error decreases almost uniformly for both GBQL and RRT; for RBFQ, the mean absolute Bellman error drops significantly in the beginning, then increases with iterations and converges to almost the same point as the other two algorithms. For the logistics domain, GBQL converges faster than RRT. For the unstack task, the error curves for GBQL and RRT follow the same shape and decrease steeply; however, the Bellman error of GBQL converges faster than that of RRT.

Since the unstack task is challenging, we injected a few expert trajectories instead of relying on fully random exploration in the early iterations, which is a possible reason why the Bellman error decreases steeply in this domain. The injection of expert trajectories does not, however, have any effect on RBFQ. It also appears that this inductive bias in the form of expert trajectories had more impact on the gradient-boosted GBQL model than on RRT, as seen from the earlier convergence of GBQL. For the ON task, the Bellman error for both GBQL and RRT follows the same shape and converges, whereas the Bellman error for RBFQ is quite high and does not decrease with increasing iterations. This allows us to answer Q1 affirmatively.

The average cumulative reward accumulated over the unseen test trajectories with varying numbers of objects is shown in Figure 6. The average cumulative reward increases in the early iterations and then converges, as can be seen especially for the logistics domain (Figure 6(d)). In the logistics domain, the average reward for RBFQ converges much faster than for GBQL and RRT and exhibits less variance. The cumulative reward for the unstack task shows high variance for both GBQL and RRT; we speculate that this is due to the use of reward shaping. Although the same heuristic is used for the stacking task, the variance for the Stack task is lower than that for the Unstack task, because the test trajectories needed to reach the goal are on average longer for Unstack than for Stack, leading to higher variance. For the ON task, GBQL and RRT converge much faster than RBFQ; however, the cumulative reward collected by RBFQ increases over time. These observations address Q2 in that the average reward does increase over time.

To address generalization, we consider the fraction of times the goal is reached in each task after 20-25 iterations of training. Given sufficient time, all the methods solve most problems; hence, we computed the percentage of goals reached as the fraction of problems in which the solution was achieved within a certain threshold of the optimal number of steps. As can be observed from Table 1, GBQL and RBFQ achieve better test-set performance than RRT in 3 out of 4 tasks. This answers Q3 affirmatively in that better generalization is achieved in most domains by the proposed gradient-boosted learning. Finally, it appears from our experiments that RBFQ achieves lower Bellman error on some critical states but a higher error across states, leading to a higher error overall. Our immediate future direction is to investigate more deeply whether directly approximating the Bellman error with some guidance can lead to a better average reward. Currently, Q4 does not have a definitive answer; the results are split between the RBFQ and GBQL methods.

Our initial results indicate that both RBFQ and GBQL cannot capture effective policies in a hierarchical domain like the ON task; in such a domain, RRT performs better than the two boosted algorithms, although the differences are not statistically significant. The Bellman error for RBFQ sometimes increases, as in the logistics domain, unlike the other two algorithms where it always decreases. This suggests that, since RBFQ derives its Q-function as a combination of all the basis functions learned so far, the projection onto the representable subspace of functions is not accurate, leading to an increase in Bellman error. However, RBFQ is agnostic to inductive bias in the form of expert (or even noisy) trajectories, unlike the other two algorithms, as can be seen from Figure 5(b) for the Unstack task, where there is a significant reduction in Bellman error in the early iterations for GBQL and RRT. Also, the average cumulative reward on unseen trajectories shows high variance for the Unstack task (Figure 6(b)) for GBQL and RRT, whereas RBFQ is stable and shows lower variance, suggesting that RBFQ is more robust to complex reward functions, given that we use reward shaping in this task to guide the intermediate steps.

6 Conclusion

We introduced relational Approximate Dynamic Programming (ADP) and presented the first set of algorithms, GBQL and RBFQ, that approximate the Q-values or the Bellman error using a non-parametric gradient-boosting method. The intuition underlying these algorithms is that the value function over a set of objects and relations can be approximated as a set of RRTs learned in a sequential manner. The Bellman operator application step corresponds to the evaluation of these trees for a given state-action pair, and the projection step corresponds to the learning of these trees using gradient boosting. Our experiments clearly demonstrate the effectiveness of this approach in factored state spaces. Most importantly, gradient boosting paves the way to dealing jointly with propositional and relational features; one only has to adapt the gradient regression examples correspondingly. We also demonstrated the generalization ability of the algorithms in our empirical evaluation of intra-domain transfer learning.

These initial results suggest several interesting avenues for future work. The first is evaluating the algorithms on larger problems with hybrid tasks. The second is exploring various sampling strategies in GBQL for picking trajectories from the current policy. The third is extending our work to continuous state-action spaces, multi-agent settings, and potentially Partially Observable Markov Decision Processes (POMDPs). Next is understanding how rich human input can be used to guide the algorithms: since we are in a symbolic setting, it should be possible for a human to directly provide high-level advice to the system, and exploring the use of such knowledge for effective learning remains an interesting direction. Finally, an effective combination of policy gradient and fitted value iteration methods needs to be explored.

Acknowledgements

SN, SD & RP gratefully acknowledge the support of NSF grant IIS-1836565. KK acknowledges the support of the Federal Ministry of Economic Affairs and Energy (BMWi) project "Scalable adaptive production systems through AI-based resilience optimization" (SPAICER, funding reference 01MK20015E), funded within the AI innovation competition "Artificial intelligence as a driver for economically relevant ecosystems". Any opinions, findings and conclusions or recommendations are those of the authors and do not necessarily reflect the view of the US government.

References

  • [1] J. S. Albus (1981) Brains, behavior, and robotics. Byte books Peterborough, NH. Cited by: §4.
  • [2] O. Anschel, N. Baram, and N. Shimkin (2017) Averaged-dqn: variance reduction and stabilization for deep reinforcement learning. In ICML, Cited by: §4.
  • [3] H. Blockeel and L. De Raedt (1998) Top-down induction of first-order logical decision trees. AIJ. Cited by: §1, §3, §3.
  • [4] C. Boutilier, R. Reiter, and B. Price (2001) Symbolic dynamic programming for first-order mdps. In IJCAI, Cited by: item 4.
  • [5] J. A. Boyan and A. W. Moore (1995) Generalization in reinforcement learning: safely approximating the value function. NIPS. Cited by: §1.
  • [6] J. A. Boyan (1999) Least-squares temporal difference learning. In ICML, Cited by: §2.
  • [7] L. De Raedt, K. Kersting, S. Natarajan, and D. Poole (2016) Statistical relational artificial intelligence: logic, probability, and computation. Morgan & Claypool Publishers. Cited by: §2, §5.
  • [8] K. Driessens and S. Džeroski (2004) Integrating guidance into relational reinforcement learning. MLJ. Cited by: §2.
  • [9] K. Driessens, J. Ramon, and H. Blockeel (2001) Speeding up relational reinforcement learning through the use of an incremental first order decision tree learner. In ECML, Cited by: §2.
  • [10] K. Driessens and J. Ramon (2003) Relational instance based regression for relational reinforcement learning. In ICML, Cited by: §2.
  • [11] S. Džeroski, L. De Raedt, and K. Driessens (2001) Relational reinforcement learning. MLJ. Cited by: §5.
  • [12] D. Ernst, P. Geurts, and L. Wehenkel (2005) Tree-based batch mode reinforcement learning. JMLR. Cited by: §1, §1, §2.
  • [13] A. Fern, S. Yoon, and R. Givan (2006) Approximate policy iteration with a policy language bias: solving relational markov decision processes. JAIR. Cited by: §1, §3.
  • [14] J. H. Friedman (2001) Greedy function approximation: a gradient boosting machine. Annals of statistics. Cited by: §2.
  • [15] T. Gärtner, K. Driessens, and J. Ramon (2003) Graph kernels and gaussian processes for relational reinforcement learning. In ILP, Cited by: §2.
  • [16] L. Getoor (2007) Introduction to statistical relational learning. MIT press. Cited by: §2.
  • [17] M. Hausknecht and P. Stone (2015) Deep recurrent q-learning for partially observable mdps. In AAAI Fall Symposium Series, Cited by: §4.
  • [18] D. Jensen and J. Neville (2002) Linkage and autocorrelation cause feature selection bias in relational learning. In ICML, Cited by: §5.
  • [19] P. W. Keller, S. Mannor, and D. Precup (2006) Automatic basis function construction for approximate dynamic programming and reinforcement learning. In ICML, Cited by: §1.
  • [20] K. Kersting and K. Driessens (2008) Non-parametric policy gradients: a unified treatment of propositional and relational domains. In ICML, Cited by: §1.
  • [21] T. Khot, S. Natarajan, K. Kersting, and J. Shavlik (2011) Learning markov logic networks via functional gradient boosting. In ICDM, Cited by: §3.
  • [22] M. G. Lagoudakis and R. Parr (2003) Least-squares policy iteration. JMLR. Cited by: §2.
  • [23] L. Lin (1992) Self-improving reactive agents based on reinforcement learning, planning and teaching. MLJ. Cited by: §5.
  • [24] I. Menache, S. Mannor, and N. Shimkin (2005) Basis function adaptation in temporal difference reinforcement learning. Annals of Operations Research. Cited by: §1.
  • [25] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In ICML, Cited by: §4.
  • [26] S. Natarajan, S. Joshi, P. Tadepalli, K. Kersting, and J. Shavlik (2011) Imitation learning in relational domains: a functional-gradient boosting approach. In IJCAI, Cited by: §3.
  • [27] S. Natarajan, K. Kersting, T. Khot, and J. Shavlik (2014) Introduction. In Boosted Statistical Relational Learners, Cited by: §1, §3.
  • [28] S. Natarajan, T. Khot, K. Kersting, B. Gutmann, and J. Shavlik (2012) Gradient-based boosting for statistical relational learning: the relational dependency network case. MLJ. Cited by: §3, §3.
  • [29] R. Parr, C. Painter-Wakefield, L. Li, and M. Littman (2007) Analyzing feature generation for value-function approximation. In ICML, Cited by: §1.
  • [30] B. Price and C. Boutilier (2001) Imitation and reinforcement learning in agents with heterogeneous actions. In Conference of the Canadian Society for Computational Studies of Intelligence, Cited by: §1.
  • [31] N. Ramanan, G. Kunapuli, T. Khot, B. Fatemi, S. M. Kazemi, D. Poole, K. Kersting, and S. Natarajan (2018) Structure learning for relational logistic regression: an ensemble approach. In KR, Cited by: §3.
  • [32] M. Riedmiller (2005) Neural fitted q iteration-first experiences with a data efficient neural reinforcement learning method. In ECML, Cited by: §1.
  • [33] S. Sanner and C. Boutilier (2009) Practical solution techniques for first-order mdps. AIJ. Cited by: §1.
  • [34] A. Santoro, R. Faulkner, D. Raposo, J. Rae, M. Chrzanowski, T. Weber, D. Wierstra, O. Vinyals, R. Pascanu, and T. Lillicrap (2018) Relational recurrent neural networks. In NIPS, Cited by: §4.
  • [35] J. Slaney and S. Thiébaux (2001) Blocks world revisited. AIJ. Cited by: item 1.
  • [36] P. Tadepalli, R. Givan, and K. Driessens (2004) Relational reinforcement learning: an overview. In ICML-Workshop on Relational Reinforcement Learning, Cited by: §1.
  • [37] S. Tosatto, M. Pirotta, C. D’Eramo, and M. Restelli (2017) Boosted fitted q-iteration. In ICML, Cited by: §1.
  • [38] H. Van Hasselt, A. Guez, and D. Silver (2016) Deep reinforcement learning with double q-learning. In AAAI, Cited by: §4.
  • [39] C. Wang, S. Joshi, and R. Khardon (2008) First order decision diagrams for relational mdps. JAIR. Cited by: §1.
  • [40] S. Whiteson, M. E. Taylor, P. Stone, et al. (2007) Adaptive tile coding for value function approximation. Computer Science Department, University of Texas at Austin. Cited by: §4.
  • [41] J. Wu and R. Givan (2007) Discovering relational domain features for probabilistic planning.. In ICAPS, Cited by: §1, §2.
  • [42] H. Yu and D. P. Bertsekas (2009) Convergence results for some temporal difference methods based on least squares. IEEE Transactions on Automatic Control. Cited by: §2.