Off-policy evaluation for MDPs with unknown structure

02/11/2015 · by Assaf Hallak, et al.

Off-policy learning in dynamic decision problems is essential for providing strong evidence that a new policy is better than the one in use. But how can we prove superiority without testing the new policy? To answer this question, we introduce the G-SCOPE algorithm that evaluates a new policy based on data generated by the existing policy. Our algorithm is both computationally and sample efficient because it greedily learns to exploit factored structure in the dynamics of the environment. We present a finite sample analysis of our approach and show through experiments that the algorithm scales well on high-dimensional problems with few samples.


1 Introduction

Reinforcement Learning (RL) algorithms learn to maximize rewards by analyzing past experience with an unknown environment. Most RL algorithms assume that they can choose which actions to explore to learn quickly. However, this assumption leaves RL algorithms incompatible with many real-world business applications.

To understand why, consider the problem of on-line advertising: each customer is successively presented with one of several advertisements. The advertiser's goal is to maximize the probability that a user will click on an ad. This probability is called the Click Through Rate (CTR; richardson2007predicting). A marketing strategy, called a policy, chooses which ads to display to each customer. However, testing new policies could lose money for the company. Therefore, management would not allow a new policy to be tested unless there is strong evidence that the policy is not worse than the company's existing policy. In other words, we would like to estimate the CTR of other strategies using only data obtained from the company's existing policy. In general, the problem of determining a policy's value from data generated by another policy is called off-policy evaluation, where the policy that generates the data is called the behavior policy, and the policy we are trying to evaluate is called the target policy. This problem may be the primary reason batch RL algorithms are rarely used in applications, despite the maturity of the field.

A simple approach to off-policy evaluation is given by the MFMC algorithm (Fonteneau2010), which constructs complete trajectories for the target policy by concatenating partial trajectories generated by the behavior policy. However, this approach may require a large number of samples to construct complete trajectories. One may think that the number of samples is of little importance, since Internet technology companies have access to millions or billions of transactions. Unfortunately, the dimensionality of real-world problems is generally large (e.g., thousands or millions of dimensions), and the events of interest can occur with extremely small probability. Thus, sample-efficient off-policy evaluation is paramount.
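
To make the trajectory-stitching idea concrete, here is a minimal Python sketch of an MFMC-style estimator (a simplification, not Fonteneau et al.'s exact procedure; the transitions list and target_policy callable are hypothetical interfaces):

import numpy as np

def mfmc_value(transitions, target_policy, start_state, horizon):
    """Stitch one artificial trajectory for the target policy out of observed
    one-step transitions (state, action, reward, next_state); each observed
    transition is consumed at most once."""
    pool = list(transitions)
    state, total_reward = start_state, 0.0
    for _ in range(horizon):
        action = target_policy(state)
        candidates = [k for k, (s, a, r, s2) in enumerate(pool) if a == action]
        if not candidates:
            break                              # no data left to extend the trajectory
        # jump to the transition whose starting state is closest to the current one
        best = min(candidates,
                   key=lambda k: np.linalg.norm(np.asarray(pool[k][0]) - np.asarray(state)))
        _, _, reward, next_state = pool.pop(best)
        total_reward += reward
        state = next_state
    return total_reward

Averaging such stitched returns over many start states gives the value estimate; the sketch makes visible why the method needs many transitions in high dimensions, since each step must find a nearby donor transition with a matching action.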

An alternative way of looking at the problem is through counterfactual (CF) analysis (Bottou2013). Given the outcome of an experiment, CF analysis is a framework for reasoning about what would have happened if some aspect of the experiment had been different. In this paper, we focus on the question: what would have been the expected reward received for executing the target policy rather than the behavior policy? One approach that falls naturally into the CF framework is Importance Sampling (IS) (Bottou2013; Li2014). IS methods evaluate the target policy by weighting rewards received by the behavior policy. The weights are determined by the probability that the target policy would perform the same action as the one prescribed by the behavior policy. Unfortunately, IS methods suffer from high variance and typically assume that the behavior policy visits every state that the target policy visits with nonzero probability.
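
For comparison, a basic per-trajectory importance sampling estimator can be sketched as follows (hypothetical data layout; practical IS implementations often use per-step or weighted variants to reduce variance):

def importance_sampling_value(trajectories, target_policy_prob):
    """Trajectory-wise importance sampling: each trajectory is a list of
    (state, action, behavior_prob, reward) tuples, and target_policy_prob(s, a)
    is the probability that the target policy takes action a in state s."""
    estimates = []
    for trajectory in trajectories:
        weight, total_reward = 1.0, 0.0
        for state, action, behavior_prob, reward in trajectory:
            weight *= target_policy_prob(state, action) / behavior_prob
            total_reward += reward
        estimates.append(weight * total_reward)
    # a single large weight can dominate this average, hence the high variance
    return sum(estimates) / len(estimates)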

Even if this assumption holds, IS methods are not able to exploit structure in the environment because their estimators do not create a compact model of the environment. Exploiting this structure could drastically improve the quality of off-policy evaluation with small sample sizes (relative to the dimension of the state-space). Indeed, there is broad empirical support that model-based methods are more sample efficient than model-free methods (Hester2009; Jong2007). One broad class of compact models is Factored-state Markov Decision Processes (FMDPs; kearns1999efficient; Strehl2007; chakraborty2011structure). An FMDP model can often be learned with a number of samples logarithmic in the total number of states, if the structure is known. Unfortunately, inferring the structure of an FMDP is generally computationally intractable for FMDPs with high-dimensional state-spaces (chakraborty2011structure), and in real-world problems the structure is rarely known in advance.

Ideally, we would like to apply model-based methods to off-policy evaluation because they are generally more sample efficient than model-free methods such as MFMC and IS. In addition, we want to use algorithms that are computationally tractable. To this end, we introduce G-SCOPE, which learns the structure of an FMDP greedily. G-SCOPE is both sample efficient and computationally scalable. Although G-SCOPE does not always learn the true structure, we provide theoretical analysis relating the number of samples to the error in evaluating the target policy. Furthermore, our experimental analysis demonstrates that G-SCOPE is significantly more sample efficient than model-free methods.

The main contributions of this paper are:

  • a novel, scalable method for off-policy evaluation that exploits unknown structure,

  • a finite sample analysis of this method, and

  • a demonstration through experiments that this approach is sample efficient.

The paper is organized as follows. In Section 2, we describe the problem setting and notations. Section 3 elaborates on our greedy structure learning algorithm. Our main theorem and its analysis are given in Section 4. Section 5 presents experiments. In Section 6, we discuss limitations of G-SCOPE and future research directions.

2 Background

We consider dynamics that can be represented by a Markov Decision Process (MDP; puterman2009markov):

Definition 1.

A Markov Decision Process (MDP) is a tuple $(X, A, P, R, \beta)$, where $X$ is the state space, $A$ is the action space, $P(x' \mid x, a)$ represents the transition probabilities from every state-action pair to another state, $R(x, a)$ represents the reward function fitting each state-action pair with a random real number, and $\beta$ is a distribution over the initial state of the process.

We denote by $\pi$ a Markov policy that maps states to a distribution over actions. The process horizon is $H$, and applying a policy $\pi$ for $H$ steps starting from state $x$ results in an expected cumulative reward known as the value function: $V^{\pi}(x) = \mathbb{E}\left[\sum_{t=1}^{H} R(x_t, a_t) \,\middle|\, x_1 = x\right]$, where the expectation is taken with respect to $P$ and $\pi$. We assume $H$ is known and that immediate rewards are bounded.

The system dynamics is as follows: first, an initial state $x_1$ is sampled from $\beta$. Then, for each time step $t = 1, \dots, H$, an action $a_t$ is sampled according to the policy $\pi(\cdot \mid x_t)$, a reward $r_t$ is awarded according to $R(x_t, a_t)$, and the next state $x_{t+1}$ is sampled from $P(\cdot \mid x_t, a_t)$. The quantity of interest is the expected policy value $V^{\pi} = \mathbb{E}_{x \sim \beta}\left[V^{\pi}(x)\right]$.
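
For concreteness, once a model (exact or estimated) is available as a simulator, this expected policy value can be approximated by Monte Carlo rollouts; a minimal sketch, with the sampler interfaces sample_initial, sample_next, and sample_reward assumed rather than taken from the paper:

def policy_value(sample_initial, sample_next, sample_reward, policy, horizon,
                 n_rollouts=1000):
    """Monte Carlo estimate of the H-step expected value of `policy`:
    sample_initial() draws x_1 from the initial distribution beta,
    sample_next(x, a) draws the next state, sample_reward(x, a) draws a reward,
    and policy(x) draws an action."""
    total = 0.0
    for _ in range(n_rollouts):
        x = sample_initial()
        for _ in range(horizon):
            a = policy(x)
            total += sample_reward(x, a)
            x = sample_next(x, a)
    return total / n_rollouts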

2.1 Off-Policy Evaluation

We consider the finite horizon batch setup. Given are $N$ trajectories of length $H$, sampled from an MDP with an initial state distribution $\beta$ and behavior policy $\mu$. The off-policy evaluation problem is to estimate the $H$-step value of a target policy $\pi$ (different from $\mu$). For the target policy $\pi$, we aim to minimize the difference between the true and estimated policy value:

$$\left| V^{\pi} - \hat{V}^{\pi} \right| . \qquad (1)$$

2.2 Factored MDPs

Suppose the state space can be decomposed into $n$ discrete factors. We denote the $i$'th variable of $x$ by $x^{(i)}$, and for a given subset of indices $Z \subseteq \{1, \dots, n\}$, let $x^{(Z)}$ be the subset of corresponding variables $\{x^{(i)}\}_{i \in Z}$. We define a factored MDP, similar to guestrin2003efficient:

Definition 2.

A Factored MDP (FMDP) is an MDP such that the state is composed of a set of variables $x = (x^{(1)}, \dots, x^{(n)})$, where each variable can take values from a finite domain, such that the probability of the next state $x'$ given that action $a$ is performed in state $x$ satisfies

$$P(x' \mid x, a) = \prod_{i=1}^{n} P_i\!\left(x'^{(i)} \mid x, a\right). \qquad (2)$$

For simplicity, we assume that all variables lie in the same domain $D$, i.e., $x^{(i)} \in D$ for all $i$, where $D$ is a finite set. Furthermore, each variable $x'^{(i)}$ in the next state only depends on a subset of variables $x^{(\mathrm{Pa}_i)}$, where $\mathrm{Pa}_i \subseteq \{1, \dots, n\}$. The indices in $\mathrm{Pa}_i$ are called the parents of $i$. When the sizes of the parent sets are smaller than $n$, the FMDP can be represented more compactly:

$$P(x' \mid x, a) = \prod_{i=1}^{n} P_i\!\left(x'^{(i)} \mid x^{(\mathrm{Pa}_i)}, a\right). \qquad (3)$$
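
Equation (3) implies that simulating the factored model reduces to sampling each next-state variable from its own small conditional table; a minimal sketch, assuming conditional tables keyed by parent realizations and actions (a hypothetical data layout):

import numpy as np

def sample_next_state(state, action, parents, cond_tables, rng):
    """Sample x' from a factored model: variable i of the next state depends only
    on the current values of its parents and on the action.
    cond_tables[i][(parent_values, action)] is a probability vector over D."""
    next_state = []
    for i, parent_idx in enumerate(parents):
        parent_values = tuple(state[j] for j in parent_idx)
        probs = cond_tables[i][(parent_values, action)]
        next_state.append(int(rng.choice(len(probs), p=probs)))
    return tuple(next_state)

Here rng can be, for example, np.random.default_rng(); the point of the sketch is that each factor's table is exponential only in the size of its parent set, not in n.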

Before delving into the algorithm and the analysis, we provide some notation. For a subset of indices $Z$, a realization-action pair $(v, a)$ is a specific instantiation of values for the corresponding variables $x^{(Z)}$ together with an action. We denote by $\mathrm{PA}_i$ the set of all realization-action pairs for the parents of node $i$, and mark the union over all nodes by $\mathrm{PA}$.

The following quantities are used in the algorithm and the subsequent analysis. Denote by $Z$ a subset of indices and by $v$ a realization of the corresponding variables:

$$P_i(y \mid v, a) = \Pr\!\left(Y^{(i)} = y \,\middle|\, X^{(Z)} = v, A = a\right), \qquad \hat{P}_i(y \mid v, a) = \frac{N(y, v, a)}{N(v, a)}, \qquad (4)$$

where the probabilities in the right term of the first equation are conditioned on the behavior policy, omitted for brevity. Note that if $\mathrm{Pa}_i \subseteq Z$, then $P_i(y \mid v, a)$ does not depend on the policy, and the policy dependency cancels out.

2.3 Previous Work

Previous works on FMDPs focus on finding the optimal policy. Early works assumed the dependency structure is known (guestrin2002algorithm; kearns1999efficient). degris2006learning proposed a general framework for iteratively learning the dependency structure (our work falls within this framework), yet no theoretical results were presented for their approach. SLF-Rmax (Strehl2007), Met-Rmax (diuk2009adaptive) and LSE-Rmax (chakraborty2011structure) are algorithms for learning the complete structure; only the first two require the in-degree of the DBN structure as input. The sample complexity of these algorithms is exponential in the number of parents. Finally, learning the structure of DBNs with no associated reward is in itself an active research topic (Friedman:1998aa; Trabelsi:2013aa).

There has also been increasing interest in the RL community in the topic of off-policy evaluation. Works focusing on model-based approaches mainly provide bounds on the value function estimation error; for example, the simulation lemma (Kearns2002a) can be used to provide sample complexity bounds on such errors. Model-free approaches, on the other hand, suggest estimators while trying to reduce the bias. precup2000eligibility present several methods based on applying importance sampling to eligibility traces, along with an empirical comparison; Theocharous2015offPolicyConfidence analyzed bounds on the estimation error for this method. A different approach was suggested by Fonteneau2010: evaluate the policy by generating artificial trajectories, i.e., concatenations of one-step transitions from observed trajectories. The main problem with these approaches, besides the computational cost, is that a substantial amount of data is required to generate reasonable artificial trajectories.

3 Algorithm

In general, inferring the structure of an FMDP requires computation exponential in the number of state variables $n$ (Strehl2007). Instead, we propose a naive greedy algorithm which, under some assumptions, can be shown to provide small estimation error on the transition function (G-SCOPE, Algorithm 1).

  for i = 1 to n do
     Pa_i ← ∅
     repeat
        added ← false
        For each candidate j ∉ Pa_i, let K_j be the set of realization-action pairs of x^(Pa_i ∪ {j}) observed at least N_1 times
        if K_j = ∅ for every candidate j then
           Break
        end if
        for each candidate j ∉ Pa_i do
           estimate P̂_i(· | v, a) and P̂_i(· | v, x^(j), a) for every (v, a) ∈ K_j
           score(j) ← maximal L1 distance between P̂_i(· | v, x^(j), a) and P̂_i(· | v, a) over (v, a) ∈ K_j and values of x^(j)
        end for
        j* ← argmax_j score(j)
        if score(j*) ≥ ξ then
           Pa_i ← Pa_i ∪ {j*}; added ← true
        end if
     until added = false
  end for
  return Pa_1, …, Pa_n
Algorithm 1 G-SCOPE(N H-length traj., δ_1, δ_2, ξ)

G-SCOPE (Greedy Structure learning of faCtored MDPs for Off-Policy Evaluation) receives off-line batch data, two confidence parameters $\delta_1, \delta_2$, and a minimum acceptable score $\xi$. The outputs are the estimated parents $\widehat{\mathrm{Pa}}_i$ of each variable $i$. In the inner loop, the set $K$ is defined as the set of all realization-action pairs that have been observed at least $N_1$ times; these are the only pairs considered further. We then greedily add to $\widehat{\mathrm{Pa}}_i$ the variable which maximizes the difference between the old distribution, depending only on $\widehat{\mathrm{Pa}}_i$, and the distribution conditioned on the additional variable as well. Parents are no longer added when that difference is small, or when all possible realizations were not observed $N_1$ times. The computational complexity of a naive implementation is quadratic in the number of variables times the size of the batch data, since G-SCOPE sweeps the data for every input and output variable.

The main idea behind G-SCOPE is that having enough samples will result in an adequate estimate of the conditional probabilities. Then, under appropriate regularity assumptions (stated in Section 4), adding a non-parent variable is unlikely. If parents have a larger effect than non-parents on the distance and non-parents have a weak effect, the procedure will most likely return only parents. When all prominent parents have been found, or when there is not enough data for further inference, the algorithm stops. Once the set of assumed parents is available, we can build an estimated model and simulate any policy.
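
The greedy loop can be sketched in Python roughly as follows (a simplified rendering of Algorithm 1: the empirical conditional distributions are rebuilt from counts, the confidence-dependent precision terms are folded into the single threshold xi, and all names are illustrative):

from collections import defaultdict

def empirical_dist(data, i, indices, min_count):
    """Empirical distribution of next-state variable i, conditioned on the
    realization of the current-state variables in `indices` and on the action.
    Only realization-action pairs observed at least `min_count` times are kept."""
    counts = defaultdict(lambda: defaultdict(int))
    for state, action, next_state in data:
        key = (tuple(state[k] for k in indices), action)
        counts[key][next_state[i]] += 1
    dists = {}
    for key, c in counts.items():
        total = sum(c.values())
        if total >= min_count:
            dists[key] = {y: cnt / total for y, cnt in c.items()}
    return dists

def l1_gain(base, refined):
    """Largest L1 distance between a refined conditional distribution (which also
    conditions on the candidate variable) and its corresponding base distribution."""
    gain = 0.0
    for (values, action), dist in refined.items():
        base_key = (values[:-1], action)       # drop the candidate variable's value
        if base_key not in base:
            continue
        support = set(dist) | set(base[base_key])
        gain = max(gain, sum(abs(dist.get(y, 0.0) - base[base_key].get(y, 0.0))
                             for y in support))
    return gain

def g_scope_parents(data, n_vars, xi, min_count):
    """Greedy parent selection for each next-state variable (simplified G-SCOPE).
    `data` is a list of (state, action, next_state) triples with tuple states."""
    all_parents = []
    for i in range(n_vars):
        parents = []
        while True:
            base = empirical_dist(data, i, parents, min_count)
            if not base:
                break                          # not enough well-observed realizations
            scores = {j: l1_gain(base, empirical_dist(data, i, parents + [j], min_count))
                      for j in range(n_vars) if j not in parents}
            if not scores:
                break                          # every variable is already a parent
            best = max(scores, key=scores.get)
            if scores[best] < xi:
                break                          # no candidate changes the distribution enough
            parents.append(best)
        all_parents.append(parents)
    return all_parents

The returned parent sets, together with the corresponding empirical conditional tables, define the estimated factored model used to simulate the target policy.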

An important property of the G-SCOPE algorithm is that it does not necessarily find the actual parents. Instead, we settle for finding a subset of variables providing probably approximately correct transition probabilities. As a result, the number of considered parents scales with the data available, a desirable quality linking the model and the sample complexity. Since we do not necessarily detect all parents, non-parents can have a non-zero influence on the target variable after all prominent parents have been detected. To avoid including these non-parents, the threshold to add a parent is $\xi$ plus some precision parameters. In practice, we use a small $\xi$ because including non-parents with an indirect influence on the target variable may improve the quality of the model. However, in our analysis, we present assumptions under which the true parents can be learned and explain how $\xi$ should be set.

Finally, G-SCOPE can be modified to encode and construct the conditional probability distributions using decision trees. A different decision tree is constructed for each action and each variable in the next state. Tree-based models can produce more compact representations than encoding the full conditional probability tables specified by the parent sets. While we analyze G-SCOPE as an algorithm that separates structure learning from estimating the conditional probability tables, for simplicity and clarity, in our experiments we actually use a decision-tree-based algorithm. The modifications to the analysis for the tree-based algorithm would add unnecessary complexity and distract from the key points of the analysis.

4 Analysis

By using a scalable but greedy approach to structure learning rather than a combinatorially exhaustive one, G-SCOPE can learn only a subclass of models arbitrarily well. In this section, we introduce three assumptions on the FMDP that describe this subclass, and then analyze the policy evaluation error for this subclass.

We divide the parents $\mathrm{Pa}_i$ into non-overlapping "weak" ($\mathrm{Pa}_i^w$) and "strong" ($\mathrm{Pa}_i^s$) parents. These subsets will be defined formally later, but intuitively, parents in $\mathrm{Pa}_i^s$ have a large influence on $Y^{(i)}$ and are easy to detect, while parents in $\mathrm{Pa}_i^w$ have a smaller influence that may be below the empirical noise threshold and hence not be detected. Our assumptions state that (1) "strong" parents are sufficiently better than non-parents to be detected by G-SCOPE before non-parents; (2) conditionally on "strong" parents, non-parents have too little influence on $Y^{(i)}$ to be accepted by G-SCOPE; and (3) conditioning on some "weak" parents does not increase the influence of other "weak" parents. The first two assumptions are used to bound the probability that G-SCOPE adds non-parents to $\widehat{\mathrm{Pa}}_i$ or does not add some strong parents; the last one bounds the error caused by the potential non-detection of weak parents.

Assumption 1.

Strong parent superiority. For every $i$, there exists a "strong" subset of parents $\mathrm{Pa}_i^s \subseteq \mathrm{Pa}_i$ such that, for every set of already-selected indices $Z$ that does not yet contain all of $\mathrm{Pa}_i^s$, every realization-action pair $(v, a)$ of $x^{(Z)}$, and every non-parent $j \notin \mathrm{Pa}_i$, there exists a strong parent $k \in \mathrm{Pa}_i^s \setminus Z$ such that, for some $\kappa_1 > 0$,

$$\max_{u \in D}\,\bigl\|P_i(\cdot \mid v, x^{(k)}{=}u, a) - P_i(\cdot \mid v, a)\bigr\|_1 \;\ge\; \max_{u \in D}\,\bigl\|P_i(\cdot \mid v, x^{(j)}{=}u, a) - P_i(\cdot \mid v, a)\bigr\|_1 + \kappa_1. \qquad (5)$$
Figure 1: An FMDP that fails to satisfy Assumption 1. The factorization for a given action (not shown in the figure) is represented as a dynamic Bayesian network. States not relevant for the explanation are omitted. In the conditional transition probability tables, rows correspond to possible values of parent variables and columns to possible values of the variable. Cells at the intersection contain conditional probability values.

Assumption 1 ensures that, in terms of influence on the conditional distribution of the target, G-SCOPE finds at least one "strong" parent variable more attractive than any non-parent variable as long as some strong parents remain undetected. This prevents extreme cases where, due to large correlation between parent and non-parent factors, large numbers of non-parents could be added before finding the actual parents, thus considerably increasing the sample complexity. The constant $\kappa_1$ quantifies how much more information a true parent provides than non-parents: the larger $\kappa_1$, the less likely G-SCOPE is to add a non-parent to $\widehat{\mathrm{Pa}}_i$.

Figure 1 illustrates a subset of the state variables and corresponding conditional transition probability distributions of an FMDP that, for the action implicitly considered, does not satisfy Assumption 1. In this setting, G-SCOPE would add a non-parent to the estimated parent set before any true parent of the target variable. Note that in this particular case it does not matter, as the non-parent perfectly determines the target variable. However, adding noise to the transition probabilities would make the non-parent less accurate than the true parents together.

Assumption 2.

Non-parent conditional weakness. For every $i$, with $\mathrm{Pa}_i^s$ as in Assumption 1, for every set of indices $Z \supseteq \mathrm{Pa}_i^s$, every realization-action pair $(v, a)$ of $x^{(Z)}$, and every non-parent $j \notin \mathrm{Pa}_i$, for some $\kappa_2 \ge 0$,

$$\max_{u \in D}\,\bigl\|P_i(\cdot \mid v, x^{(j)}{=}u, a) - P_i(\cdot \mid v, a)\bigr\|_1 \;\le\; \kappa_2. \qquad (6)$$

Assumption 2 ensures that, after G-SCOPE has detected all strong parents, non-parents have low influence on the target variable, and therefore G-SCOPE has a low probability of adding them to $\widehat{\mathrm{Pa}}_i$. If all parents are strong, i.e., $\mathrm{Pa}_i^w = \emptyset$, then the assumption holds with $\kappa_2 = 0$.

Assumption 3.

Conditional diminishing returns. There exists $\kappa_3 \ge 0$ such that for every $i$, with $\mathrm{Pa}_i^s$ and $\mathrm{Pa}_i^w$ as in Assumptions 1 and 2, every set of indices $Z \supseteq \mathrm{Pa}_i^s$, every realization-action pair $(v, a)$ of $x^{(Z)}$, and all remaining variables, if

(7)

then:

(8)

If conditioning on one variable provides more knowledge about the output distribution than conditioning on another, then this remains (approximately) true after further conditioning on the other variable. In simple words, Assumption 3 means that the information inferred from variables is monotonic, so influential parents cannot go undetected. This assumption supports our greedy scheme, but there are trivial cases where it does not hold.

Consider the substructure represented in Figure 2: even though the parent variables are together very informative about the target variable, any single one of them is not. In such a situation, useful variables cannot be detected by a greedy scheme. Assumption 3 prevents this problem.

Figure 2: An FMDP that does not satisfy Assumption 3. See Figure 1 for an explanation of the representation.

These assumptions capture the core hardness of the structure learning problem. On one side, there may be implicit dependencies between variables induced by the dynamics, making it hard to separate out non-parents. On the other side, the conditional probabilities may belong to a family of XOR-like functions, initially hiding attractive true parents. Finally, while these assumptions are crucial for a proper analysis, non-parent variables may have a beneficial effect on the actual evaluation error, as they still contain information about the true parents' values, and consequently about the output variable.

Theorem 1.

Suppose Assumptions 1, 2 and 3 hold, and let $\delta_1, \delta_2 \in (0, 1)$ and $\xi$ be set accordingly. Then there exists a sample threshold $N_1$ such that if G-SCOPE is given $N$ trajectories, with probability at least $1 - \delta_1 - \delta_2$, G-SCOPE returns an evaluation $\hat{V}^{\pi}$ of the target policy $\pi$ satisfying:

(9)

where

(10)

The proof of Theorem 1 is divided into four parts, detailed in the supplementary material. First, we derive a simulation lemma for MDPs stating that, for the target policy, two MDPs with similar transition probability distributions have proximate value functions. We then consider the number of samples needed to estimate the transition probabilities of various realization-action pairs; samples within a trajectory may not be independent, so we derive a bound based on Azuma's inequality for martingales. Subsequently, we consider the number of trajectories needed to derive a model that evaluates the target policy accurately. If the behavior policy visits the parent realizations that the target policy is likely to visit often enough, then the number of trajectories can be small; on the other hand, if the behavior policy never visits parent realizations that the target policy visits, then the number of trajectories may be infinite. This is captured by the policy-mismatch values in the bound. Finally, we bound the error due to greedy parent selection under Assumptions 1, 2 and 3.

The evaluation error bound depends on the horizon $H$, on the number of variables $n$, on the error bound on most transition probability values of the FMDP constructed by G-SCOPE, and on the probability that a trajectory will not visit a state with badly estimated probability values. The dependence on $n$ rather than on the number of states is the first advantage of the factorization. The constants $\kappa_1$, $\kappa_2$ and $\kappa_3$, from Assumptions 1, 2 and 3, respectively, indicate the effect of the model "hardness" on the bound. When $\kappa_1$ is large enough and $\kappa_2 = \kappa_3 = 0$, the true structure can be learned greedily and the error can be driven arbitrarily close to zero. In other cases, G-SCOPE may learn the wrong structure, resulting in some approximation error.

Next, consider the probability with which the bounds in Theorem 1 hold. A multiplicative term in the number of parent realization-action pairs is unavoidable, since for each such pair the estimation error on the transition probability must be bounded. The main advantage of this theorem is the lack of a multiplicative term in the total number of states, which means the effective state space decreased exponentially. One factor is due to the number of iterations of G-SCOPE in which a parent is added, and another is due to bounds on non-parents that must be valid for all these iterations.

The mismatch values in the bound characterize the difference between the behavior policy and the target policy. If the behavior policy visits all of the parent-action realizations that the target policy visits with sufficiently high probability, then these parameters will be small. But if the target policy visits parent-action realizations that are never visited by the behavior policy, then the values may be infinite. These values are similar to the importance sampling weights used by some model-free off-policy algorithms. However, unlike model-free approaches that depend on the differences in the state visitation distributions of the behavior policy and the target policy, these values depend on the differences in the parent realization visitation distributions between the behavior policy and the target policy. This is more flexible because the values can be small even when the behavior policy and the target policy visit different regions of the state-space.

5 Experiments

We compared G-SCOPE to other off-policy evaluation algorithms in the Taxi domain (Dietterich1998), randomly generated FMDPs, and the Space Invaders domain (Bellemare2013). Since the domains compared in our experiments have different reward scales, we normalized the evaluation errors to make them comparable across domains. In all experiments, the behavior policy differs from the target policy. Furthermore, evaluation error always refers to the target policy's evaluation error, and all trajectory data is generated by the behavior policy.

We compare G-SCOPE to the following algorithms:

  • Model-Free Monte-Carlo (MFMC, Fonteneau2010): a model-free off-policy evaluation algorithm that constructs artificial trajectories for the target policy by concatenating partial trajectories generated by the behavior policy,

  • Clipped Importance Sampling (CIS, Bottou2013): a model-free importance sampling algorithm that uses a heuristic approach to clip extremely large importance sampling ratios,

  • Flat: a flat model-based approach that assumes no structure between any two state-action pairs and simply builds an empirical next-state distribution for each state-action pair (a minimal sketch appears after this list), and

  • Known Structure (KS): a model-based method that is given the true parents, but still needs to estimate the conditional probability tables from data generated by the behavior policy. KS should outperform G-SCOPE, because KS knows the structure. We introduce KS to differentiate the evaluation error due to insufficient samples from the evaluation error due to G-SCOPE selecting the wrong parent variables.
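
For the Flat baseline above, a minimal sketch of the empirical model construction might look as follows (hypothetical data layout; unseen state-action pairs would need additional handling in practice):

from collections import defaultdict
import random

def build_flat_model(data):
    """Flat baseline: empirical next-state distribution and mean reward for every
    observed (state, action) pair; unseen pairs are simply not modeled."""
    transitions = defaultdict(list)
    rewards = defaultdict(list)
    for state, action, reward, next_state in data:
        transitions[(state, action)].append(next_state)
        rewards[(state, action)].append(reward)

    def sample_next(state, action):
        # sampling an observed successor is equivalent to sampling from the
        # empirical next-state distribution for that pair
        return random.choice(transitions[(state, action)])

    def mean_reward(state, action):
        values = rewards[(state, action)]
        return sum(values) / len(values)

    return sample_next, mean_reward

The resulting samplers can be plugged into the Monte Carlo rollout estimator from Section 2 to evaluate the target policy.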

Our experimental results show that (1) model-based off-policy evaluation algorithms are more sample efficient than model-free methods, (2) exploiting structure can dramatically improve sample efficiency, and (3) G-SCOPE often provides a good evaluation of the target policy despite its greedy structure learning approach.

5.1 Taxi Domain

The objective in the Taxi domain (Dietterich1998) is for the agent to pick up a passenger from one location and drop the passenger off at a destination. The state can be described by four variables. We selected the initial state uniformly at random and used a fixed horizon $H$. The behavior policy selected actions uniformly at random, while the target policy was derived by solving the Taxi domain with the Rmax algorithm (Brafman2002). We discovered that the deterministic policy returned by Rmax was problematic for CIS, because the probability of almost every trajectory generated by the behavior policy was 0 with respect to the target policy. To resolve this problem, we modified the policy returned by Rmax to ensure that every action is selected in every state with some small nonzero probability.

The Taxi domain is a useful benchmark because we know the true structure and the total number of states is only 500. Thus, we can compare G-SCOPE to KS and Flat.

Figure 3: Taxi domain: median evaluation error for the target policy (shaded region: quantiles) on a log-scale for MFMC, CIS, Flat, KS, and G-SCOPE, varying the number of trajectories generated by the behavior policy. Without exploiting structure, MFMC and Flat require many trajectories to achieve small evaluation error, yet KS and G-SCOPE achieve small evaluation error with just a few trajectories. Because G-SCOPE adapts the complexity of the model to the samples available, it achieves smaller estimation error than even KS for extremely few trajectories.

Figure 3 presents the normalized evaluation error (on a log-scale) for MFMC, CIS, Flat, KS, and G-SCOPE over 2,000 trajectories generated by the behavior policy. Medians and quantiles are estimated over 40 independent trials. For intermediate and large numbers of trajectories, G-SCOPE performs about the same as if the structure were given and achieves smaller error than the model-free algorithms (MFMC and CIS). Notice that MFMC, CIS, and Flat, which do not take advantage of the domain's structure, require a large number of trajectories before they achieve low evaluation error. Interestingly, the Flat (model-based) approach appears to be more sample efficient than MFMC, which is in line with observations that model-based RL is more efficient than model-free RL (Hester2009; Jong2007). KS and G-SCOPE, on the other hand, achieve low evaluation error after just a few trajectories and have similar performance, except for very few trajectories, where G-SCOPE can adapt the model complexity to the number of samples and therefore achieves a lower evaluation error than the algorithm that knows the structure. This provides one example where greedy structure learning is effective.

5.2 Randomly Generated Factored Domains

To test G-SCOPE in a higher dimensional problem, where we still know the true structure, we randomly generated FMDPs with high-dimensional states in which each variable takes values in a small finite domain. For each state variable, the number of parents was uniformly selected from 1 to 4 and the parents were also chosen randomly. Afterwards, the conditional probability tables were filled in uniformly at random and normalized to ensure they specified proper probability distributions. The FMDP was given a sparse reward function that returned a positive reward if and only if the last bit in the state-vector was set, and returned zero otherwise. We used a fixed horizon $H$. The behavior policy selected actions uniformly at random, while the target policy was derived by running SARSA (Sutton1998) with linear value function approximation on the FMDP for 5,000 episodes with a fixed learning rate, discount factor, and epsilon-greedy exploration parameter. After training SARSA, we extracted a stationary target policy. As in the Taxi domain, we modified the policy returned by SARSA to ensure that every action could be selected in every state with some small nonzero probability.
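
The random FMDP construction described above can be sketched as follows (illustrative parameter names; the actual dimensions and domain sizes used in the experiment are not reproduced here):

import numpy as np

def random_fmdp(n_vars, domain_size, n_actions, max_parents=4, seed=0):
    """Randomly generate a factored transition model: each next-state variable
    gets 1 to max_parents random parents and a random, normalized conditional table."""
    rng = np.random.default_rng(seed)
    parents, cond_tables = [], []
    for _ in range(n_vars):
        k = int(rng.integers(1, max_parents + 1))
        pa = sorted(rng.choice(n_vars, size=k, replace=False).tolist())
        parents.append(pa)
        # one probability vector over the variable's domain per (parent realization, action)
        table = rng.random((domain_size ** k, n_actions, domain_size))
        table /= table.sum(axis=-1, keepdims=True)   # normalize rows to sum to one
        cond_tables.append(table)
    return parents, cond_tables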

For the randomly generated FMDPs, we could not construct a flat model because the number of states is exponential in the number of variables, and the number of parameters in a flat model scales quadratically with the size of the state-space. However, we could still compare MFMC, CIS, KS, and G-SCOPE.

Figure 4: Random FMDP domain: average evaluation error (± std. deviation) on a log-scale for MFMC, KS, and G-SCOPE for two batch sizes of trajectories. G-SCOPE has slightly worse performance than Known Structure, but G-SCOPE achieves significantly lower evaluation error than MFMC.

Figure 4 presents the normalized evaluation error (on a log-scale) for MFMC, CIS, KS, and G-SCOPE given two different numbers of trajectories from the behavior policy. Averages and standard deviations are estimated over 10 independent trials.

MFMC fails because, in this high-dimensional task, there is not enough data to construct artificial trajectories for the target policy. CIS fares only slightly better than MFMC because it uses all of the trajectory data. Unfortunately, most of the trajectories generated by the behavior policy are not probable under the target policy, so its evaluation of the target policy is pessimistic. G-SCOPE has slightly worse performance than KS, but G-SCOPE achieves significantly lower evaluation error than MFMC and CIS.

5.3 Space Invaders

In the Space Invaders (SI) domain, using the Arcade Learning Environment (Bellemare2013), not only do we not know the parent structure, we also cannot verify that the factored dynamics assumption (2) even holds. Thus, SI presents a challenging benchmark for off-policy evaluation. We used the console RAM, represented as a vector of bits, as the state vector. We set the horizon large enough that the behavior policy would experience a diverse set of states.

As in the previous experiment, the behavior policy selected actions uniformly at random, while the target policy was derived by running SARSA (Sutton1998) with linear value function approximation with a fixed learning rate, discount factor, and epsilon-greedy exploration parameter. We only trained SARSA for 500 episodes because of the time required to sample an episode. After training, we extracted a stationary target policy and ensured all actions could be selected in all states with some small nonzero probability.

Figure 5: Space Invaders domain: average evaluation error (± std. deviation) for MFMC, CIS, and G-SCOPE for two batch sizes of trajectories. G-SCOPE achieves significantly lower evaluation error than MFMC and CIS.

Figure 5 shows the normalized evaluation error for MFMC, CIS, and G-SCOPE given two different numbers of trajectories from the behavior policy. Averages and standard deviations are estimated over 5 independent trials. Again, the evaluation error of G-SCOPE is much smaller than that of MFMC and CIS. In fact, MFMC and CIS perform no better than a strategy that always predicts the same fixed value for the target policy. The poor performance of MFMC is due to the impossibility of constructing artificial trajectories from samples in such a high dimensional space.

6 Discussion

We presented a finite sample analysis of G-SCOPE that shows how the number of samples can be related to the evaluation error. When the assumptions of Section 4 hold, the sample complexity scales logarithmically with the number of states.

Our experiments show that (1) model-based off-policy evaluation algorithms are more sample efficient than model-free methods, (2) exploiting structure can dramatically improve sample efficiency, and (3) G-SCOPE often provides a good evaluation of the target policy despite using a greedy structure learning approach. Thus, G-SCOPE provides a practical solution for evaluating new policies. Our empirical evaluation on large and small FMDPs shows that our approach outperforms existing methods, which exploit only the raw trajectories.

We analyzed G-SCOPE under three assumptions restricting the class of FMDPs that can be considered. These three assumptions imply that (1) including a weak parent will not make any other weak parent (significantly) more informative than it was before, (2) strong parents are more relevant than non-parents, and (3) conditioned on the strong parents, non-parents are non-informative. We believe that many real-world problems approximately satisfy these assumptions. If the problem under consideration does not satisfy them, then learning algorithms with computational complexity combinatorial in the number of state variables must be considered to correctly identify the true parents (chakraborty2011structure).

To the best of our knowledge, this is the first model-based algorithm and analysis for off-policy evaluation in FMDPs. Moreover, G-SCOPE is a tractable algorithm for learning the structure of an FMDP even when no prior knowledge is given about the order in which variables should be considered. That being said, we hope that demonstrating the effectiveness of structure learning for off-policy evaluation will encourage the adaptation of existing algorithms for learning the structure of FMDPs, and more generally of dynamic Bayesian networks, to off-policy evaluation.

References

Appendix A List of Notations

Notation Meaning
$A$ : Action space
$H$ : Time horizon
$t$ : Time index
$N$ : Number of trajectories in batch data
$n$ : Number of factors in each state
$[n]$ : The set $\{1, \dots, n\}$
$D$ : Domain of each factor in a state and (dual notation) the number of possible values for the factor
$M$ : Markov Decision Process
$\beta$ : Distribution of the first state in the MDP
$X$ : Input variable (represents the previous state)
$Y$ : Output variable (represents the next state)
$Y^{(i)}$ : The $i$'th variable in the output
$Z$ : A subset of indices
$X^{(Z)}$ : The subset of variables corresponding to $Z$
$\mathrm{Pa}_i$ : Indices of the parents of variable $i$
$\mathrm{PA}_i$ : The set of all realization-action pairs for the parents of node $i$
$\widehat{\mathrm{Pa}}_i$ : Indices found by G-SCOPE for variable $i$
$N(v, a)$ : Number of observations in the data fitting the instance $(v, a)$
$K$ : The set of realization-action pairs observed more than $N_1$ times each
$\rho$ : A value signifying the policy mismatch (bigger means higher mismatch)

Appendix B Proof of Main Theorem & Supporting Lemmas

The proof of Theorem 1 is broken down into four parts.

B.1 The Simulation Lemma

In this subsection, we derive a simulation lemma for MDPs, which essentially says that, for a fixed policy, two MDPs with similar transition probability distributions will result in similar value functions. Our simulation lemma differs from other simulation lemmas (e.g., Kearns2002a; Kakade2003) in that we only need the guarantee to hold for the target policy. To formalize what we mean by "similar" MDPs, we introduce the following definitions and assumption.

Definition 3.

Let $M$ be an MDP and $\Omega \subseteq X \times A$. $M$ and $\Omega$ define an induced MDP $M_{\Omega}$, where

and

Definition 4.

Let $\epsilon > 0$, let $M$ be an MDP, and let $\Omega \subseteq X \times A$. An $\epsilon$-induced MDP $\hat{M}$ with respect to $M$ and $\Omega$ satisfies $\bigl\|\hat{P}(\cdot \mid x, a) - P(\cdot \mid x, a)\bigr\|_1 \le \epsilon$ for every $(x, a) \in \Omega$.

Assumption 4.

A4: Let $\epsilon > 0$, $\delta > 0$, $\pi$ be a policy, and $\Omega \subseteq X \times A$. There exists an $\epsilon$-induced MDP $\hat{M}$ with respect to $M$ and the subset of the state-action space $\Omega$, such that the probability of encountering a state-action pair that is not in $\Omega$ while following $\pi$ in $M$ is small:

(11)
Lemma 1.

(Simulation Lemma) Suppose Assumption 4 holds. Then

(12)

where and .

Proof.
The value difference is first split using the triangle inequality, and the resulting terms are bounded using (11).

We represent by $P^{\pi}$ and $R^{\pi}$ the transition matrix and reward vector induced by the policy $\pi$. For any matrix $B$, we denote by $\|B\|_{\infty}$ the $\infty$-induced matrix norm $\max_i \sum_j |B_{ij}|$. The difference between the transition matrices induced by $\pi$ in the two MDPs is then bounded using the norm definition, a decomposition over the policy, the triangle inequality, and Definition 4.

In addition, we use the following result (page 254 in bhatia1997matrix): for any two matrices $A$, $B$ and any induced norm,

$$\|A^k - B^k\| \le k\, M^{k-1} \|A - B\|, \qquad (13)$$

where $M = \max(\|A\|, \|B\|)$. Since $P^{\pi}$ and $\hat{P}^{\pi}$ are stochastic, this inequality holds for the $\infty$-induced norm with $M = 1$. The value difference is then bounded by writing the value as a sum of rewards over the $H$ steps, applying the Hölder inequality and the submultiplicativity of the norm, the triangle inequality together with the bounded rewards, Equation 13 for each summand, and Definition 4 as above.

Therefore, we can combine the results to obtain:

(14)

B.2 Bounding the $L_1$-error in Estimates of the Transition Probabilities

In this subsection, we consider the number of samples needed to estimate the transition probabilities of various realization-action pairs. The samples we receive are from trajectories. Each trajectory is independent. Unfortunately, samples observed at one timestep may depend on samples observed at previous timesteps, so the samples within a trajectory may not be independent. Therefore, we cannot apply the Weissman inequality (Weissman2003), which requires the samples to be independent and identically distributed. Instead, we derive a bound based on a martingale argument.

Definition 5.

A sequence of random variables $Z_0, Z_1, Z_2, \dots$ is a martingale provided that for all $t \ge 1$ we have

$$\mathbb{E}\left[\,|Z_t|\,\right] < \infty, \qquad (15)$$
$$\mathbb{E}\left[Z_t \mid Z_0, \dots, Z_{t-1}\right] = Z_{t-1}. \qquad (16)$$
Theorem 2.

(Azuma's inequality) Let $\epsilon > 0$ and let $Z_0, Z_1, \dots, Z_m$ be a martingale such that $|Z_t - Z_{t-1}| \le c_t$ for $t = 1, \dots, m$. Then

$$\Pr\left(\,|Z_m - Z_0| \ge \epsilon\,\right) \le 2 \exp\!\left(-\frac{\epsilon^2}{2 \sum_{t=1}^{m} c_t^2}\right). \qquad (17)$$
Definition 6.

Let $W_1, \dots, W_m$ be any set of random variables with support in $\mathcal{W}$ and let $f$ be a function of $W_1, \dots, W_m$. A Doob martingale is the sequence $Z_t = \mathbb{E}\left[f(W_1, \dots, W_m) \mid W_1, \dots, W_t\right]$ for $t = 0, 1, \dots, m$.

Lemma 2.

Let $\epsilon > 0$, let $\mathcal{W}$ be a finite set, let $W_1, \dots, W_m$ be a collection of random variables with support in $\mathcal{W}$ generated by an unknown process, and let $p_t(w) = \Pr\left(W_t = w \mid W_1, \dots, W_{t-1}\right)$ for all $t$ and $w$. We denote by $\hat{p}(w) = \frac{1}{m} \sum_{t=1}^{m} \mathbf{1}\{W_t = w\}$ the empirical frequency and by $\bar{p}(w) = \frac{1}{m} \sum_{t=1}^{m} p_t(w)$ the average conditional probability for all $w$. Then

$$\Pr\left(\,|\hat{p}(w) - \bar{p}(w)| \ge \epsilon\,\right) \le 2 \exp\!\left(-\frac{m \epsilon^2}{2}\right) \qquad (18)$$

for all $w$, and

$$\Pr\left(\,\|\hat{p} - \bar{p}\|_1 \ge \epsilon\,\right) \le 2 |\mathcal{W}| \exp\!\left(-\frac{m \epsilon^2}{2 |\mathcal{W}|^2}\right), \qquad (19)$$

where $\|\hat{p} - \bar{p}\|_1 = \sum_{w \in \mathcal{W}} |\hat{p}(w) - \bar{p}(w)|$.

Proof.

First, notice that and define a Doob martingale such that for . By applying Azuma’s inequality, we obtain

which proves (18).

Now the union bound gives

which proves (19). ∎

Lemma 3.

Let $\epsilon > 0$ and $\delta > 0$. If there are

samples of the realization-action pair $(v, a)$, obtained from independent trajectories of $M$, then

(20)

with probability at least $1 - \delta$.