1 Introduction
Reinforcement Learning (RL) algorithms learn to maximize rewards by analyzing past experience with an unknown environment. Most RL algorithms assume that they can choose which actions to explore in order to learn quickly. However, this assumption leaves RL algorithms incompatible with many real-world business applications.
To understand why, consider the problem of online advertising: each customer is successively presented with one of several advertisements. The advertiser's goal is to maximize the probability that a user will click on an ad. This probability is called the Click-Through Rate (CTR; richardson2007predicting). A marketing strategy, called a policy, chooses which ads to display to each customer. However, testing new policies could lose money for the company. Therefore, management would not allow a new policy to be tested unless there is strong evidence that the policy is not worse than the company's existing policy. In other words, we would like to estimate the CTR of other strategies using only data obtained from the company's existing policy. In general, the problem of determining a policy's value from data generated by another policy is called off-policy evaluation, where the policy that generates the data is called the behavior policy, and the policy we are trying to evaluate is called the target policy. This problem may be the primary reason batch RL algorithms are hardly used in applications, despite the maturity of the field.
A simple approach to off-policy evaluation is given by the MFMC algorithm (Fonteneau2010), which constructs complete trajectories for the target policy by concatenating partial trajectories generated by the behavior policy. However, this approach may require a large number of samples to construct complete trajectories. One may think that the number of samples is of little importance, since Internet technology companies have access to millions or billions of transactions. Unfortunately, the dimensionality of real-world problems is generally large (e.g., thousands or millions of dimensions) and the events they want to predict can have extremely small probabilities of occurring. Thus, sample-efficient off-policy evaluation is paramount.
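To make the trajectory-stitching idea concrete, here is a minimal Python sketch of an MFMC-style estimator. It illustrates only the general idea, not Fonteneau2010's exact algorithm: the dataset format, the nearest-neighbor selection via a user-supplied distance, and all names are our own simplifications.

```python
def mfmc_value(transitions, policy, start_state, horizon, dist):
    """Estimate the target policy's return by stitching together observed
    one-step transitions (a simplified MFMC-style sketch).

    transitions: list of (state, action, reward, next_state) tuples.
    """
    state, total = start_state, 0.0
    used = set()
    for _ in range(horizon):
        a = policy(state)
        # Among unused transitions taken with action a, pick the one whose
        # start state is closest to the current simulated state.
        candidates = [(i, t) for i, t in enumerate(transitions)
                      if i not in used and t[1] == a]
        if not candidates:
            break
        i, (_, _, r, s_next) = min(candidates, key=lambda c: dist(c[1][0], state))
        used.add(i)
        total += r
        state = s_next
    return total
```

Note how the estimator must find, at every step, a transition near the simulated state; in high dimensions this is exactly where the method becomes data hungry.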
An alternative way of looking at the problem is through counterfactual (CF) analysis (Bottou2013). Given the outcome of an experiment, CF analysis is a framework for reasoning about what would have happened if some aspect of the experiment had been different. In this paper, we focus on the question: what would have been the expected reward received for executing the target policy rather than the behavior policy? One approach that falls naturally into the CF framework is Importance Sampling (IS) (Bottou2013; Li2014). IS methods evaluate the target policy by weighting rewards received by the behavior policy. The weights are determined by the probability that the target policy would perform the same action as the one prescribed by the behavior policy. Unfortunately, IS methods suffer from high variance and typically assume that the behavior policy visits every state that the target policy visits with nonzero probability.
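The weighting scheme just described can be sketched in a few lines of Python. This is the plain (unclipped) per-trajectory importance sampling estimator; the policy interfaces and names are illustrative assumptions, and the clipped variant of Bottou2013 would additionally cap the weight.

```python
def importance_sampling_value(trajectories, pi_b, pi_t):
    """Per-trajectory importance sampling: reweight each observed return
    by the likelihood ratio of the target vs. behavior policy.

    trajectories: list of trajectories, each a list of (state, action, reward).
    pi_b(a, s), pi_t(a, s): action probabilities under each policy.
    """
    total = 0.0
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for s, a, r in traj:
            weight *= pi_t(a, s) / pi_b(a, s)  # requires pi_b(a, s) > 0
            ret += r
        total += weight * ret
    return total / len(trajectories)
```

The product of per-step ratios is the source of the high variance mentioned above: a long trajectory multiplies many ratios, so a few trajectories can dominate the estimate.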
Even if this assumption holds, IS methods cannot exploit structure in the environment because their estimators do not build a compact model of the environment. Exploiting this structure could drastically improve the quality of off-policy evaluation with small sample sizes (relative to the dimension of the state space). Indeed, there is broad empirical support that model-based methods are more sample efficient than model-free methods (Hester2009; Jong2007). One broad class of compact models is Factored-state Markov Decision Processes (FMDPs; kearns1999efficient; Strehl2007; chakraborty2011structure). An FMDP model can often be learned with a number of samples logarithmic in the total number of states, if the structure is known. Unfortunately, inferring the structure of an FMDP is generally computationally intractable for FMDPs with high-dimensional state spaces (chakraborty2011structure), and in real-world problems the structure is rarely known in advance.
Ideally, we would like to apply model-based methods to off-policy evaluation because they are generally more sample efficient than model-free methods such as MFMC and IS. In addition, we want to use algorithms that are computationally tractable. To this end, we introduce GSCOPE, which learns the structure of an FMDP greedily. GSCOPE is both sample efficient and computationally scalable. Although GSCOPE does not always learn the true structure, we provide a theoretical analysis relating the number of samples to the error in evaluating the target policy. Furthermore, our experimental analysis demonstrates that GSCOPE is significantly more sample efficient than model-free methods.
The main contributions of this paper are:

a novel, scalable method for off-policy evaluation that exploits unknown structure,

a finite-sample analysis of this method, and

a demonstration through experiments that this approach is sample efficient.
The paper is organized as follows. In Section 2, we describe the problem setting and notation. Section 3 elaborates on our greedy structure learning algorithm. Our main theorem and its analysis are given in Section 4. Section 5 presents experiments. In Section 6, we discuss limitations of GSCOPE and future research directions.
2 Background
We consider dynamics that can be represented by a Markov Decision Process (MDP; puterman2009markov):
Definition 1.
A Markov Decision Process (MDP) is a tuple ⟨S, A, P, R, β⟩, where S is the state space, A is the action space, P represents the transition probabilities from every state-action pair to another state, R represents the reward function fitting each state-action pair with a random real number, and β is a distribution over the initial state of the process.
We denote by π a Markov policy that maps states to a distribution over actions. The process horizon is H, and applying a policy π for H steps starting from a state s results in a cumulative reward known as the value function V^π(s) = E[∑_{t=1}^{H} r_t | s_1 = s], where the expectation is taken with respect to the transition probabilities P and the policy π. We assume H is known and immediate rewards are bounded.
The system dynamics are as follows: first, an initial state s_1 is sampled from β. Then, for each time step t = 1, …, H, an action a_t is sampled according to the policy π(· | s_t), a reward r_t is awarded according to R(s_t, a_t), and the next state s_{t+1} is sampled from P(· | s_t, a_t). The quantity of interest is the expected policy value V^π = E_{s_1 ∼ β}[V^π(s_1)].
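As a concrete illustration, the generative process above can be written as a short Python rollout routine, with β, P, R, and the policy passed in as callables. This calling convention is our own assumption, not notation from the paper.

```python
import random

def sample_trajectory(beta, P, R, policy, horizon, rng=random):
    """Roll out one trajectory: s_1 ~ beta, a_t ~ policy(s_t),
    r_t = R(s_t, a_t), s_{t+1} ~ P(s_t, a_t)."""
    s = beta(rng)
    rewards = []
    for _ in range(horizon):
        a = policy(s, rng)
        rewards.append(R(s, a))
        s = P(s, a, rng)
    return rewards

def monte_carlo_value(beta, P, R, policy, horizon, n_rollouts, rng=random):
    """Average return over independent rollouts: an estimate of the
    expected policy value."""
    return sum(sum(sample_trajectory(beta, P, R, policy, horizon, rng))
               for _ in range(n_rollouts)) / n_rollouts
```

Off-policy evaluation replaces the true P and R in such a rollout with a model estimated from the behavior policy's data.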
2.1 OffPolicy Evaluation
We consider the finite-horizon batch setup. Given are N trajectories of length H sampled from an MDP with an initial state distribution β and behavior policy π_b. The off-policy evaluation problem is to estimate the H-step value of a target policy π_t (different from π_b). For the target policy π_t, we aim to minimize the difference between the true and estimated policy value:
| V^{π_t} − V̂^{π_t} |. (1)
2.2 Factored MDPs
Suppose each state can be decomposed into n discrete variables. We denote the i'th variable of a state s by s(i), and for a given subset of indices Z ⊆ {1, …, n}, let s(Z) be the subset of corresponding variables (s(i))_{i ∈ Z}. We define a factored MDP, similar to guestrin2003efficient:
Definition 2.
A Factored MDP (FMDP) is an MDP such that each state is composed of a set of variables s = (s(1), …, s(n)), where each variable can take values from a finite domain, such that the probability of the next state s' given that action a is performed in state s satisfies
P(s' | s, a) = ∏_{i=1}^{n} P(s'(i) | s, a). (2)
For simplicity, we assume that all variables lie in the same domain D, i.e., s(i) ∈ D for every i, where D is a finite set. Furthermore, each variable s'(i) in the next state depends only on a subset of variables s(Pa_i), where Pa_i ⊆ {1, …, n}. The indices in Pa_i are called the parents of i. When the parent sets are smaller than n, the FMDP can be represented more compactly:
P(s' | s, a) = ∏_{i=1}^{n} P(s'(i) | s(Pa_i), a). (3)
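A hypothetical Python encoding of the factored transition probability in Equation (3) might look as follows, with `parents[i]` playing the role of the parent index set and `cond_prob[i]` the conditional probability table of output variable i; both names and the table format are our own illustrative choices.

```python
def factored_transition_prob(next_state, state, action, parents, cond_prob):
    """P(s' | s, a) = prod_i P(s'(i) | s(Pa_i), a) for an FMDP.

    parents[i]: tuple of parent indices of output variable i.
    cond_prob[i]: dict mapping (parent_values, action, next_value) -> prob.
    """
    p = 1.0
    for i, next_val in enumerate(next_state):
        parent_vals = tuple(state[j] for j in parents[i])
        p *= cond_prob[i][(parent_vals, action, next_val)]
    return p
```

The compactness claim is visible here: each table is indexed only by the parent values, not by the full state, so its size is exponential in |Pa_i| rather than in n.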
Before delving into the algorithm and the analysis, we provide some notation. For a subset of indices Z, a realization-action pair is a specific instantiation of values for the corresponding variables s(Z) together with an action. We denote by Γ_i the set of all realization-action pairs for the parents of node i.
The following quantities are used in the algorithm and the subsequent analysis. Denote by Z a subset of indices and by s(Z) a realization of the corresponding variables:
(4)
where the probabilities in the right term of the first equation are conditioned on the behavior policy, omitted for brevity. Note that if Z contains the parents of variable i, then the conditional distribution no longer depends on the remaining state variables, and the policy dependency cancels out.
2.3 Previous Work
Previous work on FMDPs focuses on finding the optimal policy. Early works assumed the dependency structure is known (guestrin2002algorithm; kearns1999efficient). degris2006learning proposed a general framework for iteratively learning the dependency structure (our work falls within this framework), yet no theoretical results were presented for their approach. SLF-Rmax (Strehl2007), Met-Rmax (diuk2009adaptive) and LSE-Rmax (chakraborty2011structure) are algorithms for learning the complete structure; only the first two require the in-degree of the DBN structure as input. The sample complexity of these algorithms is exponential in the number of parents. Finally, learning the structure of DBNs without an associated reward is in itself an active research topic (Friedman:1998aa; Trabelsi:2013aa).
There has also been increasing interest in the RL community in off-policy evaluation. Works focusing on model-based approaches mainly provide bounds on the value function estimation error. For example, the simulation lemma (Kearns2002a) can be used to provide sample complexity bounds on such errors. Model-free approaches, on the other hand, suggest estimators while trying to reduce the bias. precup2000eligibility presents several methods based on applying importance sampling to eligibility traces, along with an empirical comparison; Theocharous2015offPolicyConfidence analyzed bounds on the estimation error for this method. A different approach was suggested by Fonteneau2010: evaluate the policy by generating artificial trajectories, i.e., concatenations of one-step transitions from observed trajectories. The main problem of these approaches, besides the computational cost, is that a substantial amount of data is required to generate reasonable artificial trajectories.
3 Algorithm
In general, inferring the structure of an FMDP is exponential in the number of state variables (Strehl2007). Instead, we propose a naive greedy algorithm which, under some assumptions, can be shown to provide a small estimation error on the transition function (GSCOPE; Algorithm 1).
GSCOPE (Greedy Structure learning of faCtored MDPs for Off-Policy Evaluation) receives offline batch data, two confidence parameters, and a minimum acceptable score. The outputs are the estimated parents of each variable i. In the inner loop, the algorithm forms the set of all realization-action pairs that have been observed a minimum number of times; these are the only pairs considered further. We then greedily add to the estimated parent set of the i'th variable the variable that maximizes the difference between the old distribution, which depends only on the previously selected parents, and the distribution conditioned on the additional variable as well. Parents are no longer added when that difference is small, or when not all possible realizations were observed sufficiently often. A naive implementation sweeps the data for every input and output variable, which determines its computational complexity.
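For illustration, the core greedy loop can be sketched as follows. This simplification keeps only the essential mechanism, measuring how much the empirical conditional distribution of a target variable moves when one more variable joins the conditioning set, and omits GSCOPE's visit-count thresholds and confidence parameters; all names are ours.

```python
from collections import Counter, defaultdict

def l1_gain(samples, i, chosen, candidate):
    """Largest L1 change in the empirical conditional distribution of
    output variable i when `candidate` joins the conditioning set `chosen`.

    samples: list of (state, action, next_state) triples.
    """
    def cond_dist(cond_idx):
        counts = defaultdict(Counter)
        for s, a, s2 in samples:
            counts[(tuple(s[j] for j in cond_idx), a)][s2[i]] += 1
        return counts
    old, new = cond_dist(chosen), cond_dist(chosen + [candidate])
    gain = 0.0
    for (vals, a), cnt in new.items():
        old_cnt = old[(vals[:len(chosen)], a)]
        n_new, n_old = sum(cnt.values()), sum(old_cnt.values())
        if n_old == 0:
            continue
        d = sum(abs(cnt[v] / n_new - old_cnt[v] / n_old)
                for v in set(cnt) | set(old_cnt))
        gain = max(gain, d)
    return gain

def greedy_parents(samples, i, n_vars, threshold):
    """Greedy parent selection: repeatedly add the variable whose inclusion
    moves the conditional distribution of variable i the most; stop when no
    variable improves it by more than `threshold`."""
    chosen = []
    while len(chosen) < n_vars:
        best, best_gain = None, threshold
        for j in range(n_vars):
            if j in chosen:
                continue
            g = l1_gain(samples, i, chosen, j)
            if g > best_gain:
                best, best_gain = j, g
        if best is None:
            break
        chosen.append(best)
    return chosen
```

On data where the target variable copies a single input variable, this sketch selects exactly that variable and then stops, because no remaining variable changes the conditional distribution.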
The main idea behind GSCOPE is that having enough samples will result in an adequate estimate of the conditional probabilities. Then, under appropriate regularity assumptions (stated in Section 4), adding a non-parent variable is unlikely. If parents have a strong effect on the distance and non-parents have a weak effect, the procedure will most likely return only parents. When all prominent parents have been found, or when there is not enough data for further inference, the algorithm stops. Once the set of assumed parents is available, we can build an estimated model and simulate any policy.
An important property of the GSCOPE algorithm is that it does not necessarily find the actual parents. Instead, we settle for finding a subset of variables providing probably approximately correct transition probabilities. As a result, the number of considered parents scales with the available data, a desirable quality linking the model and sample complexity. Since we do not necessarily detect all parents, non-parents can have a nonzero influence on the target variable after all prominent parents have been detected. To avoid including these non-parents, the threshold to add a parent is set above zero by some precision parameters. In practice, we use a lower threshold, because including non-parents with an indirect influence on the target variable may improve the quality of the model. However, in our analysis, we present assumptions under which the true parents can be learned and explain the choice of threshold.
Finally, GSCOPE can be modified to encode and construct the conditional probability distributions using decision trees. A different decision tree is constructed for each action and each variable in the next state. Tree-based models can produce more compact representations than encoding the full conditional probability tables. While we analyze GSCOPE as an algorithm that separates structure learning from estimating the conditional probability tables, for simplicity and clarity, our experiments actually use a decision-tree-based algorithm. The modifications to the analysis for the tree-based algorithm would add unnecessary complexity and distract from the key points of the analysis.
4 Analysis
By using a scalable but greedy approach to structure learning rather than a combinatorially exhaustive one, GSCOPE can learn only a subclass of models arbitrarily well. In this section, we introduce three assumptions on the FMDP that describe this subclass, and then analyze the policy evaluation error for this subclass.
We divide the parents of each variable into non-overlapping "weak" and "strong" subsets. These subsets will be defined formally later, but intuitively, "strong" parents have a large influence on the target variable and are easy to detect, while "weak" parents have a smaller influence that may fall below the empirical noise threshold and hence not be detected. Our assumptions state that (1) "strong" parents are sufficiently better than non-parents to be detected by GSCOPE before non-parents; (2) conditionally on "strong" parents, non-parents have too little influence on the target variable to be accepted by GSCOPE; and (3) conditioning on some "weak" parents does not increase the influence of other "weak" parents. The first two assumptions are used to bound the probability that GSCOPE adds non-parents or misses some strong parents; the last one bounds the error caused by the potential non-detection of weak parents.
Assumption 1.
Strong parent superiority. For every variable, there exists a "strong" subset of its parents such that, for any conditioning set that does not yet contain all of them, at least one strong parent changes the conditional distribution of the target more than any non-parent, by some positive margin:
(5) 
Assumption 1 ensures that, in terms of influence on the conditional distribution of the target, GSCOPE finds at least one "strong" parent variable more attractive than any non-parent variable. This prevents extreme cases where, due to large correlations between parent and non-parent factors, large numbers of non-parents could be added before finding the actual parents, considerably increasing the sample complexity. The margin in (5) quantifies how much more information a true parent provides than non-parents: the larger it is, the less likely GSCOPE is to add a non-parent.
Figure 1 illustrates a subset of the state variables and corresponding conditional transition probability distributions of an FMDP that, for the action implicitly considered, does not satisfy Assumption 1. In this setting, GSCOPE would add a non-parent to the estimated parent set before any true parent of the target variable. Note that in this particular case it does not matter, as the non-parent perfectly determines the target variable. However, adding noise to the transition probabilities would make the non-parent less accurate than the two true parents together.
Assumption 2.
Non-parent conditional weakness. For every variable, conditioned on the "strong" parent subset of Assumption 1, every non-parent has influence bounded by some small constant:
(6)
Assumption 2 ensures that, after GSCOPE has detected all strong parents, non-parents have a low influence on the target variable, and therefore GSCOPE has a low probability of adding them to the estimated parent set.
Assumption 3.
If conditioning on a variable provides more knowledge about the output distribution than conditioning on another variable, then it will also provide more knowledge when combined with variables already conditioned on. In simple words, Assumption 3 means that the information inferred from variables is monotonic, so influential parents cannot go undetected. This assumption supports our greedy scheme, but there are trivial cases where it does not hold.
Consider the substructure represented in Figure 2: even though the depicted variables are together very informative about the target variable, any single one of them is not. In such a situation, useful variables cannot be detected by a greedy scheme. Assumption 3 rules out this problem.
These assumptions form the core hardness of the structure learning problem. On one side, there may be implicit dependencies between variables induced by the dynamics, making it hard to separate out non-parents. On the other side, the conditional probabilities may belong to a family of XOR-like functions, initially hiding attractive true parents. Finally, while these assumptions are crucial for a proper analysis, non-parent variables may have a beneficial effect on the actual evaluation error, as they still contain information on the true parents' values, and subsequently information on the output variable.
Theorem 1.
The proof of Theorem 1 is divided into four parts, detailed in the supplementary material. First, we derive a simulation lemma for MDPs stating that, for the target policy, two MDPs with similar transition probability distributions have proximate value functions. We then consider the number of samples needed to estimate the transition probabilities of various realization-action pairs. Samples within a trajectory may not be independent, so we derive a bound based on Azuma's inequality for martingales. Subsequently, we consider the number of trajectories needed to derive a model that evaluates the target policy accurately. If the behavior policy sufficiently often visits the parent realizations that the target policy is likely to visit, then the number of trajectories can be small. On the other hand, if the behavior policy never visits parent realizations that the target policy visits, then the number of trajectories may be infinite. This is captured by the policy-mismatch values discussed below. Finally, we bound the error due to greedy parent selection under Assumptions 1, 2 and 3.
The evaluation error bound depends on the horizon, on the number of variables, on the error bound on most transition probability values of the FMDP constructed by GSCOPE, and on the probability that a trajectory will not visit a state with badly estimated probability values. The mild dependency on the number of variables is the first advantage of the factorization. The constants from Assumptions 1, 2 and 3 indicate the effect of the model "hardness" on the bound. When the sample size is large enough, the true structure can be learned greedily and the error can be driven arbitrarily close to zero. In other cases, GSCOPE may learn the wrong structure, resulting in some approximation error.
Next, observe the probability with which the bounds in Theorem 1 hold. A multiplicative term over realization-action pairs is unavoidable, since the estimation error on the transition probability must be bounded for each parent-realization and action pair. The main advantage of this theorem is the lack of a term multiplicative in the full state space, which means the effective state space has decreased exponentially. One factor is due to the number of iterations of GSCOPE where a parent is added, and another is due to bounds on non-parents that must be valid for all these iterations.
The policy-mismatch values characterize the mismatch between the behavior policy and the target policy. If the behavior policy visits all of the parent-action realizations that the target policy visits with sufficiently high probability, then these values will be small. But if the target policy visits parent-action realizations that are never visited by the behavior policy, then the values may be infinite. These values are similar to the importance sampling weights used by some model-free off-policy algorithms. However, unlike model-free approaches that depend on the differences in the state visitation distributions of the behavior policy and the target policy, these values depend on the differences in the parent realization visitation distributions between the behavior policy and the target policy. This is more flexible because the values can be small even when the behavior policy and the target policy visit different regions of the state space.
5 Experiments
We compared GSCOPE to other off-policy evaluation algorithms in the Taxi domain (Dietterich1998), randomly generated FMDPs, and the Space Invaders domain (Bellemare2013). Since the domains in our experiments have different reward scales, we normalized the errors to make them comparable. In all experiments, the behavior policy differs from the target policy. Furthermore, evaluation error always refers to the target policy's evaluation error, and all trajectory data is generated by the behavior policy.
We compare GSCOPE to the following algorithms:

Model-Free Monte Carlo (MFMC, Fonteneau2010): a model-free off-policy evaluation algorithm that constructs artificial trajectories for the target policy by concatenating partial trajectories generated by the behavior policy,

Clipped Importance Sampling (CIS, Bottou2013): a model-free importance sampling algorithm that uses a heuristic approach to clip extremely large importance sampling ratios,

Flat: a flat model-based approach that assumes no factored structure and simply builds an empirical next-state distribution for each state-action pair, and

Known Structure (KS): a model-based method that is given the true parents, but still needs to estimate the conditional probability tables from data generated by the behavior policy. KS should outperform GSCOPE, because KS knows the structure. We introduce KS to differentiate the evaluation error due to insufficient samples from the evaluation error due to GSCOPE selecting the wrong parent variables.
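As an illustration of how simple the Flat baseline is, here is a possible count-based implementation sketch; the data format and all names are our own assumptions, not the authors' code.

```python
import random
from collections import Counter, defaultdict

def fit_flat_model(transitions):
    """Empirical next-state distribution and mean reward per state-action
    pair: the 'Flat' baseline, with no factored structure assumed.

    transitions: list of (state, action, reward, next_state) tuples.
    Returns a step(s, a, rng) function that simulates one transition.
    """
    next_counts = defaultdict(Counter)
    reward_sums, visits = defaultdict(float), Counter()
    for s, a, r, s2 in transitions:
        next_counts[(s, a)][s2] += 1
        reward_sums[(s, a)] += r
        visits[(s, a)] += 1

    def step(s, a, rng):
        states, counts = zip(*next_counts[(s, a)].items())
        s2 = rng.choices(states, weights=counts)[0]
        return reward_sums[(s, a)] / visits[(s, a)], s2

    return step
```

Because the tables are indexed by the full state, this estimator needs data for every reachable state-action pair, which is exactly why it cannot scale to the factored domains in Section 5.2.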
Our experimental results show that (1) modelbased offpolicy evaluation algorithms are more sample efficient than modelfree methods, (2) exploiting structure can dramatically improve sample efficiency, and (3) GSCOPE often provides a good evaluation of the target policy despite its greedy structure learning approach.
5.1 Taxi Domain
The objective in the Taxi domain (Dietterich1998) is for the agent to pick up a passenger from one location and drop the passenger off at a destination. The state can be described by four variables. We selected the initial state uniformly at random and used a fixed horizon. The behavior policy selected actions uniformly at random, while the target policy was derived by solving the Taxi domain with the Rmax algorithm (Brafman2002). We discovered that the deterministic policy returned by Rmax was problematic for CIS, because the probability of almost all trajectories generated by the behavior policy was 0 with respect to the target policy. To resolve this problem, we modified the policy returned by Rmax to ensure that every action is selected in every state with some small minimum probability.
The Taxi domain is a useful benchmark because we know the true structure and the total number of states is only 500. Thus, we can compare GSCOPE to KS and Flat.
Figure 3 presents the normalized evaluation error (on a log scale) for MFMC, CIS, Flat, KS, and GSCOPE over 2,000 trajectories generated by the behavior policy. Medians and quantiles are estimated over 40 independent trials. For intermediate and large numbers of trajectories, GSCOPE performs about as well as if the structure were given and achieves smaller error than the model-free algorithms (MFMC and CIS). Notice that MFMC, CIS, and Flat, which do not take advantage of the domain's structure, require a large number of trajectories before they achieve low evaluation error. Interestingly, the Flat (model-based) approach appears to be more sample efficient than MFMC, which is in line with observations that model-based RL is more efficient than model-free RL (Hester2009; Jong2007). KS and GSCOPE, on the other hand, achieve low evaluation error after just a few trajectories and have similar performance, except with very few trajectories, where GSCOPE can adapt the model complexity to the number of samples and therefore achieves a lower evaluation error than the algorithm that knows the structure. This provides one example where greedy structure learning is effective.
5.2 Randomly Generated Factored Domains
To test GSCOPE in a higher-dimensional problem where we still know the true structure, we randomly generated FMDPs with multi-variable states over a finite domain. For each state variable, the number of parents was uniformly selected from 1 to 4, and the parents were also chosen randomly. Afterwards, the conditional probability tables were filled in uniformly and normalized to ensure they specified proper probability distributions. The FMDP was given a sparse reward function that returned a nonzero reward if and only if the last bit in the state vector took a particular value, and zero otherwise. We used a fixed horizon. The behavior policy selected actions uniformly at random, while the target policy was derived by running SARSA (Sutton1998) with linear value function approximation on the FMDP for 5,000 episodes with a fixed learning rate, discount factor, and epsilon-greedy parameter. After training SARSA, we extracted a stationary target policy. As in the Taxi domain, we modified the policy returned by SARSA to ensure that every action could be selected in every state with some small minimum probability.
For the randomly generated FMDPs, we could not construct a flat model because the number of states is exponential in the number of variables and the number of parameters in a flat model scales quadratically with the size of the state space. However, we could still compare MFMC, CIS, KS, and GSCOPE.
Figure 4 presents the normalized evaluation error (on a log scale) for MFMC, CIS, KS, and GSCOPE given increasing numbers of trajectories from the behavior policy. Averages and standard deviations are estimated over 10 independent trials.
MFMC fails because in this high-dimensional task there is not enough data to construct artificial trajectories for the target policy. CIS fares only slightly better than MFMC because it uses all of the trajectory data; unfortunately, most of the trajectories generated by the behavior policy are improbable under the target policy, so its evaluation of the target policy is pessimistic. GSCOPE performs slightly worse than KS, but achieves significantly lower evaluation error than MFMC and CIS.
5.3 Space Invaders
In the Space Invaders (SI) domain from the Arcade Learning Environment (Bellemare2013), not only do we not know the parent structure, we also cannot verify that the factored dynamics assumption (Equation 2) even holds. Thus, SI presents a challenging benchmark for off-policy evaluation. We used the console RAM as the state vector. We set the horizon so that the behavior policy would experience a diverse set of states.
As in the previous experiment, the behavior policy selected actions uniformly at random, while the target policy was derived by running SARSA (Sutton1998) with linear value function approximation, using a fixed learning rate, discount factor, and epsilon-greedy parameter. We trained SARSA for only 500 episodes because of the time required to sample an episode. After training, we extracted a stationary target policy and ensured all actions could be selected in all states with some small minimum probability.
Figure 5 shows the normalized evaluation error for MFMC, CIS, and GSCOPE given increasing numbers of trajectories from the behavior policy. Averages and standard deviations are estimated over 5 independent trials. Again, the evaluation error of GSCOPE is much smaller than that of MFMC and CIS. In fact, MFMC and CIS perform no better than a strategy that always predicts a constant value for the target policy. The poor performance of MFMC is due to the impossibility of constructing artificial trajectories from samples in such a high-dimensional space.
6 Discussion
We presented a finite sample analysis of GSCOPE that shows how the number of samples can be related to the evaluation error. Under our assumptions, the sample complexity scales logarithmically with the number of states.
Our experiments show that (1) model-based off-policy evaluation algorithms are more sample efficient than model-free methods, (2) exploiting structure can dramatically improve sample efficiency, and (3) GSCOPE often provides a good evaluation of the target policy despite using a greedy structure learning approach. Thus, GSCOPE provides a practical solution for evaluating new policies. Our empirical evaluation on large and small FMDPs shows that our approach outperforms existing methods, which only exploit trajectories.
We analyzed GSCOPE under three assumptions restricting the class of FMDPs that can be considered. These three assumptions imply that (1) including a weak parent will not make any other weak parent (significantly) more informative than it was before, (2) strong parents are more relevant than non-parents, and (3) conditioned on the strong parents, non-parents are non-informative. We believe that many real-world problems approximately satisfy these assumptions. If the problem under consideration does not satisfy them, then learning algorithms of combinatorial computational complexity in the number of state variables must be considered to correctly identify the true parents (chakraborty2011structure).
To the best of our knowledge, this is the first model-based algorithm and analysis for off-policy evaluation in FMDPs. Moreover, GSCOPE is a tractable algorithm for learning the structure of an FMDP even if no prior knowledge is given about the order in which variables should be considered. That being said, we hope that demonstrating the effectiveness of structure learning for off-policy evaluation will encourage the adaptation of existing algorithms for learning the structure of FMDPs, and more generally of dynamic Bayesian networks, to off-policy evaluation.
References
Appendix A List of Notations
Notation: Meaning
A: Action space
H: Time horizon
t: Time index
N: Number of trajectories in batch data
n: Number of factors in each state
[n]: The set {1, …, n}
D: Domain of each factor in a state and (dual notation) the number of possible values for the factor
M: Markov Decision Process
β: Distribution of the first state in the MDP
s: Input variable (represents the previous state)
s': Output variable (represents the next state)
s'(i): The i'th variable in the output
Z: A subset of indices
s(Z): The subset of variables corresponding to Z
Pa_i: Indices of the parents of variable i
Γ_i: The set of all realization-action pairs for the parents of node i
P̂a_i: Indices found by GSCOPE for variable i
N(·): Number of observations in the data fitting the instance
Γ̂_i: The set of realization-action pairs observed sufficiently often
ρ: A value signifying policy mismatch (bigger means higher mismatch)
Appendix B Proof of Main Theorem & Supporting Lemmas
The proof of Theorem 1 is broken down into four parts.
B.1 The Simulation Lemma
In this subsection, we derive a simulation lemma for MDPs, which essentially says that, for a fixed policy, two MDPs with similar transition probability distributions will result in similar value functions. Our simulation lemma differs from other simulation lemmas (e.g., Kearns2002a; Kakade2003) in that we only need the guarantee to hold for the target policy. To formalize what we mean by "similar" MDPs, we introduce the following definitions and assumption.
Definition 3.
Let be an MDP and . and define an induced MDP , where
and
Definition 4.
Let , be an MDP, and . An induced MDP with respect to and , satisfies
Assumption 4.
A4: Let π be a policy. There exists an induced MDP with respect to π and a subset of the state-action space, such that the probability of encountering a state-action pair that is not in this subset while following π in the original MDP is small:
(11) 
Proof.
The claim follows by the triangle inequality and by (11).
We denote by P_π and R_π the transition matrix and reward vector induced by the policy π, and for any matrix we consider the induced matrix norm. The desired inequality then follows by the norm definition, a decomposition over the policy, the triangle inequality, and Definition 4.
In addition, we use the following result (page 254 in bhatia1997matrix): for any two matrices A and B and any induced norm,
‖A^k − B^k‖ ≤ k c^{k−1} ‖A − B‖, (13)
where c = max(‖A‖, ‖B‖). Since the matrices involved are stochastic, this inequality holds with c = 1 for the induced ℓ∞ norm. Now:
Therefore, we can combine the results to obtain:
(14) 
∎
B.2 Bounding the Error in Estimates of the Transition Probabilities
In this subsection, we consider the number of samples needed to estimate the transition probabilities of various realization-action pairs. The samples we receive come from trajectories. Each trajectory is independent of the others; however, samples observed at time step t may depend on samples observed at previous time steps, so the samples within a trajectory may not be independent. Therefore, we cannot apply the Weissman inequality (Weissman2003), which requires the samples to be independent and identically distributed. Instead, we derive a bound based on a martingale argument.
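The following toy Python experiment illustrates the point: it estimates a transition probability from a single trajectory of a two-state Markov chain, where visits to the estimated state occur at random, dependent times, and compares the estimation error to an Azuma/Hoeffding-style deviation bound. The chain, the seed, and the confidence level are our own illustrative choices.

```python
import math
import random

def estimate_from_chain(p01, n_steps, seed=0):
    """Estimate P(s'=1 | s=0) from one trajectory of a two-state chain.

    From state 0 the chain moves to state 1 with probability p01; state 1
    deterministically returns to state 0. Visits to state 0 happen at
    dependent times, so classic i.i.d. bounds do not apply directly, but an
    Azuma-style martingale bound of sqrt(log(2/delta) / (2 * visits)) does.
    """
    rng = random.Random(seed)
    s, hits, visits = 0, 0, 0
    for _ in range(n_steps):
        if s == 0:
            visits += 1
            s = 1 if rng.random() < p01 else 0
            hits += (s == 1)
        else:
            s = 0  # deterministic return to state 0
    est = hits / visits
    delta = 0.05
    bound = math.sqrt(math.log(2 / delta) / (2 * visits))
    return est, bound
```

With tens of thousands of steps, the estimate concentrates around the true probability well within the bound, mirroring the role Azuma's inequality plays in the analysis below.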
Definition 5.
Theorem 2.
(Azuma's inequality) Let c_1, …, c_n > 0 and let X_0, X_1, …, X_n be a martingale such that |X_t − X_{t−1}| ≤ c_t for t = 1, …, n. Then for all ε > 0,
P(|X_n − X_0| ≥ ε) ≤ 2 exp(−ε² / (2 ∑_{t=1}^{n} c_t²)). (17)
Definition 6.
Let W_1, …, W_n be any set of random variables with support in a set W, and let f be a function of W_1, …, W_n. A Doob martingale is the sequence X_t = E[f(W_1, …, W_n) | W_1, …, W_t] for t = 0, 1, …, n.
Lemma 2.
Let , be a finite set, be a collection of random variables with support in generated by an unknown process, and for all . We denote by for all . Then
(18) 
for all and
(19) 
where .
Proof.
First, define a Doob martingale over the samples as in Definition 6; its differences are bounded. Applying Azuma's inequality then yields the concentration bound,
which proves (18).
Lemma 3.
Let , and , if there are
samples of the realization-action pair obtained from independent trajectories, then
(20) 
with probability at least