1 Introduction
One of the hot topics of modern Artificial Intelligence (AI) is the ability for an agent to adapt its behaviour to changing tasks. In the literature, this problem is often linked to the setting of Lifelong Reinforcement Learning (LRL)
(Silver et al., 2013; Abel et al., 2018a, b) and learning in non-stationary environments (Choi et al., 1999; Jaulmes et al., 2005; Hadoux, 2015). In LRL, the tasks presented to the agent change sequentially at discrete transition epochs (Silver et al., 2013). Similarly, the non-stationary environments considered in the literature often evolve abruptly (Hadoux, 2015; Hadoux et al., 2014; Doya et al., 2002; Da Silva et al., 2006; Choi et al., 1999, 2000, 2001; Campo et al., 1991; Wiering, 2001). In this paper, we investigate environments that change continuously over time, which we call Non-Stationary Markov Decision Processes (NSMDPs). In this setting, it is realistic to bound the evolution rate of the environment using a Lipschitz Continuity (LC) assumption. Model-based Reinforcement Learning approaches (Sutton et al., 1998) benefit from the knowledge of a model, allowing them to reach impressive performance, as demonstrated by the Monte Carlo Tree Search (MCTS) algorithm (Silver et al., 2016). In this matter, the necessity to have access to a model is a great concern of AI (Asadi et al., 2018; Jaulmes et al., 2005; Doya et al., 2002; Da Silva et al., 2006). In the context of NSMDPs, we assume that an agent is provided with a snapshot model when its action is computed. By this, we mean that it only has access to the current model of the environment but not to its future evolution, as if it took a photograph but were unable to predict how the scene is going to evolve. This hypothesis is realistic, because many environments have a tractable current state while their future evolution is hard to predict (Da Silva et al., 2006; Wiering, 2001). In order to solve LC-NSMDPs, we propose a method that considers the worst-case possible evolution of the model and performs planning with respect to this model. This is equivalent to considering Nature as an adversarial agent.
The paper is organized as follows: first we describe the setting and the regularity assumption (Section 2); then we outline related works (Section 3); then we explain the worst-case approach proposed in this paper (Section 4); then we describe an algorithm reflecting this approach (Section 5); finally, we illustrate its behaviour empirically (Section 6).
2 Non-Stationary Markov Decision Processes
To define a Non-Stationary MDP (NSMDP), we revert to the initial MDP model introduced by Puterman (2014), where the transition and reward functions depend on time.
Definition 1.
NSMDP. An NSMDP is an MDP whose transition and reward functions depend on the decision epoch. It is defined by a 5-tuple $\langle \mathcal{S}, \mathcal{T}, \mathcal{A}, (p_t)_{t \in \mathcal{T}}, (r_t)_{t \in \mathcal{T}} \rangle$ where $\mathcal{S}$ is a state space; $\mathcal{T}$ is the set of decision epochs; $\mathcal{A}$ is an action space; $p_t(s' \mid s, a)$ is the probability of reaching state $s'$ while performing action $a$ at decision epoch $t$ in state $s$; $r_t(s, a, s')$ is the scalar reward associated to the transition from $s$ to $s'$ with action $a$ at decision epoch $t$.

This definition can be viewed as that of a stationary MDP whose state space has been enhanced with time. While this addition is trivial in episodic tasks, where an agent is given the opportunity to interact several times with the same MDP, it is different when the experience is unique. Indeed, no exploration is allowed along the temporal axis. Within a stationary, infinite-horizon MDP with a discounted criterion, it is proven that there exists an optimal Markovian deterministic stationary policy (Puterman, 2014). This is not the case within NSMDPs, where the optimal policy is non-stationary in the most general case. Additionally, we define the expected reward received when taking action $a$ at state $s$ and decision epoch $t$ as $\bar{r}_t(s, a) = \mathbb{E}_{s' \sim p_t(\cdot \mid s, a)}[r_t(s, a, s')]$. Without loss of generality, we assume the reward function to be bounded between $-1$ and $1$. In this paper, we consider discrete-time decision processes with constant transition durations, which implies deterministic decision times in Definition 1. This assumption is mild, since many discrete-time sequential decision problems follow it. A non-stationary policy $\pi$ is a sequence of decision rules $(\pi_t)_{t \in \mathcal{T}}$ mapping states to actions (or distributions over actions). For a stochastic non-stationary policy $\pi$, the value of a state $s$ at decision epoch $t$ within an infinite horizon is defined, with $\gamma \in [0, 1)$ a discount factor, by:
$$V_t^\pi(s) = \mathbb{E}\left[\sum_{i=t}^{\infty} \gamma^{i-t}\, r_i(s_i, a_i, s_{i+1}) \,\middle|\, s_t = s,\; a_i \sim \pi_i(\cdot \mid s_i),\; s_{i+1} \sim p_i(\cdot \mid s_i, a_i)\right].$$
The definition of the state-action value function for $\pi$ at decision epoch $t$ is straightforward:
$$Q_t^\pi(s, a) = \mathbb{E}_{s' \sim p_t(\cdot \mid s, a)}\left[r_t(s, a, s') + \gamma\, V_{t+1}^\pi(s')\right].$$
Overall, we defined an NSMDP as an MDP where we stress the distinction between state, time, and decision epoch, due to the inability of an agent to explore the temporal axis at will. This distinction is particularly relevant for non-episodic tasks, where there is no possibility of re-experiencing the same MDP starting from a prior date.
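To make Definition 1 concrete, the 5-tuple can be sketched as a small data structure. This is an illustrative interface of our own; the class, function names, and the toy drift model are assumptions, not the paper's:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class NSMDP:
    """Hypothetical container for the 5-tuple of Definition 1."""
    states: Sequence[int]
    actions: Sequence[int]
    # transition(t, s, a) -> probability vector over next states
    transition: Callable[[int, int, int], List[float]]
    # reward(t, s, a, s_next) -> scalar reward in [-1, 1]
    reward: Callable[[int, int, int, int], float]

# Toy 2-state example whose transition function drifts slowly with time.
def p(t: int, s: int, a: int) -> List[float]:
    drift = min(0.05 * t, 0.4)  # slow, bounded evolution over decision epochs
    return [1.0 - drift, drift] if s == 0 else [drift, 1.0 - drift]

def r(t: int, s: int, a: int, s_next: int) -> float:
    return 1.0 if s_next == 1 else 0.0

mdp = NSMDP(states=[0, 1], actions=[0], transition=p, reward=r)
```

Querying `mdp.transition` at different epochs returns different distributions, which is precisely what distinguishes an NSMDP from a stationary MDP.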
The regularity hypothesis. Many real-world problems can be modelled as an NSMDP: for instance, the problem of path planning for a glider immersed in a non-stationary atmosphere (Chung et al., 2015; Lecarpentier et al., 2017), or that of vehicle routing in dynamic traffic congestion. Realistically, we consider that the expected reward and transition functions do not evolve arbitrarily fast over time. Conversely, if such an assumption were not made, a chaotic evolution of the NSMDP would be allowed, which is both unrealistic and hard to solve. Hence, we assume that changes occur slowly over time. Mathematically, we formalize this hypothesis by bounding the evolution rate of the transition and expected reward functions, using the notion of Lipschitz Continuity (LC).
Definition 2.
Lipschitz Continuity. Let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces and $f : X \to Y$. $f$ is $L$-Lipschitz Continuous ($L$-LC), with $L \in \mathbb{R}^+$, iff $d_Y(f(x), f(\hat{x})) \le L\, d_X(x, \hat{x})$, $\forall (x, \hat{x}) \in X^2$. $L$ is called a Lipschitz constant of the function $f$.
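For a scalar function sampled on a grid, the smallest constant satisfying Definition 2 can be estimated numerically. A minimal sketch, assuming a one-dimensional domain with the absolute-value metric (helper name ours):

```python
def lipschitz_estimate(f, xs):
    """Estimate the smallest L with |f(x) - f(y)| <= L * |x - y| over a grid xs."""
    best = 0.0
    for i, x in enumerate(xs):
        for y in xs[i + 1:]:
            best = max(best, abs(f(x) - f(y)) / abs(x - y))
    return best

xs = [i / 100.0 for i in range(101)]
L = lipschitz_estimate(lambda x: 3.0 * x, xs)  # true Lipschitz constant is 3
```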
We apply this hypothesis to the transition and reward functions of an NSMDP, so that those functions are LC with respect to time. For the transition function, this leads to the consideration of a metric between probability density functions. For that purpose, we use the 1-Wasserstein distance
(Villani, 2008).
Definition 3.
1-Wasserstein distance. Let $(X, d)$ be a Polish metric space and $\mu, \nu$ any two probability measures on $X$. Let $\Pi(\mu, \nu)$ be the set of joint distributions on $X^2$ with marginals $\mu$ and $\nu$. The 1-Wasserstein distance between $\mu$ and $\nu$ is $W_1(\mu, \nu) = \inf_{\pi \in \Pi(\mu, \nu)} \int_{X^2} d(x, y)\, \mathrm{d}\pi(x, y)$. The choice of the Wasserstein distance is motivated by the fact that it quantifies the distance between two distributions in a physical manner, respectful of the topology of the measured space (Dabney et al., 2018; Asadi et al., 2018)
. First, it is sensitive to the difference between the supports of the distributions. Comparatively, the Kullback-Leibler divergence between distributions with disjoint supports is infinite. Secondly, if one considers two regions of the support where two distributions differ, the Wasserstein distance is sensitive to the distance between the elements of those regions. Comparatively, the total-variation metric is the same regardless of this distance.
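This sensitivity to the geometry of the support can be checked numerically. On the real line, for distributions sharing a common sorted support, the 1-Wasserstein distance reduces to the area between the cumulative distribution functions; the helper below is an illustrative sketch under that assumption:

```python
def w1(xs, p, q):
    """1-Wasserstein distance between distributions p, q on sorted support xs."""
    dist, cdf_gap = 0.0, 0.0
    for i in range(len(xs) - 1):
        cdf_gap += p[i] - q[i]                    # F_p(x_i) - F_q(x_i)
        dist += abs(cdf_gap) * (xs[i + 1] - xs[i])
    return dist

xs = [0.0, 1.0, 10.0]
near = w1(xs, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # all mass moved 1 unit
far = w1(xs, [1.0, 0.0, 0.0], [0.0, 0.0, 1.0])   # all mass moved 10 units
```

The total-variation distance equals 1 in both cases, whereas `far` exceeds `near`, reflecting the distance between the supports.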
Definition 4.
LC-NSMDP. An $(L_p, L_r)$-LC-NSMDP is an NSMDP whose transition and reward functions are respectively $L_p$-LC and $L_r$-LC with respect to time, i.e., $\forall (s, a, s', t, \hat{t}) \in \mathcal{S} \times \mathcal{A} \times \mathcal{S} \times \mathcal{T}^2$:
$$W_1\!\left(p_t(\cdot \mid s, a),\, p_{\hat{t}}(\cdot \mid s, a)\right) \le L_p\, |t - \hat{t}|, \qquad \left|r_t(s, a, s') - r_{\hat{t}}(s, a, s')\right| \le L_r\, |t - \hat{t}|.$$
One should remark that, for the sake of realism, the LC property should be defined with respect to actual decision times and not decision epoch indexes. In the present case, both have the same value, and we keep this convention for clarity. Our results however extend easily to the case where indexes and times do not coincide. From now on, we consider LC-NSMDPs, making Lipschitz Continuity our regularity property. Notice that $\bar{r}_t$ is defined as a convex combination of $r_t$ by the probability measure $p_t$. As a result, the notion of Lipschitz Continuity of $\bar{r}$ is strongly related to that of $p$ and $r$, as shown by Property 1. All the proofs of the paper can be found in the Appendix.
Property 1.
Given an $(L_p, L_r)$-LC-NSMDP, the expected reward function $\bar{r}$ is $L_{\bar{r}}$-LC with $L_{\bar{r}} = L_r + L_p$.
This result shows that $\bar{r}$'s evolution rate is conditioned by the evolution rates of $p$ and $r$. It allows us to work either with the reward function $r$ or its expectation $\bar{r}$, both benefiting from an LC property.
3 Related work
Work close to our approach was done by Iyengar (2005), who extended Dynamic Programming (DP, Bellman (1957)) to the search for an optimal robust policy given sets of possible transition functions. It differs from our work in two fundamental aspects: 1) we consider uncertainty in the reward model as well; 2) we use a stronger Lipschitz formulation on the set of possible transition and reward functions, this last point being motivated by its relevance to the non-stationary setting. Further, we propose an online tree-search algorithm, differing from DP in terms of applicability. Szita et al. (2002) proposed a setting where the transition function of an MDP is allowed to change between decision epochs. Similarly to our Lipschitz hypothesis in the Wasserstein metric, they control the total variation distance of subsequent transition functions by a scalar value. Those slowly changing environments allow model-free RL algorithms such as Q-Learning to find near-optimal policies. Conversely, Even-Dar et al. (2009) studied the case of non-stationary reward functions with fixed transition models. No assumption is made on the possible reward functions, and they propose an algorithm achieving sublinear regret with respect to the best stationary policy. Dick et al. (2014) viewed a similar setting from the perspective of online linear optimization. Finally, Csáji and Monostori (2008) studied the case of both varying reward and transition functions within a neighbourhood of a reward-transition function pair. They study the convergence of general stochastic iterative algorithms, covering classical RL algorithms such as asynchronous DP, Q-learning and temporal difference learning.
Non-stationary MDPs have been extensively studied. A very common framework is probably that of HM-MDPs (Hidden-Mode MDPs), introduced by Choi et al. (1999). This is a special class of Partially Observable MDPs (POMDPs, Kaelbling et al. (1998)) where a hidden mode indexes a latent stationary MDP within which the agent evolves. This way, similarly to the context of LRL, the agent experiences a series of different MDPs over time. In this setting, Choi et al. (1999, 2000) proposed methods to learn the different models of the latent stationary MDPs. Doya et al. (2002) built a modular architecture switching between models and policies when a change is detected. Similarly, Wiering (2001); Da Silva et al. (2006); Hadoux et al. (2014) proposed methods tracking switching occurrences and re-planning if needed. Overall, as in LRL, this setting considers abrupt evolutions of the transition and reward functions, whereas we consider continuous ones. Other settings have been considered, as by Jaulmes et al. (2005), who make no particular hypothesis on the evolution of the NSMDP. They build a learning algorithm for POMDP solving where time dependency is taken into account by weighting recently experienced transitions more than older ones.
To plan efficiently within an NSMDP, our approach consists in taking advantage of the slow LC evolution of the environment in order to plan according to the worst case. Generally, taking advantage of Lipschitz continuity to infer bounds on the value of a function within a certain neighbourhood is a widely used tool in the RL, bandit, and optimization communities (Kleinberg et al., 2008; Rachelson and Lagoudakis, 2010; Pirotta et al., 2015; Pazis and Parr, 2013; Munos, 2014). We implement this approach with a Minimax-like algorithm (Fudenberg and Tirole, 1991), where the environment is seen as an adversarial agent, which, to the best of our knowledge, is a novel perspective.
4 Worst-case approach
We consider finding an optimal policy within an LC-NSMDP under the non-episodic task hypothesis. The latter prevents us from learning from previously collected experience, since such data become outdated with time and no samples have yet been gathered for future time steps. An alternative is to use model-based RL algorithms such as MCTS. For the current state $s_0$, such algorithms focus on finding the optimal action by using a generative model. This action is then undertaken and the operation is repeated at the next state. However, using the true model for this purpose is an unrealistic hypothesis, since this model is generally unknown. We assume the agent does not have access to the true model; instead, we introduce the notion of snapshot model. Intuitively, the snapshot associated to time $t_0$ is a temporal slice of the NSMDP at $t_0$.
Definition 5.
Snapshot of an NSMDP. The snapshot of an NSMDP at decision epoch $t_0$, denoted by MDP$_{t_0}$, is the stationary MDP defined by the 4-tuple $\langle \mathcal{S}, \mathcal{A}, p_{t_0}, r_{t_0} \rangle$, where $p_{t_0}$ and $r_{t_0}$ are the transition and reward functions of the NSMDP at $t_0$.
Similarly to the NSMDP, this definition induces the existence of the snapshot expected reward $\bar{r}_{t_0}$, defined analogously to $\bar{r}_t$. Notice that the snapshot MDP$_{t_0}$ is stationary and coincides with the NSMDP only at $t_0$. Particularly, one can generate a trajectory within an NSMDP using the sequence of snapshots as a model. Overall, the hypothesis of using snapshot models amounts to considering a planning agent only able to get the current stationary model of the environment. In real-world problems, predictions often are uncertain or hard to perform, as in the thermal soaring problem of a glider.
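Under a time-indexed interface like the hypothetical one sketched earlier, taking a snapshot amounts to partially applying the time argument (illustrative helper, names ours):

```python
import functools

def snapshot(p, r, t0):
    """Freeze the NSMDP functions p(t, s, a) and r(t, s, a, s') at epoch t0."""
    return functools.partial(p, t0), functools.partial(r, t0)

# Toy model: the agent stays in place and the reward decays linearly with time.
p = lambda t, s, a: {s: 1.0}
r = lambda t, s, a, s_next: 1.0 - 0.1 * t

p0, r0 = snapshot(p, r, t0=2)  # stationary MDP coinciding with the NSMDP at t0=2
```

The returned functions no longer depend on time: the snapshot is stationary and coincides with the NSMDP only at $t_0$.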
We consider a generic planning agent at $(s_0, t_0)$, using MDP$_{t_0}$ as a model of the NSMDP. By planning, we mean conducting a look-ahead search within the possible trajectories starting from $s_0$, given a model of the environment. The search allows in turn to identify an optimal action with respect to the model. This action is then undertaken, the agent jumps to the next state, and the operation is repeated. The consequence of planning with MDP$_{t_0}$ is that the estimated value of a state-action pair is the value of the optimal policy of MDP$_{t_0}$. The true optimal value of this pair at $t_0$ within the NSMDP does not match this estimate because of the non-stationarity. The intuition we develop is that, given the slow evolution rate of the environment, for a state seen at a future decision epoch $t$ during the search, we can predict a scope into which the transition and reward functions at $t$ lie.
Property 2.
Set of admissible snapshot models. Consider an $(L_p, L_r)$-LC-NSMDP and two decision epochs $t_0, t \in \mathcal{T}$. The transition and expected reward functions of the snapshot MDP$_t$ respect, $\forall (s, a) \in \mathcal{S} \times \mathcal{A}$:
$$p_t(\cdot \mid s, a) \in B_{W_1}\!\left(p_{t_0}(\cdot \mid s, a),\, L_p\, |t - t_0|\right), \qquad \bar{r}_t(s, a) \in B_{|\cdot|}\!\left(\bar{r}_{t_0}(s, a),\, L_r\, |t - t_0|\right),$$
where $B_d(c, \rho)$ denotes the ball of centre $c$, defined with metric $d$ and radius $\rho$.
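For the expected reward, Property 2 directly yields a computable pessimistic bound on any admissible future model. A minimal sketch, assuming rewards bounded below by $-1$ as in Section 2 (the function name is ours):

```python
def worst_case_reward(r_snapshot, L_r, t0, t, r_min=-1.0):
    """Lowest expected reward at epoch t admissible under Property 2,
    given the snapshot value at t0 and the Lipschitz constant L_r."""
    return max(r_snapshot - L_r * abs(t - t0), r_min)
```

For instance, a snapshot expected reward of 0.5 with $L_r = 0.1$ may have drifted down to 0.2 three epochs later, but never below the reward floor.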
For a future prediction at $t > t_0$, we consider the question of using a better model than MDP$_{t_0}$. The underlying evolution of the NSMDP being unknown, a desirable feature would be to use a model leading to a policy that is robust to every possible evolution. To that end, we propose to use the snapshots corresponding to the worst possible evolution scenario under the constraints of Property 2. We claim that such a practice is an efficient way to 1) ensure robust performance under all possible evolutions of the NSMDP and 2) avoid catastrophic terminal states. Practically, this boils down to using a different value estimate for the pair $(s, a)$ at $t$ than the snapshot value, which provides no robustness guarantees.
Given a policy $\pi$ and a decision epoch $t_0$, a worst-case NSMDP corresponds to a sequence of transition and reward models minimizing the expected value of applying $\pi$ in any pair $(s, t)$, while remaining within the bounds of Property 2. We write $\underline{V}^\pi(s, t)$ this value for $\pi$ at decision epoch $t$.
$$\underline{V}^\pi(s, t) = \min_{(p_\tau, \bar{r}_\tau)_{\tau \ge t} \text{ satisfying Property 2}} \mathbb{E}\left[\sum_{i = t}^{\infty} \gamma^{i - t}\, \bar{r}_i(s_i, a_i) \,\middle|\, s_t = s,\; a_i \sim \pi_i(\cdot \mid s_i),\; s_{i+1} \sim p_i(\cdot \mid s_i, a_i)\right] \qquad (1)$$
Intuitively, the worst-case NSMDP is a model of a non-stationary environment leading to the poorest possible performance for $\pi$, while being an admissible evolution of MDP$_{t_0}$. Let us define $\underline{Q}^\pi(s, a, t)$ as the worst-case value of the pair $(s, a)$ at decision epoch $t$:
$$\underline{Q}^\pi(s, a, t) = \min_{(p_\tau, \bar{r}_\tau)_{\tau \ge t} \text{ satisfying Property 2}} \bar{r}_t(s, a) + \gamma\, \mathbb{E}_{s' \sim p_t(\cdot \mid s, a)}\left[\underline{V}^\pi(s', t + 1)\right] \qquad (2)$$
5 Risk-Averse Tree-Search algorithm
The algorithm. Tree-search algorithms within MDPs have been well studied and cover two classes of search trees, namely closed-loop (Keller and Helmert, 2013; Kocsis and Szepesvári, 2006; Browne et al., 2012) and open-loop (Bubeck and Munos, 2010; Lecarpentier et al., 2018). Following Keller and Helmert (2013), we consider closed-loop search trees, composed of decision nodes alternating with chance nodes. We adapt their formulation to take time into account, resulting in the following definitions. A decision node at depth $d$, denoted by $\nu_d$, is labelled by a unique state/decision epoch pair $(s, t)$. The edges leading to its children chance nodes correspond to the available actions at $(s, t)$. A chance node, denoted by $\nu_c$, is labelled by a state/decision epoch/action triplet $(s, t, a)$. The edges leading to its children decision nodes correspond to the reachable state/decision epoch pairs after performing $a$ in $(s, t)$, as illustrated by Figure 1.

We consider the problem of estimating the optimal action at $(s_0, t_0)$ within a worst-case NSMDP, knowing MDP$_{t_0}$. This problem is twofold. It requires 1) to estimate the worst-case NSMDP given MDP$_{t_0}$, and 2) to explore the latter in order to identify the optimal action. We propose to tackle both problems with an algorithm inspired by the minimax algorithm (Fudenberg and Tirole, 1991), where the max operator corresponds to the agent's policy, seeking to maximize the return, and the min operator corresponds to the worst-case model, seeking to minimize the return. Estimating the worst-case NSMDP requires estimating the sequence of subsequent snapshots minimizing Equation 2. The interdependence of those snapshots (Equation 1) makes the problem hard to solve (Iyengar, 2005), particularly because of the combinatorial nature of the opponent's action space. Instead, we propose to solve a relaxation of this problem, by considering snapshots only constrained by MDP$_{t_0}$. Making this approximation leaves a possibility to violate Property 2 but allows for an efficient search within the developed tree and (as will be shown experimentally) leads to robust policies. For that purpose, we define the set $\Delta_{t_0}^t$ of admissible snapshot models at decision epoch $t$ given MDP$_{t_0}$ as the set of pairs $(p, \bar{r})$ lying within the balls of Property 2 centred on MDP$_{t_0}$, for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. The relaxed analogues of Equations 1 and 2 for a policy $\pi$ are defined as follows:
$$\underline{V}^\pi(s, t) = \mathbb{E}_{a \sim \pi_t(\cdot \mid s)}\left[\underline{Q}^\pi(s, a, t)\right], \qquad \underline{Q}^\pi(s, a, t) = \min_{(p, \bar{r}) \in \Delta_{t_0}^t} \bar{r}(s, a) + \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\left[\underline{V}^\pi(s', t + 1)\right].$$
Their optimal counterparts, while seeking to find the optimal policy, verify the following equations:
$$\underline{V}^*(s, t) = \max_{a \in \mathcal{A}} \underline{Q}^*(s, a, t) \qquad (3)$$
$$\underline{Q}^*(s, a, t) = \min_{(p, \bar{r}) \in \Delta_{t_0}^t} \bar{r}(s, a) + \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\left[\underline{V}^*(s', t + 1)\right] \qquad (4)$$
We now provide a method to calculate those quantities within the nodes of the tree search algorithm.
Max nodes.
A decision node corresponds to a max node due to the greediness of the agent with respect to the subsequent values of its children.
We aim at maximizing the return while retaining a riskaverse behaviour.
As a result, the value of a decision node $\nu_d$ labelled by $(s, t)$ follows Equation 3 and is defined as:
$$V(\nu_d) = \max_{a \in \mathcal{A}} V\!\left(\nu_c(a)\right), \quad \text{with } \nu_c(a) \text{ the child chance node of } \nu_d \text{ for action } a. \qquad (5)$$
Min nodes. A chance node corresponds to a min node due to the use of a worst-case NSMDP as a model, which minimizes the value of the reward and the subsequent values of its children. Writing the value of a chance node $\nu_c$ labelled by $(s, t, a)$ as the value of $(s, a)$ within the worst-case snapshot minimizing Equation 4, and using the children's values as values for the next reachable states, leads to Equation 6.
$$V(\nu_c) = \min_{(p, \bar{r}) \in \Delta_{t_0}^t} \bar{r}(s, a) + \gamma \sum_{s' \in \mathcal{S}} p(s' \mid s, a)\, V\!\left(\nu_d(s')\right), \quad \text{with } \nu_d(s') \text{ the child decision node labelled } (s', t + 1). \qquad (6)$$
Our approach considers the environment as an adversarial agent, as in an asymmetric two-player game, in order to search for a robust plan. The resulting algorithm, RATS for Risk-Averse Tree-Search, is described in Algorithm LABEL:alg:rats. Given an initial state/decision epoch pair, a minimax tree is built using the snapshot MDP$_{t_0}$ and the operators corresponding to Equations 5 and 6, in order to estimate the worst-case snapshots at each depth. Once the tree is built, the action leading to the best possible value from the root node is selected and a real transition is performed. The next state is then reached, the new snapshot model is acquired, and the process restarts. The light notations used in the pseudocode respectively stand for the reward corresponding to a chance node and the probability to jump to a given decision node from a chance node. The tree built by RATS is entirely developed until the maximum depth $d_{\max}$. A heuristic function is used to evaluate the leaf nodes of the tree.
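The min-max recursion described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation: only the expected reward is allowed to drift (by $L_r$ per elapsed epoch, as in Property 2), the transition is kept at the snapshot, and the tree is developed to a fixed depth with a heuristic at the leaves; all names are ours.

```python
def rats_value(s, t, t0, depth, d_max, actions, p0, r0, L_r, gamma, heuristic):
    """Sketch of the RATS min-max recursion on a simplified model class."""
    if depth == d_max:
        return heuristic(s, t)                 # bootstrap leaf nodes
    best = float("-inf")
    for a in actions:                          # max node: the agent
        val = 0.0
        for s_next, prob in p0(s, a).items():
            # min node: Nature plays the worst admissible expected reward
            worst_r = r0(s, a, s_next) - L_r * (t - t0)
            val += prob * (worst_r + gamma * rats_value(
                s_next, t + 1, t0, depth + 1, d_max,
                actions, p0, r0, L_r, gamma, heuristic))
        best = max(best, val)
    return best

# Single-state toy problem: snapshot reward 1 per step; the worst case shaves
# L_r per elapsed epoch.
v = rats_value(0, 0, 0, 0, 2, ["stay"],
               p0=lambda s, a: {0: 1.0},
               r0=lambda s, a, s_next: 1.0,
               L_r=0.1, gamma=0.5, heuristic=lambda s, t: 0.0)
```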
Analysis of RATS. We are interested in characterizing Algorithm LABEL:alg:rats without function approximation and therefore consider finite, countable state and action sets. We now detail the computation of the min operator (Property 3), the computational complexity of RATS (Property 4), and the heuristic function.
Property 3.
Closed-form expression of the worst-case snapshot of a chance node. Following Algorithm LABEL:alg:rats, a solution to Equation 6 is given in closed form: the worst-case expected reward saturates its lower bound, $\underline{\bar{r}}(s, a) = \bar{r}_{t_0}(s, a) - L_r\, |t - t_0|$, and the worst-case transition function transfers as much probability mass as the Wasserstein constraint of radius $L_p\, |t - t_0|$ allows onto the child decision node of minimal value.
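The spirit of this closed form can be sketched on a finite successor set, under the simplifying assumption that distinct states lie at unit distance, so that moving probability mass $\epsilon$ costs exactly $\epsilon$ in 1-Wasserstein distance (illustrative helper, ours):

```python
def worst_case_transition(probs, values, radius):
    """Move as much probability mass as the Wasserstein budget `radius`
    allows onto the successor with the lowest value (unit-distance states)."""
    probs = dict(probs)
    worst = min(probs, key=values.get)        # most harmful successor
    budget = radius
    for s in sorted(probs, key=values.get, reverse=True):
        if s == worst or budget <= 0.0:
            continue
        move = min(probs[s], budget)          # drain high-value successors first
        probs[s] -= move
        probs[worst] += move
        budget -= move
    return probs
```

The output remains a probability distribution, shifted toward the lowest-value child within the admissible ball.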
Property 4.
Computational complexity. The total computational complexity of Algorithm LABEL:alg:rats is $O\!\left(\tau\, (|\mathcal{S}|\, |\mathcal{A}|)^{d_{\max}}\right)$, with $\tau$ the number of time steps and $d_{\max}$ the maximum depth of the tree.
Heuristic function. As in vanilla minimax algorithms, Algorithm LABEL:alg:rats bootstraps the values of the leaf nodes with a heuristic function if these leaves do not correspond to terminal states. Given such a leaf node labelled by $(s, t)$, a heuristic aims at estimating the value of the optimal policy at $(s, t)$ within the worst-case NSMDP, i.e. $\underline{V}^*(s, t)$. Let $H$ be such a heuristic function; we call heuristic error in $(s, t)$ the difference between $H(s, t)$ and $\underline{V}^*(s, t)$. Assuming that the heuristic error is uniformly bounded, the following property provides an upper bound on the propagated error due to the choice of $H$.
Property 5.
Upper bound on the propagated heuristic error within RATS. Consider an agent executing Algorithm LABEL:alg:rats at $(s_0, t_0)$ with a heuristic function $H$. Suppose that the heuristic error is uniformly bounded over the set of leaf nodes: $|H(s, t) - \underline{V}^*(s, t)| \le \delta$ for every leaf node labelled by $(s, t)$. Then, for every decision node $\nu_d$ and chance node $\nu_c$ at depth $d$, labelled by $(s, t)$ and $(s, t, a)$ respectively:
$$\left|V(\nu_d) - \underline{V}^*(s, t)\right| \le \gamma^{d_{\max} - d}\, \delta, \qquad \left|V(\nu_c) - \underline{Q}^*(s, a, t)\right| \le \gamma^{d_{\max} - d}\, \delta.$$
This last result implies that with any heuristic function inducing a uniform heuristic error $\delta$, the propagated error at the root of the tree is guaranteed to be upper bounded by $\gamma^{d_{\max}}\, \delta$. In particular, since the reward function is bounded by hypothesis, we have $|\underline{V}^*(s, t)| \le 1 / (1 - \gamma)$. Thus, selecting for instance the zero function ensures a root-node heuristic error of at most $\gamma^{d_{\max}} / (1 - \gamma)$. In order to improve the precision of the algorithm, we propose to guide the heuristic with a function better reflecting the value of the state $s$ at a leaf node labelled by $(s, t)$. The ideal function would of course be $\underline{V}^*$ itself, reducing the heuristic error to zero, but it is intractable. Instead, we suggest using the value of $s$ within the snapshot MDP$_t$ under an evaluation policy $\pi$, i.e. $V^\pi_{\text{MDP}_t}(s)$. This snapshot is also not available, but Property 6 provides a range wherein this value lies.
Property 6.
Bounds on the snapshots' values. Let $\pi$ be a stationary policy and MDP$_t$, MDP$_{t_0}$ two snapshot MDPs of an $(L_p, L_r)$-LC-NSMDP. We note $V^\pi_{\text{MDP}_t}(s)$ the value of $s$ within MDP$_t$ following $\pi$. Then $V^\pi_{\text{MDP}_t}(s)$ lies within an interval centred on $V^\pi_{\text{MDP}_{t_0}}(s)$ whose radius is proportional to $|t - t_0|$ and involves the constants $L_p$, $L_r$ and the discount factor $\gamma$.
Since MDP$_{t_0}$ is available, $V^\pi_{\text{MDP}_{t_0}}(s)$ can be estimated, e.g. via Monte-Carlo rollouts. Let $\hat{V}^\pi_{\text{MDP}_{t_0}}(s)$ denote such an estimate. Following Property 6, a worst-case heuristic on $(s, t)$ is obtained by subtracting the interval radius of Property 6 from $\hat{V}^\pi_{\text{MDP}_{t_0}}(s)$. The bounds provided by Property 5 decrease quickly with $d_{\max}$ and, given that $d_{\max}$ is large enough, RATS provides the optimal risk-averse action, maximizing the worst-case value for any admissible evolution of the NSMDP.
6 Experiments
We compare the RATS algorithm with two policies (a link to the code will be provided; for ML reproducibility checklist information, see Appendix Section LABEL:sec:reproducibilitychecklist). The first one, named DP-snapshot, uses Dynamic Programming to compute the optimal actions within the snapshot models at each decision epoch. The second one, named DP-NSMDP, uses the real NSMDP as a model to provide its optimal action. The latter behaves as an omniscient agent and should be seen as an upper bound on the performance. We choose a particular gridworld domain coined “Non-Stationary bridge”, illustrated in Appendix, Section LABEL:sec:nsbridgefigure. An agent starts at the state labelled S in the centre, and the goal is to reach one of the two terminal states labelled G, where a reward of +1 is received. The grey cells represent holes, terminal states where a reward of -1 is received. Reaching the goal on the right leads to the highest payoff since it is closest to the initial state and a discount factor is applied. The actions are Up, Down, Left, and Right. The transition function is stochastic and non-stationary. At the initial decision epoch, any action deterministically yields the intuitive outcome. With time, when applying Left or Right, the probability to reach the positions usually stemming from Up and Down increases symmetrically until reaching 0.45. We set the Lipschitz constant of the NSMDP accordingly. Aside, we introduce a parameter within $[0, 1]$ controlling the behaviour of the environment. If it equals 0, only the left-hand-side bridge becomes slippery with time, reflecting a close-to-worst-case evolution for a policy aiming at the left-hand-side goal. If it equals 1, only the right-hand-side bridge becomes slippery with time, reflecting a close-to-worst-case evolution for a policy aiming at the right-hand-side goal. In between, the misstep probability is proportionally balanced between left and right.
One should note that changing this parameter from 0 to 1 does not cover all the possible evolutions from MDP$_{t_0}$, but it provides a concrete, graphical illustration of RATS's behaviour for various possible evolutions of the NSMDP.
We tested RATS with a tree depth large enough that leaf nodes in the search tree are terminal states. Hence, the optimal risk-averse policy is applied and no heuristic approximation is made. Our goal is to demonstrate that planning in this worst-case NSMDP allows minimizing the loss given any possible evolution of the environment. To illustrate this, we report results reflecting different evolutions of the same NSMDP using the evolution parameter. It should be noted that, from the initial state, RATS always moves to the left, even if the goal is further away, since going to the right may be risky if the probabilities to go Up and Down increase. This corresponds to the careful, risk-averse behaviour. Conversely, DP-snapshot always moves to the right, since the snapshot model does not capture this risk. As a result, a parameter value of 0 reflects a favourable evolution for DP-snapshot and a bad one for RATS. The opposite occurs with a value of 1, where the cautious behaviour dominates the risky one, and the in-between cases mitigate this effect.
In Figure 3(a), we display the achieved expected return for each algorithm as a function of the evolution parameter, i.e., as a function of the possible evolutions of the NSMDP. As expected, the performance of DP-snapshot strongly depends on this evolution. It achieves a high return for values close to 0 and a low return for values close to 1. Conversely, the performance of RATS varies less across the different values of the parameter. The effect illustrated here is that RATS maximizes the minimal possible return given any evolution of the NSMDP. It provides the guarantee to achieve the best return in the worst case. This behaviour is highly desirable when one requires robust performance guarantees as, for instance, in critical certification processes.
Figure 3(b) displays the return distributions of the three algorithms for a fixed value of the evolution parameter. The effect seen here is the tendency of RATS to diminish the left tail of the distribution, corresponding to low returns, for each evolution. This corresponds to the optimized criterion, robustly maximizing the worst-case value. A common risk measure is the Conditional Value at Risk (CVaR), defined as the expected return in the worst $\alpha$% of cases. We illustrate the CVaR at 5% achieved by each algorithm in Table 2. Notice that RATS always maximizes the CVaR compared to both DP-snapshot and DP-NSMDP. Indeed, even if the latter uses the true model, the optimized criterion in DP-NSMDP is the expected return.
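For reference, the empirical CVaR used in this comparison can be sketched as follows (our implementation of the standard estimator):

```python
def cvar(returns, alpha=0.05):
    """Empirical CVaR: the mean of the worst alpha-fraction of returns."""
    k = max(1, int(len(returns) * alpha))
    worst = sorted(returns)[:k]
    return sum(worst) / len(worst)
```

With 100 sampled returns and `alpha=0.05`, this averages the 5 lowest returns, which is exactly the left-tail quantity RATS is designed to protect.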
7 Conclusion
We proposed an approach for robust zero-shot planning in non-stationary stochastic environments. We introduced the framework of Lipschitz-Continuous Non-Stationary MDPs (LC-NSMDPs) and derived the Risk-Averse Tree-Search (RATS) algorithm, to predict the worst-case evolution and to plan optimally with respect to this worst-case NSMDP. We analyzed RATS theoretically and showed that it approximates a worst-case NSMDP, with a control parameter that is the depth of the search tree. We showed empirically the benefit of the approach, which searches for the highest lower bound on the worst achievable score. RATS is robust to every possible evolution of the environment, maximizing the expected worst-case outcome over the whole set of possible NSMDPs. Our method was applied to uncertainty on the evolution of a model. More generally, it could be extended to any uncertainty on the model used for planning, given bounds on the set of feasible models. The purpose of this contribution is to lay a basis of worst-case analysis for robust solutions to NSMDPs. As is, RATS is computationally intensive, and scaling the algorithm to larger problems is an exciting future challenge.
References

 Abel et al. [2018a] David Abel, Dilip Arumugam, Lucas Lehnert, and Michael Littman. State Abstractions for Lifelong Reinforcement Learning. In International Conference on Machine Learning, pages 10–19, 2018a.
 Abel et al. [2018b] David Abel, Yuu Jinnai, Sophie Yue Guo, George Konidaris, and Michael Littman. Policy and Value Transfer in Lifelong Reinforcement Learning. In International Conference on Machine Learning, pages 20–29, 2018b.
 Asadi et al. [2018] Kavosh Asadi, Dipendra Misra, and Michael L. Littman. Lipschitz continuity in model-based reinforcement learning. arXiv preprint arXiv:1804.07193, 2018.
 Bellman [1957] Richard Bellman. Dynamic programming. Princeton, USA: Princeton University Press, 1957.
 Browne et al. [2012] Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1–43, 2012.
 Bubeck and Munos [2010] Sébastien Bubeck and Rémi Munos. Open loop optimistic planning. In 10th Conference on Learning Theory, 2010.

 Campo et al. [1991] L. Campo, P. Mookerjee, and Y. Bar-Shalom. State estimation for systems with sojourn-time-dependent Markov model switching. IEEE Transactions on Automatic Control, 36(2):238–243, 1991.
 Choi et al. [1999] Samuel P.M. Choi, Dit-Yan Yeung, and Nevin L. Zhang. Hidden-mode Markov decision processes. In IJCAI Workshop on Neural, Symbolic, and Reinforcement Methods for Sequence Learning. Citeseer, 1999.
 Choi et al. [2000] Samuel P.M. Choi, Dit-Yan Yeung, and Nevin L. Zhang. Hidden-mode Markov decision processes for non-stationary sequential decision making. In Sequence Learning, pages 264–287. Springer, 2000.
 Choi et al. [2001] Samuel P.M. Choi, Nevin L. Zhang, and Dit-Yan Yeung. Solving hidden-mode Markov decision problems. In Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics, Key West, Florida, USA, 2001.
 Chung et al. [2015] Jen Jen Chung, Nicholas R.J. Lawrance, and Salah Sukkarieh. Learning to soar: Resource-constrained exploration in reinforcement learning. The International Journal of Robotics Research, 34(2):158–172, 2015.
 Csáji and Monostori [2008] Balázs Csanád Csáji and László Monostori. Value function based reinforcement learning in changing Markovian environments. Journal of Machine Learning Research, 9(Aug):1679–1709, 2008.
 Da Silva et al. [2006] Bruno C. Da Silva, Eduardo W. Basso, Ana L.C. Bazzan, and Paulo M. Engel. Dealing with non-stationary environments using context detection. In Proceedings of the 23rd International Conference on Machine Learning, pages 217–224. ACM, 2006.
 Dabney et al. [2018] Will Dabney, Mark Rowland, Marc G. Bellemare, and Rémi Munos. Distributional reinforcement learning with quantile regression. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
 Dick et al. [2014] Travis Dick, Andras Gyorgy, and Csaba Szepesvari. Online learning in Markov decision processes with changing cost sequences. In International Conference on Machine Learning, pages 512–520, 2014.
 Doya et al. [2002] Kenji Doya, Kazuyuki Samejima, Kenichi Katagiri, and Mitsuo Kawato. Multiple modelbased reinforcement learning. Neural computation, 14(6):1347–1369, 2002.
 Even-Dar et al. [2009] Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Online Markov Decision Processes. Mathematics of Operations Research, 34(3):726–736, 2009.
 Fudenberg and Tirole [1991] Drew Fudenberg and Jean Tirole. Game theory. MIT Press, Cambridge, Massachusetts, 1991.
 Hadoux [2015] Emmanuel Hadoux. Markovian sequential decision-making in non-stationary environments: application to argumentative debates. PhD thesis, UPMC, Sorbonne Universités CNRS, 2015.
 Hadoux et al. [2014] Emmanuel Hadoux, Aurélie Beynier, and Paul Weng. Sequential decision-making under non-stationary environments via sequential change-point detection. In Learning over Multiple Contexts (LMCE), 2014.
 Iyengar [2005] Garud N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, 2005.
 Jaulmes et al. [2005] Robin Jaulmes, Joelle Pineau, and Doina Precup. Learning in nonstationary partially observable Markov decision processes. In ECML Workshop on Reinforcement Learning in nonstationary environments, volume 25, pages 26–32, 2005.
 Kaelbling et al. [1998] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99–134, 1998.
 Keller and Helmert [2013] Thomas Keller and Malte Helmert. Trial-based heuristic tree search for finite horizon MDPs. In ICAPS, 2013.
 Kleinberg et al. [2008] Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Multi-armed bandits in metric spaces. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 681–690. ACM, 2008.
 Kocsis and Szepesvári [2006] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In European Conference on Machine Learning, pages 282–293. Springer, 2006.
 Lecarpentier et al. [2017] Erwan Lecarpentier, Sebastian Rapp, Marc Melo, and Emmanuel Rachelson. Empirical evaluation of a Q-Learning Algorithm for Model-free Autonomous Soaring. arXiv preprint arXiv:1707.05668, 2017.
 Lecarpentier et al. [2018] Erwan Lecarpentier, Guillaume Infantes, Charles Lesire, and Emmanuel Rachelson. Open loop execution of treesearch algorithms. IJCAI, 2018.
 Munos [2014] Rémi Munos. From bandits to Monte-Carlo tree search: The optimistic principle applied to optimization and planning. Foundations and Trends® in Machine Learning, 7(1):1–129, 2014.
 Pazis and Parr [2013] Jason Pazis and Ronald Parr. PAC Optimal Exploration in Continuous Space Markov Decision Processes. In AAAI, 2013.
 Pirotta et al. [2015] Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Policy gradient in Lipschitz Markov Decision Processes. Machine Learning, 100(2-3):255–283, 2015.
 Puterman [2014] Martin L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
 Rachelson and Lagoudakis [2010] Emmanuel Rachelson and Michail G. Lagoudakis. On the locality of action domination in sequential decision making. 2010.
 Silver et al. [2013] Daniel L. Silver, Qiang Yang, and Lianghao Li. Lifelong Machine Learning Systems: Beyond Learning Algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, volume 13, page 05, 2013.
 Silver et al. [2016] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
 Sutton et al. [1998] Richard S. Sutton, Andrew G. Barto, et al. Reinforcement learning: An introduction. MIT Press, 1998.
 Szita et al. [2002] István Szita, Bálint Takács, and András Lörincz. ε-MDPs: Learning in varying environments. Journal of Machine Learning Research, 3(Aug):145–174, 2002.
 Villani [2008] Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
 Wiering [2001] Marco A. Wiering. Reinforcement learning in dynamic environments using instantiated information. In Machine Learning: Proceedings of the Eighteenth International Conference (ICML2001), pages 585–592, 2001.