In this paper, we focus on developing methods for learning temporally extended actions [Sutton, Precup, and Singh1999] which are robust to model uncertainty. Temporally Extended Actions, also known as options [Sutton, Precup, and Singh1999], skills [da Silva, Konidaris, and Barto2012, Mankowitz, Mann, and Mannor2016b, Mankowitz, Mann, and Mannor2016a] or macro-actions [Hauskrecht et al.1998] have been shown both theoretically [Precup, Sutton, and Singh1998] and experimentally [Mann and Mannor2014] to result in faster convergence rates in RL planning algorithms. We refer to a Temporally Extended Action as an option from here on in. While much research has been dedicated to automatically learning options, e.g. [Şimşek and Barto2005, da Silva, Konidaris, and Barto2012, Mankowitz, Mann, and Mannor2016b, Mankowitz, Mann, and Mannor2016a, Bacon, Harb, and Precup2017], no work has, to the best of our knowledge, focused on learning options that are robust to model uncertainty.
To understand model uncertainty, consider a two-link robotic arm that is trying to lift a box (Figure 1). The arm, with link length l_0, can be modelled by a dynamical system f_{l_0}, also referred to as the state-transition function or transition model. These terms will be used interchangeably throughout the paper. The transition model governs the dynamics of this arm. Different models f_{l_1} and f_{l_2} are generated for arms with lengths l_1 and l_2 respectively. All of these arms are attempting to perform the same task. An RL agent trained using model f_{l_0} may not adequately perform the task using f_{l_1} or f_{l_2}. However, ideally the agent should be agnostic to the uncertainty in the model parameters and still be able to solve the task (i.e., lift the box).
Practical applications of RL rely on the following two-step blueprint: Step one - Build a Model: Models are attained in one of three ways - (1) A finite, noisy batch of data is acquired and a model is built based on this data; (2) A simplified, approximate model of the environment may be provided directly (e.g., power generation (http://www.gridlabd.org/), mining, etc.); (3) A model of the environment is derived (e.g., dynamical systems). Step two - Learn a policy: RL methods are then applied to find a good policy based on this model. In cases (1) and (2), the parameters of the model are uncertain due to the noisy, finite data and the simplified model respectively. In case (3), model uncertainty occurs when the parameters of the physical agent are uncertain, as discussed in the above-mentioned example. This is especially important for industrial robots that are periodically replaced with new robots that might not share the exact same physical specifications (and therefore have slightly different dynamical models). Learning a policy that is agnostic to the parameters of the model is crucial to being robust to model uncertainty.
We focus on (3): Learning policies in dynamical systems, using the robust MDP framework [Bagnell, Ng, and Schneider2001, Nilim and El Ghaoui2005, Iyenger2005], that are robust to model uncertainty (e.g., robots with different arm lengths). (Note that our theoretical framework can also deal with model uncertainty in cases (1) and (2).)
Why learn robust options? Previous works [Mankowitz, Mann, and Mannor2016b, Mankowitz, Mann, and Mannor2016a, Bacon, Harb, and Precup2017, Mankowitz, Tamar, and Mannor2017] have shown that options mitigate Feature-based Model Misspecification (FMM). In the linear setting, FMM occurs when a learning agent is provided with a limited policy feature representation that is not rich enough to solve the task. In the non-linear (deep) setting, FMM occurs when a deep network learns a sub-optimal set of features, resulting in sub-optimal performance. We show in our work that options are indeed necessary to mitigate FMM. However, as discussed in the above example (Figure 1), model uncertainty also results in sub-optimal performance. We show in our experiments that this is especially problematic in deep networks. Therefore, we learn robust options that mitigate both FMM and model uncertainty, which we collectively term model misspecification. (We do acknowledge that other forms of model misspecification exist. However, for this work we focus on FMM and model uncertainty.)
Policy Iteration (PI) [Sutton and Barto1998] is a powerful technique that is present in different variations [Lagoudakis and Parr2003, Konda and Tsitsiklis2000] in many RL algorithms. The Deep Q Network [Mnih2015] is one example of a powerful non-linear function approximator that employs a form of PI. Actor-Critic Policy Gradient (AC-PG) [Konda and Tsitsiklis1999, Sutton et al.2000] algorithms perform an online form of PI. As a result, we decided to perform option learning in a policy iteration framework.
We introduce the Robust Options Policy Iteration (ROPI) algorithm that learns robust options to mitigate model misspecification, with convergence guarantees. ROPI consists of two steps, illustrated in Figure 1: a Policy Evaluation (PE) step and a Policy Improvement (PI) step. For PE, we utilize RPVI [Tamar, Mannor, and Xu2014] to perform policy evaluation and learn the value function parameters w. We then perform PI using the robust policy gradient (discussed in Section 4). This process is repeated until convergence. ROPI learns robust options and a robust inter-option policy, and has theoretical convergence guarantees. We showcase the algorithm in both linear and non-linear (deep) feature settings.
In the linear setting we show that the non-robust version of a linear option learning algorithm called ASAP [Mankowitz, Mann, and Mannor2016a], learns an inherently robust set of options. This, we claim, is due to the coarseness of the chosen feature representation. This provides evidence that, in some cases, linear approximate dynamic programming algorithms may get robustness ‘for free’.
However, in the non-linear (deep) setting, explicitly incorporating robustness into the learning algorithm is crucial to being robust to model uncertainty. We incorporate ROPI into the Deep Q Network to form a Robust Options Deep Q Network (RO-DQN). Using the RO-DQN, the agent learns a robust set of options to solve multiple tasks (two dynamical systems) and mitigates model misspecification.
Main contributions: (1) Learning robust options using our novel ROPI algorithm with convergence guarantees; this includes developing a Robust Policy Gradient (R-PG) framework with a robust compatibility condition; (2) A linear version of ROPI that is able to mitigate model misspecification in Cartpole; (3) Experiments which suggest that linear approximate dynamic programming algorithms may get robustness ‘for free’ by utilizing a coarse feature representation; (4) The RO-DQN, which solves multiple tasks by learning robust options, using ROPI, to mitigate model misspecification.
In this section, we relate the background material to the relevant module in the ROPI algorithm (PE or PI as shown in Figure 1).
Robust Markov Decision Process (PE):
A Robust Markov Decision Process (RMDP) [Bagnell, Ng, and Schneider2001, Iyenger2005, Nilim and El Ghaoui2005] is represented by the tuple <S, A, r, γ, P> where S is a finite set of states; A is a finite set of actions; r(s, a) is the immediate reward, which is bounded and deterministic; and γ ∈ [0, 1) is the discount factor. Let P be the transition function, mapping a given state and action to a measure over next states. Given a state s and action a, nature is allowed to choose a transition to a new state from a family of transition probability functions P(s, a). This family is called the uncertainty set [Iyenger2005]. Uncertainty in the state transitions is therefore represented by P.
The goal in a Robust MDP is to learn a robust policy π [Iyenger2005] (in robust MDP literature the policy is often deterministic for notational convenience; a stochastic policy can be trivially derived by incorporating the uncertainty into the transitions), which is a function mapping states to actions, that maximizes the worst case performance of the robust value function V^π(s) = min_{p ∈ P} E^{p,π}[ sum_t γ^t r_t | s_0 = s ], where P is the uncertainty set over state transitions; r_t is the bounded immediate reward at time t; and V^π(s) is the worst-case expected return from state s [Iyenger2005, Nilim and El Ghaoui2005]. In order to solve this value function using policy evaluation, we define the robust operator for a given state and action: σ_{P(s,a)} v := inf { p^T v : p ∈ P(s, a) }, where v ∈ R^{|S|} [Iyenger2005]. They also defined the operator for a fixed policy π such that {σ^π v}(s) = sum_a π(a|s) σ_{P(s,a)} v. Using this operator, the robust value function is given by the following matrix equation: V^π = r^π + γ σ^π(V^π). It has been previously shown that the robust Bellman operator for a fixed policy is a contraction in the sup-norm [Iyenger2005], and the optimal robust Bellman operator is also a contraction with the optimal robust value function as its fixed point [Iyenger2005].
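To make the fixed-policy robust backup concrete, the following is a minimal tabular sketch of our own, in which the uncertainty set for each state-action pair is a finite list of candidate transition vectors and nature picks the minimizer (all names and numbers are illustrative, not the paper's):

```python
import numpy as np

def robust_backup(v, policy, rewards, uncertainty, gamma=0.9):
    """One application of the robust operator for a fixed stochastic policy:
    (Tv)(s) = sum_a policy[s,a] * (r(s,a) + gamma * min_{p in P(s,a)} p.v)."""
    n_states, n_actions = rewards.shape
    v_new = np.zeros(n_states)
    for s in range(n_states):
        for a in range(n_actions):
            # Worst-case expected next value over the finite uncertainty set.
            worst = min(p @ v for p in uncertainty[(s, a)])
            v_new[s] += policy[s, a] * (rewards[s, a] + gamma * worst)
    return v_new

# Tiny 2-state, 2-action example with two candidate models per (s, a).
rng = np.random.default_rng(0)
rewards = rng.uniform(size=(2, 2))
policy = np.full((2, 2), 0.5)
uncertainty = {}
for s in range(2):
    for a in range(2):
        ps = rng.uniform(size=(2, 2))
        uncertainty[(s, a)] = [p / p.sum() for p in ps]

v = np.zeros(2)
for _ in range(200):   # the contraction property drives v to the robust fixed point
    v = robust_backup(v, policy, rewards, uncertainty)
```

Because the operator is a sup-norm contraction, repeated application converges to the robust value function of the fixed policy.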
Robust Projected Value Iteration (PE): Most of the Robust MDP literature has focused on small to medium-sized MDPs. [Tamar, Mannor, and Xu2014] provide an approach capable of solving larger or continuous MDPs using function approximation. Suppose the value function is represented using linear function approximation [Sutton and Barto1998]: V(s) ≈ φ(s)^T w, where φ(s) is a d-dimensional feature vector and w is a set of parameters. Robust Projected Value Iteration (RPVI) [Tamar, Mannor, and Xu2014] involves solving the equation Φw = Π T(Φw), where T is the robust Bellman operator and Π is a projection operator onto the subspace spanned by Φ. Tamar et al. show that ΠT is a contraction with respect to the sup-norm (a variation of this has also been shown for the average reward case [Tewari and Bartlett2007]). This results in the update equation w_{k+1} = (Φ^T D Φ)^{-1} Φ^T D T(Φ w_k), which can be sampled from data and solved efficiently for parameterized uncertainty sets that are convex in the parameters [Tamar, Mannor, and Xu2014]. Here, Φ is a matrix with linearly independent feature vectors in its rows and D = diag(d^π), where d^π is the state distribution for a policy π. RPVI utilizes a deterministic policy for notational convenience, but we assume a stochastic policy for the remainder of this paper. Note that this equation can be written as a robust critic update in Actor Critic Policy Gradient (AC-PG) algorithms, with a robust TD error replacing the standard TD error, where the robust TD error is defined in Equation 1. The projection has been omitted since it can be viewed as a dynamic learning rate (see the Appendix for more details).
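A minimal sketch of one projected robust value-iteration step, assuming a finite uncertainty set of full transition matrices for the fixed policy and an explicitly given state weighting d (all names and numbers below are our own illustrative assumptions):

```python
import numpy as np

def rpvi_step(w, phi, d, rewards, models, gamma=0.9):
    """One projected robust backup: apply the robust operator to the current
    value estimate, then project onto span(phi) weighted by d."""
    v = phi @ w
    # Robust backup: worst case over a finite set of transition matrices.
    tv = rewards + gamma * np.min([P @ v for P in models], axis=0)
    D = np.diag(d)
    # Weighted least-squares projection onto the feature subspace.
    return np.linalg.solve(phi.T @ D @ phi, phi.T @ D @ tv)

# Tiny example: 3 states, 2 features, 2 candidate models for the fixed policy.
phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([0.5, 0.3, 0.2])
rewards = np.array([1.0, 0.0, 0.5])
models = [np.full((3, 3), 1 / 3),   # uniform transitions
          np.eye(3)]                # self-transitions

w = np.zeros(2)
for _ in range(300):
    w = rpvi_step(w, phi, d, rewards, models)
```

On this example the iteration settles at a fixed point of the projected robust backup; in general the contraction argument of Tamar et al. is what guarantees such convergence.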
Policy Gradient (PI):
Policy gradient is a standard technique in Reinforcement Learning that is used to estimate the policy parameters θ that maximize a performance objective J(θ) [Sutton, Precup, and Singh1999]. A typical performance objective is the discounted expected return J(θ) = E[ sum_t γ^t r_t | s_0 ], where r_t is the reward at time t, γ is the discount factor and s_0 is a given start state. The gradient has been previously shown to be: ∇_θ J(θ) = sum_s d^π(s) sum_a ∇_θ π_θ(a|s) f_w(s, a), where d^π(s) is the discounted state distribution and f_w(s, a) is an approximation of the action value function Q^π(s, a). This gradient is then used to update the parameters θ ← θ + α ∇_θ J(θ) for a stepsize α.
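To make the identity concrete, the following small numerical check of our own (the toy MDP, softmax parameterization and all numbers are illustrative assumptions) computes the exact gradient via the policy gradient formula and verifies it against finite differences of the objective:

```python
import numpy as np

# Toy two-state, two-action MDP (P, r, mu are illustrative assumptions).
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.3, 0.7]],   # P[a][s][s']
              [[0.1, 0.9], [0.6, 0.4]]])
r = np.array([[1.0, 0.0], [0.5, 2.0]])    # r[s][a]
mu = np.array([0.5, 0.5])                 # start-state distribution

def softmax_policy(theta):
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # pi[s][a]

def value_and_occupancy(pi):
    P_pi = np.einsum('sa,ast->st', pi, P)     # policy-averaged transitions
    r_pi = (pi * r).sum(axis=1)
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    d = mu @ np.linalg.inv(np.eye(2) - gamma * P_pi)  # discounted occupancy
    return V, d

def J(theta):
    V, _ = value_and_occupancy(softmax_policy(theta))
    return mu @ V

def analytic_grad(theta):
    pi = softmax_policy(theta)
    V, d = value_and_occupancy(pi)
    Q = r + gamma * np.einsum('ast,t->sa', P, V)
    # For a softmax policy the gradient identity reduces to
    # dJ/dtheta[s, b] = d(s) * pi(b|s) * (Q(s, b) - V(s)).
    return d[:, None] * pi * (Q - V[:, None])
```

Central finite differences of J agree with `analytic_grad` on this example, which is a standard sanity check for policy gradient implementations.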
Options (PI): A Reinforcement Learning option [Sutton, Precup, and Singh1999, Konidaris and Barto2009] consists of a three-tuple <I, π_θ, β>, where I is the set of initiation states from which the option can be executed; π_θ is the intra-option policy, parameterized by θ, and is a mapping from states to a probability distribution over actions; and β(s) indicates the probability of the option terminating when in state s.
Option Learning (PI): There are several recent approaches to option learning, but the approach we focus on is the Adaptive Skills, Adaptive Partitions (ASAP) framework [Mankowitz, Mann, and Mannor2016a], which enables an agent to automatically learn both a hierarchical policy and a corresponding option set. The hierarchical policy learns where to execute options by learning intersections of hyperplane half-spaces that divide up the state space. Figure 1 contains an example of two option hyperplanes whose intersection divides the state space into four option partitions. Each option partition contains an option. The options and option partitions are learned using a policy gradient approach and are represented within the ASAP policy. The ASAP policy is a function mapping a state to a probability distribution over actions and is defined as follows:
(ASAP Policy). Given K option hyperplanes, a set of 2^K options, and an MDP m from a space of possible MDPs, the ASAP policy is defined as π(a|s, m) = sum_i p(σ_i | s, m) π_{σ_i}(a|s), where p(σ_i | s, m) is the probability of executing option σ_i given that the agent is in state s and MDP m; π_{σ_i}(a|s) represents the probability that, given option σ_i is executing, option σ_i will choose action a.
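A simplified reading of this construction can be sketched as follows: with m hyperplanes, each state is softly assigned to one of 2^m option partitions, the probability of a partition being the product of per-hyperplane sigmoid "side" probabilities. This is our own minimal sketch, not the full ASAP parameterization:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def option_probs(state, hyperplanes):
    """Soft assignment of a state to one of 2^m option partitions, where each
    of the m hyperplane rows defines a half-space split of the state space."""
    sides = sigmoid(hyperplanes @ state)   # P(positive side) per hyperplane
    m = len(hyperplanes)
    probs = np.ones(2 ** m)
    for k in range(2 ** m):
        for j in range(m):
            bit = (k >> j) & 1             # partition index encodes the sides
            probs[k] *= sides[j] if bit else (1.0 - sides[j])
    return probs
```

Because the per-hyperplane side probabilities are independent Bernoulli factors, the partition probabilities always sum to one, so the hierarchical policy remains a valid distribution over options.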
Deep Q Networks (PE+PI): The DQN algorithm [Mnih2015] is a powerful Deep RL algorithm that has been able to solve numerous tasks from Atari video games to Minecraft [Tessler et al.2017]. The DQN stores its experiences in an experience replay buffer [Mnih2015] to remove data correlation issues. It learns by minimizing the Temporal Difference (TD) loss. Typically, a separate DQN is trained to solve each task. Other works have combined learning multiple tasks into a single network [Rusu2015] but require pre-trained experts to train an agent in a supervised manner. Recently, robustness has been incorporated into a DQN [Shashua and Mannor2017]. However, no work has, to the best of our knowledge, incorporated robust options into a DQN algorithm to solve multiple tasks in an online manner.
Throughout this paper we make the following assumptions, which are standard in Policy Gradient (PG) literature [Bhatnagar et al.2009]: Assumption (A1): Under any policy π, the Markov chain resulting from the Robust MDP is irreducible and aperiodic. Assumption (A2): The policy π_θ(a|s) for any s, a pair is continuously differentiable with respect to the parameter θ. In addition, we make Assumption (A3): The optimal value function is found within the hypothesis space of the function approximators being utilized. We use these assumptions to define the robust transition probability function and the robust steady-state distribution for the discounted setting. We then use these definitions to derive the robust policy gradient version of Equation 2.
3.1 Robust Transition Probability Distribution
The robust value function is defined for a policy π as V^π(s) = min_{p ∈ P} E^{p,π}[ sum_t γ^t r_t | s_0 = s ] and the robust action value function is given by Q^π(s, a) = r(s, a) + γ sum_{s'} p̄(s' | s, a) V^π(s'), where p̄ ∈ arg min_{p ∈ P(s,a)} sum_{s'} p(s' | s, a) V^π(s'). Here, p̄ is the transition probability distribution that minimizes the expected return for a given state and action and belongs to the pre-defined uncertainty set P. Since p̄ is selected independently for each state, we can construct a stochastic matrix P̄ where each row is defined by p̄(· | s, π(s)).
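Because the minimizing model is chosen independently per state, the worst-case transition matrix can be assembled row by row. A sketch with a finite candidate set of row-stochastic matrices for a fixed policy (names and numbers are our own illustrative assumptions):

```python
import numpy as np

def worst_case_matrix(models, V):
    """Assemble the worst-case transition matrix row by row: each row is taken
    from whichever candidate model minimizes that state's expected next value."""
    stacked = np.stack([P @ V for P in models])   # per-model expected next value
    picks = stacked.argmin(axis=0)                # minimizing model per state
    return np.stack([models[picks[s]][s] for s in range(len(V))])
```

By construction the assembled matrix is still row-stochastic, and its expected next value is no larger than that of any single candidate model at every state.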
3.2 Robust State Distribution
The worst-case transition matrix constructed above can be interpreted as an adversarial distribution in a zero-sum game in which the adversary fixes its worst case strategy [Filar and Vrieze2012].
Given the initial state distribution μ, the robust discounted state distribution is: d^π_μ(s) = sum_{t=0}^∞ γ^t Pr(s_t = s | μ, π, P̄), where P̄ is the worst-case transition matrix.
The robust discounted state distribution is the same as the state distribution used by [Sutton et al.2000] for the discounted setting. However, the transition kernel is selected robustly rather than assumed to be the transition kernel of the target MDP. The robust discounted state distribution intuitively represents executing the transition probability model that leads the agent to the worst (i.e., lowest value) areas of the state space.
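Under a fixed worst-case matrix, this discounted occupancy has the usual closed form via a matrix inverse; a brief sketch (names and numbers are our own illustrative assumptions):

```python
import numpy as np

def robust_discounted_distribution(mu, P_bar, gamma=0.9):
    """d = mu^T * sum_{t>=0} (gamma * P_bar)^t, computed in closed form as
    mu^T (I - gamma * P_bar)^{-1} for a fixed worst-case transition matrix."""
    n = len(mu)
    return mu @ np.linalg.inv(np.eye(n) - gamma * P_bar)
```

The closed form agrees with the truncated geometric series, which is a convenient check when debugging occupancy computations.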
4 Robust Policy Gradient
Using the above definitions, we can now derive the Robust Policy Gradient (R-PG) for the discounted setting, which is used for policy improvement in ROPI (similar results are obtained for the average reward setting). To derive the R-PG, we (1) define the robust performance objective and (2) derive the corresponding robust compatibility conditions, which enable us to incorporate a function approximator into the policy gradient.
4.1 Robust Performance Objective
R-PG optimizes the discounted expected reward objective J(θ) = min_{p ∈ P} E^{p, π_θ}[ sum_t γ^t r_t ], where P is a given uncertainty set; γ is a discount factor and r_t is a bounded reward at time t. Next we define the robust action value function as Q^π(s, a) = r(s, a) + γ sum_{s'} p̄(s' | s, a) V^π(s') and we denote the robust state value function as V^π(s). The robust policy π_θ is parameterized by the parameters θ. We wish to maximize J(θ) to obtain the optimal set of policy parameters θ* = arg max_θ J(θ).
4.2 Robust Policy Gradient
Given the robust performance objective with respect to the robust discounted state distribution, we can now derive the robust policy gradient with respect to the robust policy π_θ (parameterized by θ unless otherwise stated). As in [Sutton et al.2000], we derive the gradient using the robust formulation for the discounted scenario. The discounted expected reward case is presented as Lemma 1. The main difference between this Lemma and that of [Sutton et al.2000] is that we incorporate the robust state distribution and select, at each timestep, a transition distribution leading the agent to the areas of lowest value.
Suppose that we are maximizing the robust performance objective J(θ) from a given start state s_0 with respect to a policy π_θ parameterized by θ, and the robust action value function Q^π(s, a) is defined as above. Then the gradient with respect to the performance objective is: ∇_θ J(θ) = sum_s d^π_μ(s) sum_a ∇_θ π_θ(a|s) Q^π(s, a).
The vectorized robust gradient update is therefore θ ← θ + α ∇_θ J(θ). It is trivial to incorporate a baseline b(s) into the lemma that does not bias the gradient, leading to the gradient update equation: ∇_θ J(θ) = sum_s d^π_μ(s) sum_a ∇_θ π_θ(a|s) (Q^π(s, a) − b(s)).
4.3 Robust Compatibility Conditions
The above robust gradient update does not as yet possess the ability to incorporate function approximation. However, by deriving robust compatibility conditions, we can replace Q^π(s, a) with a linear function approximator f_w(s, a) = ψ(s, a)^T w. Here, w represents the approximator's parameters and ψ(s, a) = ∇_θ log π_θ(a|s) represents the compatibility features. The robust compatibility features are presented as Lemma 2. Note that this compatibility condition is with respect to the robust state distribution d^π_μ(s).
Let f_w(s, a) be an approximation to Q^π(s, a). If f_w minimizes the mean squared error E[(Q^π(s, a) − f_w(s, a))^2] and is compatible such that it satisfies ∇_w f_w(s, a) = ∇_θ log π_θ(a|s), then ∇_θ J(θ) = sum_s d^π_μ(s) sum_a ∇_θ π_θ(a|s) f_w(s, a).
5 Robust Options Policy Iteration (ROPI)
Given the robust policy gradient, we now present ROPI, defined as Algorithm 1. A parameterized uncertainty set and a nominal model without uncertainty are provided as input to ROPI. In practice, the uncertainty set can, for example, be confidence intervals specifying plausible values for the mean of a normal distribution; the nominal model can be the same normal distribution without the confidence intervals. At each iteration, trajectories are generated (Line 3) using the nominal model and the current option policy. These trajectories are utilized to learn the critic parameters w (Line 4) using RPVI [Tamar, Mannor, and Xu2014]. As stated in the background section, RPVI converges to a fixed point. Once it has converged to this fixed point, we then use this critic to learn the option policy parameters θ (Line 5) using the R-PG update. This is the policy improvement step. This process is repeated until convergence. The convergence theorem is presented as Theorem 1.
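The alternation of robust evaluation and improvement can be sketched in a tabular analogue of the ROPI loop: robust policy evaluation (PE) followed by greedy policy improvement (PI) against a finite uncertainty set of transition models (all numbers are our own illustrative assumptions, and greedy improvement stands in for the R-PG update):

```python
import numpy as np

gamma = 0.9
models = [np.array([[[0.9, 0.1], [0.2, 0.8]],
                    [[0.4, 0.6], [0.7, 0.3]]]),
          np.array([[[0.5, 0.5], [0.6, 0.4]],
                    [[0.1, 0.9], [0.3, 0.7]]])]   # P[a][s][s'] per model
r = np.array([[1.0, 0.0], [0.2, 2.0]])

def robust_q(V):
    # Worst case over candidate models for every (s, a) pair.
    return r + gamma * np.min([np.einsum('ast,t->sa', P, V) for P in models],
                              axis=0)

def evaluate(pi, iters=300):
    # Robust policy evaluation: iterate the robust backup to its fixed point.
    V = np.zeros(2)
    for _ in range(iters):
        V = (pi * robust_q(V)).sum(axis=1)
    return V

pi = np.full((2, 2), 0.5)       # start from the uniform policy
for _ in range(10):             # alternate PE and greedy PI until stable
    Q = robust_q(evaluate(pi))
    pi = np.eye(2)[Q.argmax(axis=1)]
```

The loop stabilizes at a policy that is greedy with respect to its own robust evaluation, and its worst-case value weakly dominates that of the initial uniform policy.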
Let π_θ and f_w be any differentiable function approximators for the option policy and value function respectively that satisfy the compatibility condition derived above, and for which the option policy is differentiable up to the second derivative with respect to θ. Let {α_k} be any step-size sequence satisfying sum_k α_k = ∞ and sum_k α_k^2 < ∞. Then the sequence defined by any θ_0, w_k = w(θ_k), and θ_{k+1} = θ_k + α_k ∇_θ J(θ_k) converges such that lim_{k→∞} ∇_θ J(θ_k) = 0.
We performed the experiments in two well-known continuous domains, CartPole and Acrobot (https://gym.openai.com/). The transition dynamics (models) of both Cartpole and Acrobot can be modelled as dynamical systems. In each experiment, the agent is faced with model misspecification, that is, Feature-based Model Misspecification (FMM) and model uncertainty. The agent mitigates FMM by utilizing options [Mankowitz, Mann, and Mannor2016b, Mankowitz, Mann, and Mannor2016a, Mankowitz, Mann, and Mannor2014] and model uncertainty by learning robust options using ROPI. We analyze the performance of ROPI in the linear and non-linear feature settings. In the linear setting, we apply ROPI to the Adaptive Skills, Adaptive Partitions (ASAP) [Mankowitz, Mann, and Mannor2016a] option learning framework. In the non-linear (deep) setting, we apply ROPI to our Robust Options DQN (RO-DQN) network.
The experiments are divided into two parts. In Section 6.4, we show that ROPI is not strictly necessary in the linear setting, as the learned linear ‘non-robust’ options for solving CartPole provide a natural form of robustness and mitigate model misspecification. This provides some evidence that linear approximate dynamic programming algorithms which use coarse feature representations may, in some cases, get robustness ‘for free’. We then ask whether this natural form of robustness is present in the deep setting. We show in Section 6.5 that it is not: here, robust options, learned using ROPI, are necessary to mitigate model misspecification. In each experiment, we compare (1) the misspecified agent (i.e., a policy that solves the task sub-optimally due to FMM and model uncertainty); (2) the ‘non-robust’ option learning algorithm that mitigates FMM; and (3) the robust option learning ROPI algorithm that mitigates FMM and model uncertainty (i.e., model misspecification).
Acrobot is a planar two-link robotic arm in a vertical plane (working against gravity). The robotic arm contains an actuator at the elbow, but no actuator at the shoulder, as shown in Figure 2. We focus on the swing-up task, whereby the agent needs to actuate the elbow to generate a motion that causes the arm to swing up and reach the goal height shown in the figure. The state space is a 4-tuple consisting of the shoulder angle, shoulder angular velocity, elbow angle and elbow angular velocity respectively. The action space consists of discrete torques applied to the elbow. A constant negative reward is received at each timestep while the agent has not reached the goal, and a terminal reward is received upon reaching the goal. Each episode has a fixed maximum length.
The CartPole system involves balancing a pole on a cart in a vertical position, as shown in Figure 2. This domain is modelled as a continuous state MDP. The continuous state space is a 4-tuple representing the cart location, cart horizontal speed, pole angle with respect to the vertical and the pole angular speed respectively. The available set of actions are constant forces applied to the cart in either the left or right direction. The agent receives a positive reward for each timestep that the cart balances the pole within the goal region (a fixed angular deviation from the central vertical line), as shown in the figure. If the episode terminates early, a penalty is received. Each episode has a fixed length, which caps the total reward an agent can receive over the course of an episode.
6.2 Uncertainty Sets
For each domain, we generated an uncertainty set. In Cartpole, the uncertainty set is generated by fixing a normal distribution over the length of the pole and sampling lengths from this distribution, within a fixed range, prior to training. Each sampled length is then substituted into the cartpole dynamics equations, generating different transition functions. A robust update is performed by choosing the transition function from the uncertainty set that generates the worst case value. In Acrobot, the uncertainty set is generated by fixing a normal distribution over the mass of the arm link between the shoulder and the elbow. Five masses are sampled from this distribution, within a fixed range, and the corresponding transition functions are generated.
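This sampling procedure can be sketched as follows, where a physical parameter (here a pole length) is drawn from a normal distribution and each sample instantiates one transition function. The dynamics below are a toy pendulum-style stand-in of our own, not the true cartpole equations, and all numbers are illustrative:

```python
import numpy as np

def make_transition(length, dt=0.02, g=9.8):
    """Instantiate one transition function for a sampled pole length."""
    def step(theta, theta_dot, force):
        # Toy pendulum-style update in which the length scales the dynamics.
        theta_ddot = (g * np.sin(theta) + force) / length
        return theta + dt * theta_dot, theta_dot + dt * theta_ddot
    return step

rng = np.random.default_rng(0)
# Sample lengths from a normal distribution, clipped to a plausible range.
lengths = np.clip(rng.normal(loc=0.5, scale=0.2, size=5), 0.1, 2.0)
uncertainty_set = [make_transition(l) for l in lengths]

# A robust update evaluates all candidate models and keeps the worst case.
next_states = [f(0.1, 0.0, 1.0) for f in uncertainty_set]
```

Each element of `uncertainty_set` plays the role of one transition function in the uncertainty set; the robust backup would then minimize a value estimate over these candidates.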
6.3 Nominal Model
During training, in both Cartpole and Acrobot, the agent transitions according to the nominal transition model. In Cartpole, the nominal model corresponds to a fixed nominal pole length; in Acrobot, it corresponds to a fixed nominal arm mass. During evaluation, the agent executes its learned policy on transition models with different parameter settings (i.e., systems with different pole lengths in Cartpole and different masses in Acrobot).
6.4 Linear ROPI: ASAP
We first tested an online variation of ROPI on the Cartpole domain using linear features. To do so, we implemented a robust version of Actor Critic Policy Gradient (AC-PG) where the critic is updated using the robust TD error as shown in Equation 1. We used a constant learning rate, which worked well in practice. The critic utilizes coarse binary features that discretize each state dimension into a small number of bins. We provide the actor with a limited policy representation, which is a probability distribution over the actions (left and right), independent of the state.
We then trained the agent on the nominal pole length. For evaluation, we averaged the performance of each learned policy over multiple episodes per parameter setting, where the parameters were pole lengths spanning a fixed range. As seen in Figure 3, the agent cannot solve the task using the limited policy representation for any pole length, resulting in FMM. To mitigate the misspecification, we learn non-robust options using the ASAP algorithm [Mankowitz, Mann, and Mannor2016a]. Using a single option hyperplane (see Section 2), ASAP learns two options, where each option’s intra-option policy contains the same limited policy representation as before. It is expected that the ASAP options mitigate the FMM and solve the task for pole lengths around the nominal pole length on which the agent is trained. It should, however, struggle to solve the task for significantly different pole lengths (i.e., model uncertainty).
To our surprise, the ASAP option learning algorithm was able to solve the task for all pole lengths above a threshold length, as shown in Figure 3, even though it was only trained on the nominal pole length. Even after grid searches over all of the learning parameters, the agent still solved the task across these pole lengths. This is compared to a robust version of ASAP (Figure 3) that mitigated the misspecification and solved the task across multiple pole lengths, as was expected.
We decided to analyze the learned ‘non-robust’ options from the ASAP algorithm. Figure 2 shows the learned option hyperplane that separates the domain into two different options; the axes correspond to two of the state dimensions. The red region indicates a learned option that always executes a force in the right direction. The blue region indicates a learned option that executes a force in the left direction. The learned option execution regions cover approximately half of the state space for each option. Therefore, if the agent is at point X in Figure 2 and the pole length varies (e.g., the two lengths shown in the figure), the transition dynamics will propagate the agent to slightly different regions of the state space in each case. However, these transitions generally keep the agent in the correct option execution region, due to the coarseness of the option partitions, providing a natural form of robustness. This is an interesting observation since it provides evidence that linear approximate dynamic programming algorithms with coarse feature representations may, in some cases, get robustness ‘for free’. The question now is whether this natural form of robustness can be translated to the non-linear (deep) setting.
6.5 Non-linear ROPI: RO-DQN
In the non-linear (deep) setting, we train an agent to learn robust options that mitigate model misspecification in a multi-task scenario. (In our setup, multi-task learning better illustrates the use-case of robust options mitigating model misspecification, compared to the single-task setup where the use-case is less clear.) Here, the learning agent needs to learn an option to solve Cartpole and an option to solve Acrobot using a common shared representation (i.e., a single network). The single network we use for each experiment is a DQN variant consisting of fully-connected hidden layers, trained for a fixed number of episodes (unless the tasks are solved earlier). For evaluation, each learned network is averaged over multiple episodes per parameter setting (i.e., a range of pole lengths for Cartpole and a range of masses for Acrobot).
In this setting, the DQN network struggles to learn good features to solve both tasks simultaneously using a common shared representation. It typically oscillates between sub-optimally solving each task, resulting in model misspecification. (While different modifications can potentially be added to the DQN to improve performance [Anschel, Baram, and Shimkin2017, Van Hasselt, Guez, and Silver2016, Wang et al.2015, Ioffe and Szegedy2015], the goal of this work is to show that without these modifications, options can be used to mitigate the model misspecification.) The average performance of the trained DQN across different parameter settings is shown in Figure 3 for CartPole and Acrobot respectively.
We therefore add options to mitigate the model misspecification. The Option DQN (O-DQN) network utilizes two ‘option’ heads created by duplicating the last hidden layer. These heads are trained in an alternating optimization fashion in an online manner (as opposed to policy distillation, which uses experts to learn the heads with supervised learning [Rusu2015]). That is, when executing an episode in Cartpole or Acrobot, the last hidden layer corresponding to that task is activated and backpropagation occurs with respect to the relevant option head. This network is able to learn options that solve both tasks, as seen in Figure 3 for CartPole and Acrobot respectively. However, as the parameters of the tasks change (and therefore the transition dynamics), the option performance of the O-DQN in both domains degrades. This is especially severe in Cartpole, as seen in the figure. Here, robustness is crucial to mitigating model misspecification due to uncertainty in the transition dynamics.
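The head-per-task layout can be sketched with a shared trunk and one linear "option head" per task, where only the active task's head receives gradients. This is a deliberately minimal numpy stand-in of our own, not the actual network architecture or sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
trunk = rng.normal(size=(4, 8))                  # shared representation
heads = {'cartpole': rng.normal(size=(8, 2)),    # one head per task,
         'acrobot':  rng.normal(size=(8, 3))}    # sized to its action space

def q_values(state, task):
    features = np.tanh(state @ trunk)            # shared features
    return features @ heads[task]                # task-specific option head

def update_head(state, task, td_error, action, lr=1e-2):
    """Backpropagate only into the active task's head (alternating training)."""
    features = np.tanh(state @ trunk)
    heads[task][:, action] += lr * td_error * features
```

In the real network the trunk is also updated; the sketch only illustrates that an update for one task leaves the other task's head untouched.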
We incorporated robustness into the O-DQN to form the Robust Option DQN (RO-DQN) network. This network performs an online version of ROPI and is able to learn robust options to solve multiple tasks in an online manner. The main difference is that the DQN loss function now incorporates the robust TD update discussed in Section 2; that is, the robust TD error of Equation 1 replaces the standard TD error [Shashua and Mannor2017]. The RO-DQN was able to learn options to solve both CartPole and Acrobot across the range of parameter settings, as seen in Figure 3 (see the Appendix for a combined average reward graph).
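A minimal sketch of such a robust TD error, assuming a finite set of candidate next-state distributions per state-action pair and a tabular Q purely to keep the example self-contained (all names are our own):

```python
import numpy as np

def robust_td_error(Q, s, a, reward, models, gamma=0.99):
    """Robust TD error: the expected max-Q over next states is minimized over
    the candidate transition models before forming the TD target."""
    # models: candidate next-state distributions p(s' | s, a), one per model.
    worst_next = min(p @ Q.max(axis=1) for p in models)
    return reward + gamma * worst_next - Q[s, a]
```

In a deep variant, `Q.max(axis=1)` would come from the target network, and the squared robust TD error would replace the standard DQN loss term.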
We have presented the ROPI framework, which is able to learn options that are robust to uncertainty in the transition model dynamics. ROPI has convergence guarantees and requires deriving a Robust Policy Gradient and the corresponding robust compatibility conditions. This is the first work of its kind that has attempted to learn robust options. In our experiments, we have shown that the linear options learned using the ‘non-robust’ ASAP algorithm have a natural form of robustness when solving CartPole, due to the coarseness of the option execution regions. However, this does not translate to the deep setting, where robust options are crucial to mitigating model uncertainty and therefore model misspecification. We utilized ROPI to learn our Robust Options DQN (RO-DQN), which learned robust options to solve Acrobot and Cartpole across different parameter settings. Robust options can be used to bridge the sim-to-real gap between robotic policies learned in simulation and the performance of the same policies when applied to a real robot. This framework also provides the building blocks for incorporating robustness into continual learning applications [Tessler et al.2017, Ammar, Tutunov, and Eaton2015], which include robotics and autonomous driving.
This research was supported by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL).
- [Ammar, Tutunov, and Eaton2015] Ammar, H. B.; Tutunov, R.; and Eaton, E. 2015. Safe policy search for lifelong reinforcement learning with sublinear regret. ICML.
- [Anschel, Baram, and Shimkin2017] Anschel, O.; Baram, N.; and Shimkin, N. 2017. Deep reinforcement learning with averaged target DQN. ICML.
- [Bacon, Harb, and Precup2017] Bacon, P.-L.; Harb, J.; and Precup, D. 2017. The option-critic architecture. AAAI.
- [Bagnell, Ng, and Schneider2001] Bagnell, J.; Ng, A. Y.; and Schneider, J. 2001. Solving uncertain markov decision problems. Robotics Institute, Carnegie Mellon University.
- [Bhatnagar et al.2009] Bhatnagar, S.; Sutton, R. S.; Ghavamzadeh, M.; and Lee, M. 2009. Natural actor–critic algorithms. Automatica 45(11):2471–2482.
- [da Silva, Konidaris, and Barto2012] da Silva, B.; Konidaris, G.; and Barto, A. 2012. Learning parameterized skills. In ICML.
- [Filar and Vrieze2012] Filar, J., and Vrieze, K. 2012. Competitive Markov decision processes. Springer Science & Business Media.
- [Hauskrecht et al.1998] Hauskrecht, M.; Meuleau, N.; Kaelbling, L. P.; Dean, T.; and Boutilier, C. 1998. Hierarchical solution of Markov decision processes using macro-actions. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, 220–229.
- [Ioffe and Szegedy2015] Ioffe, S., and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
- [Iyengar2005] Iyengar, G. N. 2005. Robust dynamic programming. Mathematics of Operations Research 30(2):257–280.
- [Konda and Tsitsiklis1999] Konda, V. R., and Tsitsiklis, J. N. 1999. Actor-critic algorithms. In NIPS, volume 13, 1008–1014.
- [Konda and Tsitsiklis2000] Konda, V. R., and Tsitsiklis, J. N. 2000. Actor-critic algorithms. In Advances in neural information processing systems, 1008–1014.
- [Konidaris and Barto2009] Konidaris, G., and Barto, A. G. 2009. Skill discovery in continuous reinforcement learning domains using skill chaining. In NIPS 22, 1015–1023.
- [Lagoudakis and Parr2003] Lagoudakis, M. G., and Parr, R. 2003. Least-squares policy iteration. Journal of Machine Learning Research 4(Dec):1107–1149.
- [Mankowitz, Mann, and Mannor2014] Mankowitz, D. J.; Mann, T. A.; and Mannor, S. 2014. Time regularized interrupting options. ICML.
- [Mankowitz, Mann, and Mannor2016a] Mankowitz, D. J.; Mann, T. A.; and Mannor, S. 2016a. Adaptive Skills, Adaptive Partitions (ASAP). NIPS.
- [Mankowitz, Mann, and Mannor2016b] Mankowitz, D. J.; Mann, T. A.; and Mannor, S. 2016b. Iterative Hierarchical Optimization for Misspecified Problems (IHOMP). EWRL.
- [Mankowitz, Tamar, and Mannor2017] Mankowitz, D. J.; Tamar, A.; and Mannor, S. 2017. Situationally aware options. arXiv.
- [Mann and Mannor2014] Mann, T. A., and Mannor, S. 2014. Scaling up approximate value iteration with options: Better policies with fewer iterations. In Proceedings of the ICML.
- [Mnih2015] Mnih, V., et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
- [Nilim and El Ghaoui2005] Nilim, A., and El Ghaoui, L. 2005. Robust control of Markov decision processes with uncertain transition matrices. Operations Research 53(5):780–798.
- [Precup, Sutton, and Singh1998] Precup, D.; Sutton, R. S.; and Singh, S. 1998. Theoretical results on reinforcement learning with temporally abstract options. In ECML-98. Springer. 382–393.
- [Rusu2015] Rusu, A. A., et al. 2015. Policy distillation. arXiv preprint, 1–12.
- [Shashua and Mannor2017] Shashua, S. D.-C., and Mannor, S. 2017. Deep robust kalman filter. arXiv preprint arXiv:1703.02310.
- [Şimşek and Barto2005] Şimşek, Ö., and Barto, A. G. 2005. Learning skills in reinforcement learning using relative novelty. In Abstraction, Reformulation and Approximation. Springer. 367–374.
- [Sutton and Barto1998] Sutton, R., and Barto, A. 1998. Reinforcement Learning: An Introduction. MIT Press.
- [Sutton et al.2000] Sutton, R. S.; McAllester, D.; Singh, S.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1057–1063.
- [Sutton, Precup, and Singh1999] Sutton, R. S.; Precup, D.; and Singh, S. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112(1):181–211.
- [Tamar, Mannor, and Xu2014] Tamar, A.; Mannor, S.; and Xu, H. 2014. Scaling up robust MDPs using function approximation. ICML 2:1401–1415.
- [Tessler et al.2017] Tessler, C.; Givony, S.; Zahavy, T.; Mankowitz, D. J.; and Mannor, S. 2017. A deep hierarchical approach to lifelong learning in minecraft. AAAI.
- [Tewari and Bartlett2007] Tewari, A., and Bartlett, P. L. 2007. Bounded parameter Markov decision processes with average reward. In International Conference on Computational Learning Theory, 263–277. Springer.
- [Van Hasselt, Guez, and Silver2016] Van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep reinforcement learning with double q-learning. In AAAI, 2094–2100.
- [Wang et al.2015] Wang, Z.; Schaul, T.; Hessel, M.; van Hasselt, H.; Lanctot, M.; and de Freitas, N. 2015. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581.