1 Introduction
Reinforcement Learning (RL) has been a driving force of many recent AI breakthroughs, such as systems capable of matching or surpassing human performance in Jeopardy! Tesauro et al. (2013), Go Silver et al. (2016, 2017), Atari games Mnih et al. (2015), autonomous driving Chen et al. (2017b, a), and conversing with humans Serban et al. (2017). In these domains, the RL policy makes step-by-step decisions, operating at the finest-grained time scale. Such an approach is not feasible in several other domains where delayed credit assignment is a major challenge. A principled approach to combat this challenge is to use temporally extended actions, allowing agents to operate at a higher level of abstraction by having actions execute for different amounts of time. Temporally extended actions lead to hierarchical control architectures. They provide a divide-and-conquer approach to tackling complex tasks by learning to compose the right set of subroutines to succeed in the task. They also potentially allow much faster adaptation to changes in the environment, which may only involve small changes in the task hierarchy (e.g., changing the goal state in a maze navigation task). In this work we are interested in developing efficient methods for automatically learning task hierarchies in RL. We focus on the options framework Sutton et al. (1999b), a general framework for modeling temporally extended actions.
Autonomous option discovery has been the subject of extensive research since the late 90's, with a large number of HRL algorithms being based on the options framework Simsek and Barto (2004); Daniel et al. (2016); Florensa et al. (2017); Konidaris and Barto (2009); Machado et al. (2017c); Mankowitz et al. (2016); McGovern and Barto (2001). In this paper we explore the recently introduced concept of eigenoptions (EOs) Machado et al. (2017a), a set of diverse options, obtained from a learned latent representation, that allows efficient exploration. Such an idea is promising and quite different from others in the literature. However, existing algorithms for eigenoption discovery have the following limitations: 1) they have two clearly distinct phases that can incur significant storage and computational costs, namely (i) learning the intra-option policies and (ii) learning the policy over options to complete the task; 2) they are only defined for problems with discrete state spaces; and 3) they do not support the use of the environment's reward function in the eigenoption discovery process. In this paper we introduce an algorithm for eigenoption discovery that addresses these issues. We do so by porting the idea of eigenoptions to the option-critic (OC) architecture Bacon et al. (2017), which is capable of simultaneously learning intra-option policies and the policy over options; and by using the Nyström approximation to generalize eigenoptions to problems with continuous state spaces. We term our algorithm Eigenoption-Critic (EOC). From the OC perspective, our algorithm can be seen as extending the OC architecture to better deal with non-stationary environments and to generate more diverse options that are general enough to be transferable across tasks.
2 Preliminaries and Notations
We consider sequential decision-making problems formulated as Markov decision processes (MDPs). An MDP is specified by a quintuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}(s'|s,a)$ is the state-transition model, $r(s,a)$ is the reward function, and $\gamma \in [0,1)$ is a discount factor. The goal of an RL agent is to find a control policy $\pi$ that maximizes the expected discounted return $\mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\big]$. The policy can be derived from a value function such as the state-value function $V^\pi(s)$ or the state-action value function $Q^\pi(s,a)$. Specifically, we have $\pi(s) = \operatorname{arg\,max}_a Q^\pi(s,a)$. In RL settings we use samples collected through agent-environment interactions to estimate the value functions.
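As a toy illustration of these definitions, the sketch below (our own, not from the paper) derives a greedy policy from a tabular $Q$ and computes a discounted return; the shapes and reward values are hypothetical:

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
Q = rng.random((n_states, n_actions))  # stand-in for a learned Q(s, a)

# pi(s) = argmax_a Q(s, a): the greedy policy derived from Q
pi = Q.argmax(axis=1)

# Discounted return of a reward sequence: G = sum_t gamma^t * r_t
rewards = [1.0, 0.0, 2.0]
G = sum(gamma**t * r for t, r in enumerate(rewards))
```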
Options in HRL are defined by a set of triples $\{\langle \mathcal{I}_i, \beta_i, \pi_i \rangle\}_{i=1}^{N}$, where $N$ is the total number of options, $\mathcal{I}_i \subseteq \mathcal{S}$ is the initiation set of option $i$, $\beta_i : \mathcal{S} \to [0,1]$ is the stochastic termination condition of option $i$, and $\pi_i$ is the stochastic intra-option policy of option $i$.
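The option triple can be represented directly in code; the following sketch is purely illustrative (the field names and toy environment are ours, not the paper's):

```python
from dataclasses import dataclass
from typing import Callable, Set

# Minimal representation of an option <I, beta, pi>.
@dataclass
class Option:
    initiation_set: Set[int]              # I: states where the option may start
    termination: Callable[[int], float]   # beta(s): prob. of terminating in s
    policy: Callable[[int], int]          # pi(s): intra-option action choice

# A toy option that can start in states {0, 1}, always takes action 1
# ("move right"), and terminates with probability 1 in state 3.
opt = Option(
    initiation_set={0, 1},
    termination=lambda s: 1.0 if s == 3 else 0.0,
    policy=lambda s: 1,
)
```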
The Option-Critic (OC) architecture is a policy gradient method that provides a general framework for option discovery in HRL Bacon et al. (2017). The OC architecture facilitates joint option learning and automatic option discovery on the fly. Specifically, it is based on the following state-option value and state-action-option value:
$$Q_\Omega(s,\omega) = \sum_{a} \pi_{\omega,\theta}(a|s)\, Q_U(s,\omega,a), \qquad (1)$$
$$Q_U(s,\omega,a) = r(s,a) + \gamma \sum_{s'} \mathcal{P}(s'|s,a)\, U(\omega,s'), \qquad (2)$$
where $U(\omega,s') = \big(1-\beta_{\omega,\vartheta}(s')\big) Q_\Omega(s',\omega) + \beta_{\omega,\vartheta}(s')\, V_\Omega(s')$. One can use the policy gradient theorem Sutton et al. (1999a) to obtain the gradient of the expected discounted return with respect to the intra-option policy parameters $\theta$ and to the option termination function parameters $\vartheta$, respectively:
$$\sum_{s,\omega} \mu_\Omega(s,\omega) \sum_{a} \frac{\partial \pi_{\omega,\theta}(a|s)}{\partial \theta}\, Q_U(s,\omega,a), \qquad (3)$$
$$-\sum_{s',\omega} \mu_\Omega(s',\omega)\, \frac{\partial \beta_{\omega,\vartheta}(s')}{\partial \vartheta}\, A_\Omega(s',\omega), \qquad (4)$$
where $A_\Omega(s',\omega) = Q_\Omega(s',\omega) - V_\Omega(s')$ is the advantage function and $\mu_\Omega$ is a discounted weighting of state-option pairs. The OC architecture simultaneously learns intra-option policies, termination conditions, and the policy over options through two steps: 1) Critic step: evaluating $Q_\Omega$ (Eq. 1) and $Q_U$ (Eq. 2) with TD errors; and 2) Actor step: estimating Eq. 3 and Eq. 4 to update the parameters of the intra-option policies and of the termination functions. Notice that the OC algorithm is a policy gradient method focused on maximizing the return. It does not try to capture the topology of the state space and it does not incentivize the agent to keep exploring the environment. Thus, OC can be inefficient for solving non-stationary tasks, as the discovered options tend to be tailored to a particular task instead of being general enough for transfer.
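The critic/actor steps can be sketched in tabular form as follows. This is our paraphrase, not the authors' implementation: approximating $V_\Omega$ by a greedy max over options, and all hyperparameters and shapes, are assumptions made for illustration.

```python
import numpy as np

n_states, n_options, n_actions = 5, 2, 3
gamma, lr = 0.99, 0.1
Q_omega = np.zeros((n_states, n_options))         # Q_Omega(s, w)
Q_u = np.zeros((n_states, n_options, n_actions))  # Q_U(s, w, a)
beta = np.full((n_states, n_options), 0.5)        # termination probabilities

def critic_step(s, w, a, r, s_next):
    # U(w, s') = (1 - beta) Q_Omega(s', w) + beta * V(s'),
    # with V(s') approximated here by max_w' Q_Omega(s', w').
    v_next = Q_omega[s_next].max()
    u = (1 - beta[s_next, w]) * Q_omega[s_next, w] + beta[s_next, w] * v_next
    target = r + gamma * u
    Q_u[s, w, a] += lr * (target - Q_u[s, w, a])    # TD update of Q_U
    Q_omega[s, w] += lr * (target - Q_omega[s, w])  # TD update of Q_Omega

def termination_step(s_next, w):
    # Direction of Eq. 4: raise beta where the option's advantage
    # A(s', w) = Q_Omega(s', w) - V(s') is negative (option is poor there).
    advantage = Q_omega[s_next, w] - Q_omega[s_next].max()
    beta[s_next, w] = np.clip(beta[s_next, w] - lr * advantage, 0.0, 1.0)

# One illustrative transition: (s=0, w=0, a=1, r=1.0, s'=2)
critic_step(0, 0, 1, 1.0, 2)
termination_step(2, 0)
```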
3 The Eigenoption-Critic Framework
Eigenoptions Machado et al. (2017a) are a set of options obtained from the eigenvectors of the graph Laplacian of the undirected graph formed by the state transitions induced by the MDP. Specifically, an adjacency matrix $W$ can be estimated through a Gaussian kernel with each element specified as $W_{ij} = \exp\!\big(-\|x_i - x_j\|^2/(2\sigma^2)\big)$, upon which we can build a combinatorial graph Laplacian $L = D - W$, where $D$ is a diagonal matrix with the $i$th diagonal element defined as $D_{ii} = \sum_j W_{ij}$. By performing an eigenvalue decomposition of $L$, one obtains eigenvectors, also known in the RL community as proto-value functions (PVFs) Mahadevan and Maggioni (2007a). The PVFs capture large-scale temporal properties of a diffusion process; they have been shown to be useful for representation learning Mahadevan and Maggioni (2007b) and have been used as intrinsic reward signals for guiding exploration Machado et al. (2017a, c). Specifically, given $e_i$, the $i$th eigenvector of $L$, one can specify an intrinsic reward function as:
$$r^{e_i}(s, s') = e_i^\top \big(\phi(s') - \phi(s)\big), \qquad (5)$$
where $\phi(\cdot)$ denotes the feature representation of a given state. This reward can be seen as an intrinsic motivation for the agent to explore the environment. Essentially, $e_i$ defines a diffusion model, which captures the information flow on a graph or a manifold (the topology of the underlying state space).
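The PVF construction and the intrinsic reward of Eq. 5 can be illustrated on a tiny chain graph (our own example; the one-hot state features are an assumption, chosen so that the reward reduces to a difference of eigenvector entries):

```python
import numpy as np

# Adjacency of a 4-state chain graph: 0 - 1 - 2 - 3
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
L = D - W                             # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
e1 = eigvecs[:, 1]                    # first non-trivial PVF

phi = np.eye(4)                       # one-hot state features

def intrinsic_reward(s, s_next, e):
    # Eq. 5: reward for moving "along" the eigenvector direction
    return float(e @ (phi[s_next] - phi[s]))
```

Note the antisymmetry: traversing an edge in the opposite direction flips the sign of the reward, which is what pushes the agent toward one end of the diffusion direction.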
[Algorithm 1 (Eigenoption-Critic, tabular case): at each step, compute the eigenfunction value at the current state, the intrinsic reward, and the mixed reward using Eqs. 7, 5, and 6, respectively; 1. update the critic (option evaluation) via TD updates of $Q_U$ and $Q_\Omega$; 2. update the actor (option improvement) via the intra-option policy and termination gradients; if the current option terminates in $s'$, choose a new option according to a softmax over $Q_\Omega(s', \cdot)$.]

3.1 Eigenoption-Critic Framework
Previous work on eigenoptions Machado et al. (2017a) maximizes rewards in a two-phase procedure: offline eigenoption discovery and online reward maximization. In this work we propose to learn eigenoptions online by integrating the intrinsic reward into the OC architecture. By doing so, the agent can learn eigenoptions even while it observes no extrinsic rewards, which can be interpreted as an auxiliary task Jaderberg et al. (2017). This can also be seen as promoting exploration before the agent stumbles upon the goal state. We define the following mixed reward function as a convex combination of the extrinsic and intrinsic rewards:
$$r^m(s, a, s') = \alpha\, r(s,a) + (1-\alpha)\, r^{e_i}(s, s'), \qquad \alpha \in [0,1]. \qquad (6)$$
The motivation is that we learn options that encode information about diffusion, but do not ignore the extrinsic rewards. Note that we can recover the original options of OC and EOs when $\alpha$ is set to $1$ and $0$, respectively. Also, note that the mixed reward (Eq. 6) is only used for learning options, while the policy over options is still learned from extrinsic rewards, since those are the only useful feedback for evaluating task performance. Algorithm 1 summarizes the EOC algorithm in the tabular case.
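A minimal sketch of the mixed reward, with $\alpha$ as our name for the mixing coefficient (the symbol used in the original equation is not recoverable from this text):

```python
def mixed_reward(r_extrinsic: float, r_intrinsic: float, alpha: float) -> float:
    """Convex combination of extrinsic and intrinsic rewards (Eq. 6).

    alpha = 1 recovers plain OC (extrinsic reward only);
    alpha = 0 recovers pure eigenoptions (intrinsic reward only).
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * r_extrinsic + (1.0 - alpha) * r_intrinsic

# Intra-option policies learn from the mixed reward...
r_option = mixed_reward(1.0, 0.3, alpha=0.5)
# ...while the policy over options uses the extrinsic reward only.
r_meta = mixed_reward(1.0, 0.3, alpha=1.0)
```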
3.2 Generalization of the Eigenoption-Critic Framework to Continuous Domains
In this section, we discuss the extension of the Eigenoption-Critic framework to problems with continuous state spaces. Although the OC architecture naturally handles continuous domains, the EO framework was not defined for them. We fill this gap by showing how one can generalize eigenvectors to eigenfunctions and how one can interpolate the value of eigenvectors computed on sampled states to a novel state. This generalization is achieved via the Nyström approximation Mahadevan and Maggioni (2007b):
$$e_i(x) \approx \frac{1}{1-\lambda_i} \sum_{j} \frac{W(x, x_j)}{\sqrt{d(x)\, d(x_j)}}\, e_i(x_j), \qquad (7)$$
where $\lambda_i$ is the $i$th eigenvalue, $e_i(x_j)$ is the $i$th eigenvector value at an anchor point $x_j$, $d(x)$ is the degree of $x$, a new vertex on the graph, and $d(x_j)$ is the degree of an anchor point $x_j$. Note that $d(x)$ is determined when a new state is encountered in the online learning phase, whereas $d(x_j)$ is precomputed when the graph is constructed.
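A sketch of the Nyström extension, assuming a Gaussian kernel; the anchor points, eigenpair values, anchor degrees, and bandwidth below are all hypothetical:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # W(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def nystrom_extend(x, anchors, e_i, lam_i, d_anchors, sigma=1.0):
    """Interpolate eigenvector e_i (known at anchor points) to a new state x."""
    w = np.array([gaussian_kernel(x, xj, sigma) for xj in anchors])
    d_x = w.sum()  # degree of the new vertex x, computed online
    # e_i(x) ~= 1/(1 - lam_i) * sum_j W(x, x_j) / sqrt(d(x) d(x_j)) * e_i(x_j)
    return np.sum(w / np.sqrt(d_x * d_anchors) * e_i) / (1.0 - lam_i)

anchors = np.array([[0.0], [1.0], [2.0]])        # sampled anchor states
e_i, lam_i = np.array([0.2, -0.1, 0.3]), 0.4     # hypothetical eigenpair
d_anchors = np.array([1.5, 2.0, 1.5])            # precomputed anchor degrees
value = nystrom_extend(np.array([0.5]), anchors, e_i, lam_i, d_anchors)
```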
4 Experiments
We validate the proposed method on two benchmarks under a non-stationary setting, where goal locations change periodically without the agent being made aware of the change. We use the same parameters reported by Bacon et al. (2017) in our experiments; due to space limits, readers are referred to their paper for details.
Four-rooms domain. The first experiment is performed on a navigation task in the four-rooms domain Sutton et al. (1999b), which has a discrete state space. The initial state is randomly generated for each episode. After a fixed number of episodes, the goal location changes to a new one drawn uniformly over valid positions. Performance is measured by the number of steps the agent takes to reach the goal, and is shown in Figure 1. It can be seen that, for the first goal setting, the OC architecture outperforms the other methods when enough samples are used. This is because OC's learning mechanism is based solely on extrinsic rewards, which are obtained only when the agent hits the goal location, whereas EOC also uses the intrinsic reward, which allows more efficient exploration, to update the intra-option policies. When the goal location changes, because of this more efficient exploration, EOC outperforms OC on most of the subsequent tasks, especially when the goal location moves between two rooms. EOC with an intermediate mixing weight achieves the best performance, suggesting that the combination of both approaches is indeed promising.

Pinball domain. The second experiment is performed on the pinball domain Konidaris and Barto (2009), which has a continuous state space. The agent starts in the upper-left corner in every episode. The goal location changes after 250 episodes (red circle in Figure 2). In this domain, we used intra-option Q-learning with linear function approximation over a Fourier basis. We used the data collected from the first 10 episodes to build a k-d tree, and we search the k = 15 nearest neighbors to perform the Nyström approximation of the eigenfunctions. We plot the undiscounted return in Figure 1. As in the first experiment, except for the first task, EOC outperforms OC on subsequent tasks when using the same number of options.
5 Conclusions
This paper presents a novel algorithm, termed Eigenoption-Critic (EOC), for hierarchical reinforcement learning. EOC extends the option-critic (OC) method by augmenting the extrinsic reward signal with eigenpurposes, intrinsic rewards defined by eigenvectors of the graph Laplacian of the state space. It allows online eigenoption discovery and is also applicable to continuous state spaces. Our experiments show that, in general, EOC outperforms OC on both discrete and continuous benchmarks when non-stationarity is introduced. For future work, we will test the integration of EOC with deep neural networks on domains such as Atari games Bellemare et al. (2013); Machado et al. (2017b) and high-dimensional robotic control Duan et al. (2016).

References

Bacon et al. (2017) P. Bacon, J. Harb, and D. Precup. The option-critic architecture. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 1726–1734, 2017.
Bellemare et al. (2013) M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
 Chen et al. (2017a) Y. Chen, M. Everett, M. Liu, and J. P. How. Socially aware motion planning with deep reinforcement learning. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017a.
Chen et al. (2017b) Y. Chen, M. Liu, M. Everett, and J. P. How. Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 285–292, May 2017b. doi: 10.1109/ICRA.2017.7989037.
Daniel et al. (2016) C. Daniel, H. van Hoof, J. Peters, and G. Neumann. Probabilistic Inference for Determining Options in Reinforcement Learning. Machine Learning, 104(2-3):337–357, 2016.
Duan et al. (2016) Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the 33rd International Conference on Machine Learning, ICML'16, pages 1329–1338. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045531.
 Florensa et al. (2017) C. Florensa, Y. Duan, and P. Abbeel. Stochastic Neural Networks for Hierarchical Reinforcement Learning. In Proc. of the International Conference on Learning Representations (ICLR), 2017.
 Jaderberg et al. (2017) M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement Learning with Unsupervised Auxiliary Tasks. In Proc. of the International Conference on Learning Representations (ICLR), 2017.
 Konidaris and Barto (2009) G. Konidaris and A. G. Barto. Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining. In Advances in Neural Information Processing Systems (NIPS), pages 1015–1023, 2009.
Machado et al. (2017a) M. C. Machado, M. G. Bellemare, and M. H. Bowling. A Laplacian framework for option discovery in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2295–2304, 2017a.
 Machado et al. (2017b) M. C. Machado, M. G. Bellemare, E. Talvitie, J. Veness, M. Hausknecht, and M. Bowling. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. CoRR, abs/1709.06009, 2017b.
 Machado et al. (2017c) M. C. Machado, C. Rosenbaum, X. Guo, M. Liu, G. Tesauro, and M. Campbell. Eigenoption Discovery through the Deep Successor Representation. CoRR, abs/1710.11089, 2017c. URL http://arxiv.org/abs/1710.11089.
Mahadevan and Maggioni (2007a) S. Mahadevan and M. Maggioni. Proto-value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes. Journal of Machine Learning Research (JMLR), 8:2169–2231, 2007a.
Mahadevan and Maggioni (2007b) S. Mahadevan and M. Maggioni. Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes. Journal of Machine Learning Research, 8(Oct):2169–2231, 2007b.
 Mankowitz et al. (2016) D. J. Mankowitz, T. A. Mann, and S. Mannor. Adaptive Skills Adaptive Partitions (ASAP). In Advances in Neural Information Processing Systems (NIPS), pages 1588–1596, 2016.
 McGovern and Barto (2001) A. McGovern and A. G. Barto. Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density. In Proc. of the International Conference on Machine Learning (ICML), pages 361–368, 2001.
Mnih et al. (2015) V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, Feb. 2015. ISSN 0028-0836. URL http://dx.doi.org/10.1038/nature14236.
 Serban et al. (2017) I. V. Serban, C. Sankar, M. Germain, S. Zhang, Z. Lin, S. Subramanian, T. Kim, M. Pieper, S. Chandar, N. R. Ke, S. Mudumba, A. de Brebisson, J. M. R. Sotelo, D. Suhubdy, V. Michalski, A. Nguyen, J. Pineau, and Y. Bengio. A deep reinforcement learning chatbot. CoRR, 2017. URL https://arxiv.org/abs/1709.02349.
 Silver et al. (2016) D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, Jan 2016. ISSN 00280836. doi: 10.1038/nature16961.
 Silver et al. (2017) D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, Oct. 2017. ISSN 00280836. URL http://dx.doi.org/10.1038/nature24270.
 Simsek and Barto (2004) Ö. Simsek and A. G. Barto. Using Relative Novelty to Identify Useful Temporal Abstractions in Reinforcement Learning. In Proc. of the International Conference on Machine Learning (ICML), 2004.
 Sutton et al. (1999a) R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS’99, pages 1057–1063, Cambridge, MA, USA, 1999a. MIT Press. URL http://dl.acm.org/citation.cfm?id=3009657.3009806.
Sutton et al. (1999b) R. S. Sutton, D. Precup, and S. P. Singh. Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning. Artificial Intelligence, 112(1-2):181–211, 1999b.
 Tesauro et al. (2013) G. Tesauro, D. Gondek, J. Lenchner, J. Fan, and J. M. Prager. Analysis of Watson’s strategies for playing Jeopardy! J. Artif. Intell. Res., 47:205–251, 2013. doi: 10.1613/jair.3834. URL https://doi.org/10.1613/jair.3834.