Autonomous exploration for navigating in non-stationary CMPs

10/18/2019 ∙ by Pratik Gajane, et al.

We consider a setting in which the objective is to learn to navigate in a controlled Markov process (CMP) where transition probabilities may abruptly change. For this setting, we propose a performance measure called exploration steps, which counts the time steps at which the learner lacks sufficient knowledge to navigate its environment efficiently. We devise a learning meta-algorithm, MNM, and prove an upper bound on the exploration steps in terms of the number of changes.


1 Introduction

The ability to quickly learn to reliably control one’s environment is core to the functionality of intelligent agents. Throughout the last decades, much work has been devoted to the design and testing of various algorithms targeted at this task, under various names such as learning using intrinsic motivation, intrinsic reward, curiosity-driven learning, etc. A necessarily incomplete sample of prior works in the area includes that of Schmidhuber (1991); Singh et al. (2004); Oudeyer and Kaplan (2007); Oudeyer et al. (2007); Baranes and Oudeyer (2009); Schmidhuber (2010); Singh et al. (2010); Lopes et al. (2012); Gottlieb et al. (2013); Stadie et al. (2015); Houthooft et al. (2016); Achiam and Sastry (2017); Ostrovski et al. (2017); Pathak et al. (2017); Haber et al. (2018); Burda et al. (2019); Azar et al. (2019); Hazan et al. (2019). Conceptually, the problem can be thought of as learning to reliably navigate an unknown environment. In this article we focus on this problem, and in particular, on learning to navigate in the face of a changing, or non-stationary, environment. Following Lim and Auer (2012), we consider the case when an agent interacts with a controlled Markov process (CMP) equipped with finitely many actions and at most countably many states, where the state is observable after every transition and a reset action is available which brings the agent back to some initial state. The problem then is to minimize the number of steps where the agent lacks the ability to reliably navigate to safely reachable states. Since the number of states is unbounded, the agent is given as input a ‘radius’ L such that it needs to consider all states that are reachable within L steps (precise definitions will be given in the next section). Lim and Auer (2012) gave an algorithm that with high probability finishes the discovery task in time that is proportional, up to other problem-dependent factors, to the number of states to be discovered.
Unlike this previous work, we consider the case when the transition probabilities can (abruptly) change. This setting is important as agents with a long “lifespan” may expect their environment to change: “moving parts” can suddenly break down, as commonly experienced in robotics or more generally in automation (Kober et al., 2013), or the environment may change abruptly due to the appearance or disappearance of other agents or objects, as for rescue robots in urban search and rescue missions in unknown environments (Niroui et al., 2019). Neither the times at which the changes happen nor the nature of the changes is known. In this new setting, we consider the problem of minimizing the number of exploration steps: a time step is considered an exploration step if at that time step the agent lacks sufficient knowledge to navigate its current environment efficiently. The challenge is of course that the agent may not be aware of when it does not have this sufficient knowledge. For this problem we give a meta-algorithm, MNM, which can utilize any base algorithm designed for the stationary version of the problem and which keeps the number of exploration steps bounded by a quantity that scales with the square of the number of environment changes.

Changing environments have been studied in the context of reinforcement learning (see e.g., Even-dar et al. (2005); Abbasi et al. (2013); Ortner et al. (2019)). However, our problem setting fundamentally differs from these works: external rewards are absent, and as such our performance metric is incomparable.

2 Problem Setting

We consider a discrete-time controlled Markov process (CMP) – a Markov decision process where rewards are absent. We assume a countable, possibly infinite state space and a finite action space. Upon executing an action in a state, the environment transitions into the next state, selected randomly according to the unknown transition probabilities. In order to define the performance measure for our problem, we make use of some preliminary definitions and an assumption from Lim and Auer (2012) (Definitions 1–3 and Assumption 1 below), which were stated for the stationary setting. We assume that the reader is familiar with the standard terminology of Markov decision processes, which we borrow.

The learning agent is expected to solve the autonomous exploration problem, in which the goal is to find a policy for each state reachable from a starting state s0. We fix s0 for the rest of the article and hence omit it from the notation.

Definition 1 (Navigation time).

For any (possibly non-stationary) policy π, let v^π(s) be the expected number of steps before reaching s for the first time when executing policy π starting from s0.

The learner will be given a number L > 0, and we may naively demand that it find all states reachable in at most L steps:

Definition 2 (S_L).

We let S_L denote the set of states whose minimal expected navigation time is at most L, that is, S_L := {s : min_π v^π(s) ≤ L}.

Since the state space might be infinite, a learner could wander off in some direction or get stuck without being able to return to the starting state. To exclude this possibility, we make the following assumption.

Assumption 1.

In every state, there is a designated reset action available that transitions back to the starting state s0 with probability 1.

We define a policy on a set of states S′ to be a policy that takes the reset action in every state outside S′. As it turns out, in general it is too much to ask for learners to discover all the states in S_L. Rather, following Lim and Auer (2012), we require learners to discover only the so-called incrementally discoverable states, S_L^→.

Definition 3 (S_L^→).

Let ≺ be some partial order on the state space. The set S_L^≺ of states reachable in L steps with respect to ≺ is defined inductively as follows:

  • s0 ∈ S_L^≺,

  • if there is a policy π on {s′ ∈ S_L^≺ : s′ ≺ s} with v^π(s) ≤ L, then s ∈ S_L^≺.

The set S_L^→ of states reachable in L steps with respect to some partial order is given by S_L^→ := ∪_≺ S_L^≺, where the union is over all possible partial orders.
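To make Definitions 1 and 2 concrete, here is a toy computation of navigation times and of the resulting set of reachable states. The chain-shaped CMP and all the numbers are our own illustration, not from the paper:

```python
# Illustrative toy CMP (not from the paper): a chain of states 0..N-1.
# Action "right" advances with probability p and stays put otherwise;
# a "reset" action returns to the start state 0.

def navigation_times(N=6, p=0.8):
    """For each target state s, compute the minimal expected number of
    steps to first reach s when starting from state 0 (its navigation
    time). On this chain the optimal policy is "always right", and each
    link takes 1/p expected steps (a geometric waiting time)."""
    return {s: s / p for s in range(N)}

def S_L(times, L):
    """The set of states whose minimal expected navigation time is at most L."""
    return {s for s, t in times.items() if t <= L}

times = navigation_times()
print(times[3])                 # 3 links at 1/0.8 expected steps each: 3.75
print(sorted(S_L(times, L=3)))  # [0, 1, 2]
```

On this simple chain every state is incrementally discoverable, so S_L and S_L^→ coincide; the distinction only matters for CMPs where reaching a state quickly requires routing through other discovered states.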

Back to the non-stationary case, we define the number of changes in the environment as the number of time steps at which the transition probabilities differ from those in effect at the preceding time step. For notational convenience, we assume that the transition probabilities also ‘change’ at t = 1, thereby always counting the first change at t = 1. Therefore,

number of distinct CMP settings encountered = number of changes.

(1)

Next we define the performance measure we propose for the considered problem setting.

Definition 4 (Exploration steps).

The exploration steps are the complement of the set E, where E contains the time steps at which the learner

  • has identified a set of states containing S_L^→ for the CMP with the transition probabilities currently in effect, and

  • has a policy for every state in that set with navigation time at most (1+ε)L, where ε > 0 is a given error tolerance, under those transition probabilities.

The set of exploration steps contains the time steps at which the learner doesn’t have sufficient knowledge about the current CMP structure to navigate efficiently to the states reachable from s0. The learner’s aim is to be able to efficiently navigate the current CMP structure at most of the time steps, or equivalently, to minimize the number of exploration steps.

Introduction to UcbExplore (Lim and Auer, 2012): Before we illustrate our meta-algorithm using UcbExplore as a subroutine, let us take a look at a few relevant details. UcbExplore alternates between two phases: state discovery and policy evaluation. In a state-discovery phase, new candidate states are discovered as potential members of the set of reachable states. In a policy-evaluation phase, the optimistic policy for reaching one of the candidate states is evaluated to verify whether it is acceptable (by an acceptable policy for a state s, we mean any policy π with v^π(s) ≤ (1+ε)L). A policy-evaluation phase lasts for a certain number of episodes. Each episode begins at s0 and ends either when the policy successfully reaches the candidate state or when a fixed maximum number of steps has been executed. If the candidate state is not reached in a suitably high number of episodes, its policy evaluation is said to have failed. A successful policy evaluation means a new reachable state and an acceptable policy have been discovered. A failed policy evaluation leads to selecting another candidate state-optimistic policy pair for evaluation, while a successful policy evaluation leads to a state-discovery phase, which in turn adds more candidate states for the subsequent policy-evaluation phases. We restate the main result of Lim and Auer (2012) below for reference.
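The episode structure of a policy-evaluation phase can be sketched as a simple loop. The horizon, episode count, acceptance threshold, and the toy CMP below are illustrative placeholders, not the actual quantities used by UcbExplore:

```python
import random

def evaluate_policy(step, policy, start, target, horizon, n_episodes, accept_frac):
    """Run repeated episodes of `policy` from `start`. Each episode ends
    when `target` is reached or after `horizon` steps. The evaluation
    succeeds if a sufficient fraction of episodes reach the target."""
    successes = 0
    for _ in range(n_episodes):
        s = start
        for _ in range(horizon):
            s = step(s, policy(s))  # sample the next state from the CMP
            if s == target:
                successes += 1
                break
    return successes >= accept_frac * n_episodes

# Illustrative CMP: "right" advances with probability 0.9, "reset" returns to 0.
def step(s, a):
    if a == "reset":
        return 0
    return s + 1 if random.random() < 0.9 else s

random.seed(0)
ok = evaluate_policy(step, lambda s: "right", start=0, target=3,
                     horizon=10, n_episodes=200, accept_frac=0.5)
print(ok)  # True: this policy reaches the target in nearly every episode
```

A failed evaluation in UcbExplore does not discard the candidate state; it merely sends the algorithm back to pick another candidate-policy pair, which is why evaluation outcomes can later double as change signals in the non-stationary setting.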

Theorem 1.

[Lim and Auer (2012, Theorem 8)] When algorithm UcbExplore is run on a stationary CMP problem (i.e., one whose transition probabilities never change) with inputs including the starting state s0, the radius L, the error tolerance ε, and the confidence parameter δ, then with probability at least 1 − δ,

  • it discovers a set of states S ⊇ S_L^→;

  • for each s ∈ S, it outputs a policy π_s with v^{π_s}(s) ≤ (1+ε)L, and

  • it terminates after a bounded number of exploration steps, polynomial in the problem parameters and linear, up to logarithmic factors, in the number of states to be discovered (see Lim and Auer (2012) for the exact bound).

3 Meta-algorithm for autonomous exploration in non-stationary CMPs

1: Input: A confidence parameter δ, an error threshold ε, the radius L, the starting state s0, and constants c1 and c2.
2: For round k = 1, 2, ...:
3: Building phase:
4: Initialize the set of initiated streams of round k to be empty.
5: Stream handling: Let τ indicate the current quantum of time steps within the building phase of round k. The length of a quantum is determined dynamically (explained below in the run rule) but is bounded.
6: For τ = 1, 2, ...:
(a) Initiation rule: If τ satisfies the initiation condition, initiate a new copy of UcbExplore and associate it with a new stream. This copy of UcbExplore acts only according to the samples taken at the time steps at which its stream is active.
(b) Allocation rules: (i) If only one stream has been initiated so far, activate it. (ii) Otherwise, if all the initiated streams have previously been active for an equal number of quantums, activate the least recently active stream. (iii) Otherwise, activate the stream which has previously been active for the least number of quantums.
(c) Run rule: If the copy of UcbExplore associated with the active stream is in a state-discovery phase, run it for a fixed number of time steps. Otherwise, the copy of UcbExplore associated with the active stream is in a policy-evaluation phase; then run it for one episode (of fixed maximum length) of policy evaluation of UcbExplore.

(d) Check for the end of the building phase: If during quantum τ the copy of UcbExplore associated with the active stream terminates and provides a set of reachable states and acceptable policies for them, record them, terminate all the other initiated streams, and proceed to the checking phase. Otherwise proceed to the next quantum.
7: Checking phase:
8: Compute the per-run step budget and the window size from the exploration-step bound of the subroutine. Let a single check-run consist of the following two parts in the given order: a new copy of UcbExplore running for up to the budgeted number of time steps, and a policy-evaluation phase of UcbExplore for each of the recorded policies. If the first part of any check-run doesn’t terminate within the step budget, terminate it manually and proceed to the second part of the check-run. Execute a window of check-runs. Then:
9: Count the number of times UcbExplore has failed to terminate within the step budget in the first part during the last window of check-runs. If
(2)
then stop the checking phase, increment the round index, and start a new round; otherwise proceed to the next step.
10: For every state in the recorded set, count the number of times policy evaluation fails for that state in the second part of the last window of check-runs. If
(3)
delete that state and its policy from the recorded set and policy collection, respectively. Proceed to the next step.
11: Consider any state which was absent from the recorded set but has appeared in the output of the first part of at least one of the last window of check-runs. For every such state, count the number of times it was present in those outputs. If
(4)
add that state and the last found policy for it to the recorded set and policy collection, respectively. Proceed to the next step.
12: Execute one more check-run. Go back to step 9 of the checking phase.

Figure 1: Meta-algorithm for autonomous exploration in non-stationary CMPs (MNM)

Our meta-algorithm (Meta-algorithm for autonomous exploration in non-stationary CMPs or MNM) can use any algorithm designed for autonomous exploration in a stationary CMP as a subroutine. In Figure 1, for the sake of specificity, we describe the algorithm using UcbExplore (Lim and Auer, 2012) as a subroutine.

The algorithm proceeds in rounds, and each round consists of two phases: a building phase and a checking phase. In a building phase, we build a hypothesis, which consists of a set of states and an acceptable policy for each of them. In a checking phase, we check if the hypothesis we built in this round is still valid. In any building phase, the algorithm initiates several copies of the subroutine at different time steps (see the initiation rule in Figure 1) and switches back and forth between them (see the allocation rules in Figure 1). Once it switches to a copy of the subroutine, that subroutine is said to be active and it remains so until the next switch. To implement this approach, our algorithm proceeds in streams. A stream is a single run of the subroutine acting only according to the previous time steps for which the said stream is active. At any time step, only a single stream is active. Once a stream is active, it stays so for a quantum of time steps, the length of which is determined dynamically (see the stream-handling step in Figure 1). When a hypothesis is formed in the building phase of a round, it is recorded (see the end-of-building-phase check in Figure 1) and the algorithm moves on to the checking phase.

In the checking phase, recent history is examined, by employing a sliding window, to detect various kinds of changes in the hypothesis. When the hypothesis is found to be no longer valid on account of a change, the algorithm terminates the checking phase and proceeds to the next round. In the checking phase, our algorithm employs the subroutine as a black box, using the upper bound on the exploration steps required by the subroutine for a stationary CMP problem. For UcbExplore, this upper bound is given by Theorem 1. We use this bound, together with suitable constants c1 and c2, to compute the checking-phase parameters for each round.
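The change-detection tests in steps 9–11 share one pattern: compare an empirical failure frequency over the sliding window of check-runs against its nominal value plus a Hoeffding-style deviation term. A hedged sketch of that pattern, with a made-up window size, nominal failure rate, and confidence parameter:

```python
import math
from collections import deque

def hoeffding_radius(window, delta):
    """Deviation radius: with probability >= 1 - delta, the empirical mean
    of `window` i.i.d. Bernoulli outcomes is within this radius of its mean."""
    return math.sqrt(math.log(1.0 / delta) / (2.0 * window))

class ChangeDetector:
    """Flag a change when the failure frequency over the last `window`
    check-runs significantly exceeds the nominal failure probability."""
    def __init__(self, window, nominal_failure_prob, delta):
        self.window = window
        self.threshold = nominal_failure_prob + hoeffding_radius(window, delta)
        self.outcomes = deque(maxlen=window)

    def update(self, failed):
        self.outcomes.append(1 if failed else 0)
        if len(self.outcomes) < self.window:
            return False  # not enough history yet
        return sum(self.outcomes) / self.window > self.threshold

det = ChangeDetector(window=50, nominal_failure_prob=0.1, delta=0.05)
# Before a change: failures at the nominal 10% rate -> no alarm.
alarms = [det.update(failed=(i % 10 == 0)) for i in range(50)]
print(any(alarms))   # False: 10% failures stay below the threshold
# After a change: the subroutine fails every time -> the alarm fires.
for _ in range(50):
    fired = det.update(failed=True)
print(fired)         # True
```

The same detector shape serves Eq. (2) (failures of the first part of a check-run), Eq. (3) (policy-evaluation failures per state), and, with “failure” replaced by “appearance of a new state”, Eq. (4).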

At any time step, our algorithm’s knowledge of the current CMP structure is represented by the set of states and policies recorded in the current round. While the current round is still in its building phase, the algorithm is yet to learn the present CMP structure.

MNM can use any algorithm designed for autonomous exploration in a stationary CMP as a subroutine if it is provided with two values:

  • the length of the quantum, i.e., the number of contiguous time steps for which a copy of the subroutine (i.e., a stream) must be active, and

  • a high-probability upper bound on the number of exploration steps required by the subroutine for a stationary CMP problem.

These two values are used in the stream-handling step and in the computation of the checking-phase parameters at the beginning of a checking phase, respectively (see Figure 1). Using another algorithm as a subroutine instead of UcbExplore would only cause these two changes, with the rest of MNM remaining the same.
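The subroutine contract just described can be captured by a minimal interface sketch. Everything here (the class name, the placeholder bound, the constant c1) is our own illustration of the plug-in structure, not code from the paper:

```python
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubroutineSpec:
    """What MNM needs from a stationary-CMP exploration subroutine
    (interface and names are our own, purely illustrative)."""
    quantum_length: int                        # contiguous steps per activation
    exploration_bound: Callable[[float], int]  # high-probability bound, given delta

# Placeholder spec for a UcbExplore-like subroutine; the bound below is a
# made-up function of delta, not the actual bound from Lim and Auer (2012).
ucb_like = SubroutineSpec(
    quantum_length=100,
    exploration_bound=lambda delta: int(1e4 * math.log(1.0 / delta)),
)

# The checking phase would size its per-check-run step budget from this
# bound, scaled by an input constant (c1 here stands in for such a constant).
c1 = 2
step_budget = c1 * ucb_like.exploration_bound(0.05)
print(step_budget > 0)  # True
```

Swapping in a different stationary explorer then amounts to constructing a different `SubroutineSpec`, exactly the two changes the text describes.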

Our main result, stated in Theorem 2, upper-bounds the number of exploration steps required by MNM using UcbExplore as a subroutine. The corresponding result while using other subroutines could simply be obtained by replacing the upper bound of exploration steps required by UcbExplore for a stationary CMP with the analogous bound of the subroutine being used.

Theorem 2.

With probability at least 1 − δ, the total number of exploration steps for MNM using UcbExplore as a subroutine is upper-bounded by a quantity that grows quadratically with the number of changes and otherwise matches, per change, the stationary bound of Theorem 1 up to constant factors,

where the relevant number of states is the number of incrementally discoverable states reachable in (1+ε)L steps in each CMP setting, and the number of changes is as defined in Section 2.

Note that a change in this context is one that affects the set of states reachable in up to (1+ε)L steps from s0 and/or the acceptable policies for reaching them. The reason, as noted by Lim and Auer (2012), is that the learner cannot distinguish between the states reachable in L steps and those reachable in (1+ε)L steps (given a reasonable amount of exploration).

Motivating factors for the construction of our algorithm

  • Before an algorithm forms a hypothesis, i.e., before it determines a set of reachable states and acceptable policies, it might not be possible to detect a change. Consider an algorithm still in the process of building a hypothesis. During this process, the algorithm must proceed and inspect states in some order. Suppose that it has found acceptable policies for some reachable states. When it finds a new reachable state, there are two plausible scenarios: a) this state was not reachable while the algorithm was inspecting other states earlier, i.e., there was a change, or b) this state was reachable while the algorithm was inspecting other states earlier, i.e., there was no change. It is not possible to distinguish between these two scenarios.

  • Since it might not be possible to detect a change during the hypothesis-building phase and a change can occur at any time, the algorithm needs to start several processes during the hypothesis-building phase. Each process aims to form a hypothesis for a particular CMP setting, and to be able to do that, it needs to act only on the time steps for which that CMP setting is in effect. On one hand, since a change can occur at any time step, the algorithm needs to start these processes at several time steps along the way. On the other hand, if too many processes are started, each process will not get enough time to form its hypothesis, so sufficient time should be allocated to each process. Both of these diverging requirements can be balanced if both the number of processes and the time allocated to each process grow asymptotically as the square root of time, as done in MNM.

  • Using a sliding window in the building phase is not possible: Using a sliding window in the checking phase is possible as each check-run is verifying the same hypothesis (the one found in the preceding building phase) and hence findings from successive check-runs can be shared. In the building phase however, each stream with its own copy of the subroutine might be attempting to build different hypotheses and hence findings from two different streams cannot be shared.

4 Analysis and Proof of Theorem 2

First, we bound the number of exploration steps in a single building phase. Then we prove that the number of rounds is upper-bounded by the number of changes. Combining these two, we prove an upper bound on the number of exploration steps for all the building phases. Next, we prove an upper bound on the number of exploration steps in a checking phase caused by a single change. Summing this over all the changes in all the rounds gives us an upper bound on the number of exploration steps in all the checking phases. Finally, we add the respective upper bounds for all the building phases and all the checking phases to arrive at the bound given by Theorem 2.

4.1 Bounding the exploration steps in a single building phase

First, we state a couple of preliminary lemmas about stream handling to be used later.

Lemma 1.

At the end of any quantum in a building phase, the number of initiated streams is of the order of the square root of the number of quantums elapsed so far in that phase.

The proof for Lemma 1 is given in Appendix I. Here, we provide a brief overview: we use the fact that the number of initiated streams equals the highest stream index initiated so far, together with the stream initiation rule (see Figure 1), to arrive at this claim.

Lemma 2.

At the end of quantum n² for some integer n,

  1. n streams have been initiated, and

  2. each initiated stream has been active for exactly n quantums.

The proof for Lemma 2 is given in Appendix II. We only provide a proof sketch here. Claim 1 is a direct result of Lemma 1. Claim 2 can be proved by induction on the quantum index, considering the initiation rule and allocation rules (ii) and (iii) (see Figure 1).
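Lemmas 1 and 2 are pure bookkeeping about stream initiation and allocation, and the square-root balance they describe can be checked by simulation. The initiation condition below (start a new stream whenever the quantum index is one past a perfect square) is our own plausible instantiation of the rule, not necessarily the paper's; under it, after n² quantums exactly n streams have each been active for exactly n quantums:

```python
import math

def simulate(total_quantums):
    """Round-robin-with-growth scheduler: returns, per stream, the number
    of quantums it was active. Allocation picks the stream with the fewest
    quantums so far, breaking ties by least recent activity."""
    count, last_active = {}, {}
    for tau in range(1, total_quantums + 1):
        # Initiation rule (illustrative): start a new stream at tau = j^2 + 1.
        j = math.isqrt(tau - 1)
        if j * j == tau - 1:
            sid = len(count) + 1
            count[sid], last_active[sid] = 0, 0
        # Allocation rules (ii) and (iii), combined into one key.
        sid = min(count, key=lambda s: (count[s], last_active[s]))
        count[sid] += 1
        last_active[sid] = tau
    return count

# After n^2 quantums: n streams, each active exactly n quantums.
for n in (2, 3, 5):
    counts = simulate(n * n)
    assert len(counts) == n and all(c == n for c in counts.values())
print(simulate(9))  # {1: 3, 2: 3, 3: 3}
```

This matches the motivation given earlier: both the number of processes and the time allocated to each grow as the square root of elapsed time.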

Lemma 3.

In a round k, with probability at least 1 − δ,

  1. the length of the building phase is bounded in terms of the exploration-step bound of Theorem 1 for the relevant CMP setting,

  2. the building phase discovers a set of states containing the incrementally discoverable states of some CMP setting M among the CMP settings underlying the building phase of round k,

  3. and for each discovered state, it records a policy with navigation time at most (1+ε)L for the CMP setting M,

where the relevant number of states is the number of incrementally discoverable states reachable in (1+ε)L steps in the CMP setting M.

Proof.

Consider the CMP setting at the start of the building phase in round k. Assume that UcbExplore requires at most q quantums as exploration steps for this setting (without any change) with high probability; Theorem 1 shows that such a q exists for any CMP setting. There are two possible cases:

Case 1: The problem doesn’t change for the duration of the first q² quantums.
Our meta-algorithm initiates stream 1 at the first quantum, and this stream will have been active for q quantums at the end of quantum q² (using Lemma 2). Since the problem doesn’t change for this entire duration, the copy of UcbExplore for stream 1 has samples only of this setting. Thus, stream 1 terminates by the end of q² quantums of our meta-algorithm with high probability, and the building phase of round k ends. The three claims of the lemma follow from the respective claims of Theorem 1. (The referred theorem only states its upper bound in big-O notation; however, the constants c1 and c2 can be computed from its proof in Lim and Auer (2012, Section 4.6).)

Case 2: The problem changes at some point before the end of the first q² quantums.
Let M1, M2, ... be the successive CMP settings, and let q_i be the number of quantums needed by UcbExplore for setting M_i. Let M_j be the first setting that persists long enough for the stream initiated at its start to have been active for q_j quantums (using Lemma 2). That stream will therefore terminate and output the set of reachable states and acceptable policies for M_j with high probability, and the building phase will terminate. The three claims of the lemma follow from the respective claims of Theorem 1 applied to M_j. ∎

4.2 Bounding the number of rounds

Lemma 4.

With probability at least 1 − δ, the total number of rounds is at most the number of changes.

Proof.

There is always at least one change in the building phase of the first round, as the first change is counted at t = 1 by default (see Section 2). For the remaining rounds, we consider the following two mutually exclusive cases.

Case 1: There exists no round which has no change in its building phase.
In this case, every round contains at least one change, and the total number of rounds is immediately upper-bounded by the number of changes.

Case 2: There exists at least one round which has no change in its building phase.
Let k be a round such that there is no change during its building phase. For all such rounds, we prove that, with high probability, the checking phase of round k contains at least one change.

Recall from Lemma 3 that, since round k contains no change in its building phase, there is a single CMP setting during that building phase, and the building phase discovers its reachable states with high probability. Theorem 1 shows that, for this CMP setting, if UcbExplore is run for the budgeted number of steps, then the failure probability (i.e., the probability with which UcbExplore doesn’t terminate within the budget) is suitably small.

The only condition that can trigger the next round is given by Eq. (2). Therefore, when round k ends, the empirical failure count, i.e., the number of times UcbExplore has failed to stop and return a set of reachable states within the step budget during the first part of the last window of check-runs, exceeds the threshold of Eq. (2). If the CMP setting had indeed remained unchanged during the last window of check-runs, then by Hoeffding’s inequality this event has small probability. Therefore, when the round stops, there has been a change in its checking phase with high probability. With a union bound over all such rounds, we can claim that for all rounds which do not contain a change in the building phase, there is at least one change in each of their respective checking phases with high probability.

Considering both the cases, we get that the total number of rounds is upper-bounded by the total number of changes with the stated probability. ∎

4.3 Bounding the exploration steps in all the building phases

Lemma 5.

With probability at least 1 − δ, the total number of exploration steps in all the building phases is at most the number of changes times the per-round building-phase bound of Lemma 3,

where the relevant number of states in each round is the number of incrementally discoverable states reachable in (1+ε)L steps in the corresponding CMP setting.

Proof.

We count all the steps in each building phase as exploration steps. Lemma 3 provides an upper bound on the number of exploration steps in the building phase of a single round, with the error probability suitably limited. Therefore, the total number of exploration steps in all the building phases is at most the number of rounds times this per-round bound,

with the total error probability suitably limited. In the last step, we use that the number of rounds is at most the number of changes with high probability (Lemma 4) and that the number of different CMP settings in all the rounds equals the number of changes (Eq. (1)). ∎

4.4 Analyzing the checking phase

We first bound the number of exploration steps in a checking phase caused by a single change.

Lemma 6.

With high probability, the total number of exploration steps in the checking phase of a round due to a single change is at most of the order of the window size times the maximal check-run length,

where the check-run length is determined by the number of incrementally discoverable states reachable in (1+ε)L steps in the CMP setting hypothesized in that round.

Proof.

Recall from Lemma 3 that the building phase in round k finds the reachable states and acceptable policies for some CMP setting. Below we use that the number of time steps in a single check-run of round k is upper-bounded via the exploration-step bound of Theorem 1. While this CMP setting persists during the checking phase, the algorithm does not incur any exploration steps. For a change to a new CMP setting, the following mutually exclusive and exhaustive cases are possible:
Case 1: The new setting doesn’t last for a full window of check-runs.
Then all the time steps for which the new setting is active are counted as exploration steps, and they are upper-bounded by the window size times the maximal check-run length.

Case 2: The new setting lasts for at least a full window of check-runs.
There are three possible subcases.

  (a) The step budget is insufficient for the new CMP setting.
    By insufficient we mean that the number of exploration steps UcbExplore requires for the new setting exceeds the step budget computed from the hypothesized setting.

    Eq. (2) verifies whether a change to such a setting has occurred. Our algorithm keeps a count of the empirical failures in the last window of check-runs, where a failure means that the first part of a check-run has failed to terminate within the step budget (and thus had to be manually terminated). From Theorem 1, we know that if no change has occurred, then the true failure probability is small. Since the step budget is insufficient for the new setting, the first parts of check-runs now fail with high probability, so by Hoeffding’s inequality the empirical failure frequency exceeds the threshold of Eq. (2) within one window of check-runs (up to a small error probability).

    Therefore, with high probability, we detect a change to such a setting, and the number of exploration steps added is at most of the order of the window size times the check-run length.

  (b) The step budget is sufficient for the new CMP setting, and a previously reachable state becomes unreachable in it, or the previously acceptable policy for a reachable state is no longer acceptable.
    Eq. (3) checks for such scenarios. As it keeps verifying whether the policy evaluation of each recorded policy succeeds over the last window of check-runs, it checks both (i) whether a previously reachable state is still reachable and (ii) whether the previously acceptable policy is still acceptable. Proceeding in a similar manner to the previous subcase, we can show that, with high probability, the number of exploration steps added is at most of the order of the window size times the check-run length.

  (c) The step budget is sufficient for the new CMP setting, and a previously unreachable state becomes reachable in it.
    Assume that a previously unreachable state s is reachable in the new setting. Either s already appears in the recorded set or it does not. In the former case, policy evaluation (i.e., the second part of a check-run) continues to check whether the recorded policy for s is still acceptable. If that policy is found to be acceptable no more, then the check given by Eq. (3) will be triggered, the change will be detected, and the number of exploration steps added is as in the previous subcase. If the policy is still acceptable, this leads to no additional exploration steps (see Definition 4). Eq. (4) checks for the scenarios where s is not in the recorded set. Theorem 1 guarantees that if a state is reachable, the probability that it fails to appear in the output of UcbExplore is small. For every state not in the recorded set that has appeared in the output of the first part of one of the last window of check-runs, we can count its empirical appearances. Then, by Hoeffding’s inequality, the appearance frequency of such a newly reachable state exceeds the threshold of Eq. (4) within one window of check-runs (up to a small error probability).

    Therefore, with high probability, we detect such a change, and the number of exploration steps added is at most of the order of the window size times the check-run length.

Considering all the cases, the number of exploration steps added per change is at most of the order of the window size times the check-run length, with high probability. ∎

Now we can bound the number of exploration steps for all the checking phases.

Lemma 7.

With probability at least 1 − δ, the total number of exploration steps in all the checking phases is upper-bounded by the total number of changes times the per-change bound of Lemma 6,

where the relevant number of states in each round is the number of incrementally discoverable states reachable in (1+ε)L steps in the corresponding CMP setting.

Proof.

Lemma 6 provides an upper bound on the number of exploration steps in the checking phase of a round due to a single change, with the error probability suitably limited. By the construction of our algorithm, only the changes occurring in round k can lead to exploration steps in the checking phase of round k. Let n_k be the number of changes in round k. Then, the total number of exploration steps in all the checking phases is at most the sum over rounds of n_k times the per-change bound of Lemma 6,

with the total error probability suitably limited. Here we use that the number of rounds is at most the number of changes with high probability (Lemma 4) and that the n_k sum to the total number of changes. ∎

4.5 Proof of Theorem 2

Proof.

The total number of exploration steps in all the rounds is simply the sum of the exploration steps in all the building phases and all the checking phases, given by Lemma 5 and Lemma 7 respectively. Therefore, the total number of exploration steps for all the rounds is at most the sum of these two bounds,

which holds with the stated probability by a union bound. ∎

5 Concluding remarks

We considered the problem of learning to explore autonomously in a non-stationary environment and proposed a pertinent performance measure. We gave a natural algorithm for the considered problem and proved an upper bound on the performance measure that scales with the square of the number of changes.

Proving a lower bound for this problem setting remains future work. The solution strategy of first having a building phase (with multiple processes trying to build a hypothesis) and then a checking phase (where it is verified whether the last built hypothesis is still true) could be used for other non-stationary learning problems. In particular, this strategy could be useful for learning problems where each hypothesis-building process needs to act independently and cannot share findings.

References

  • Abbasi et al. (2013) Yasin Abbasi, Peter L Bartlett, Varun Kanade, Yevgeny Seldin, and Csaba Szepesvari. Online learning in Markov decision processes with adversarially chosen transition probability distributions. In Advances in Neural Information Processing Systems 26, pages 2508–2516, 2013.
  • Achiam and Sastry (2017) Joshua Achiam and Shankar Sastry. Surprise-based intrinsic motivation for deep reinforcement learning. CoRR, abs/1703.01732, 2017.
  • Azar et al. (2019) Mohammad Gheshlaghi Azar, Bilal Piot, Bernardo A. Pires, Jean-Bastien Grill, Florent Altché, and Rémi Munos. World discovery models. CoRR, abs/1902.07685, 2019.
  • Baranes and Oudeyer (2009) A. Baranes and P.-Y. Oudeyer. R-IAC: Robust intrinsically motivated exploration and active learning. IEEE Transactions on Autonomous Mental Development, 1:155–169, 2009.
  • Burda et al. (2019) Yuri Burda, Harrison Edwards, Deepak Pathak, Amos J. Storkey, Trevor Darrell, and Alexei A. Efros. Large-scale study of curiosity-driven learning. In ICLR, 2019. URL https://openreview.net/forum?id=rJNwDjAqYX.
  • Even-dar et al. (2005) Eyal Even-dar, Sham M Kakade, and Yishay Mansour. Experts in a Markov decision process. In Advances in Neural Information Processing Systems, pages 401–408, 2005.
  • Gottlieb et al. (2013) Jacqueline Gottlieb, Pierre-Yves Oudeyer, Manuel Lopes, and Adrien Baranes. Information-seeking, curiosity, and attention: computational and neural mechanisms. Trends in cognitive sciences, 17(11):585–593, 2013.
  • Haber et al. (2018) Nick Haber, Damian Mrowca, Stephanie Wang, Li F Fei-Fei, and Daniel L Yamins. Learning to play with intrinsically-motivated, self-aware agents. In Advances in Neural Information Processing Systems, pages 8388–8399, 2018.
  • Hazan et al. (2019) Elad Hazan, Sham Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, ICML, volume 97, pages 2681–2691, 2019.
  • Houthooft et al. (2016) Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Variational information maximizing exploration. In NIPS 2016 Deep Learning Symposium, 2016.
  • Kober et al. (2013) Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.
  • Lim and Auer (2012) Shiau Hong Lim and Peter Auer. Autonomous exploration for navigating in MDPs. In Proceedings of the 25th Annual Conference on Learning Theory, volume 23 of Proceedings of Machine Learning Research, pages 40.1–40.24, 2012.
  • Lopes et al. (2012) Manuel Lopes, Tobias Lang, Marc Toussaint, and Pierre-Yves Oudeyer. Exploration in model-based reinforcement learning by empirically estimating learning progress. In Advances in Neural Information Processing Systems, pages 206–214, 2012.
  • Niroui et al. (2019) F. Niroui, K. Zhang, Z. Kashino, and G. Nejat. Deep reinforcement learning robot for search and rescue applications: Exploration in unknown cluttered environments. IEEE Robotics and Automation Letters, 4(2):610–617, 2019.
  • Ortner et al. (2019) Ronald Ortner, Pratik Gajane, and Peter Auer. Variational regret bounds for reinforcement learning. In Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence, 2019.
  • Ostrovski et al. (2017) Georg Ostrovski, Marc G Bellemare, Aäron van den Oord, and Rémi Munos. Count-based exploration with neural density models. In Proceedings of the 34th International Conference on Machine Learning, pages 2721–2730, 2017.
  • Oudeyer et al. (2007) P-Y. Oudeyer, F. Kaplan, and V.V. Hafner. Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation, 11:265–286, 2007.
  • Oudeyer and Kaplan (2007) Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? a typology of computational approaches. Frontiers in neurorobotics, 1, 2007.
  • Pathak et al. (2017) Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 16–17, 2017.
  • Schmidhuber (2010) J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2:230–247, 2010.
  • Schmidhuber (1991) Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proceedings of the first international conference on simulation of adaptive behavior on From animals to animats, pages 222–227. MIT Press, 1991.
  • Singh et al. (2004) Satinder P. Singh, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically motivated reinforcement learning. In NIPS, 2004.
  • Singh et al. (2010) Satinder P. Singh, Richard L. Lewis, Andrew G. Barto, and Jonathan Sorg. Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE T. Autonomous Mental Development, 2:70–82, 2010.
  • Stadie et al. (2015) Bradly C. Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. CoRR, abs/1507.00814, 2015.


Appendix I Proof of Lemma 1

Proof.

The number of initiated streams is equal to the highest stream number initiated so far; let that be $n$, and let $q$ denote the current quantum. Since stream $n$ is initiated on or before quantum $q$ (see the initiation rule in Figure 1), $(n-1)^2 < q$, which is equivalent to

$n < \sqrt{q} + 1.$ (5)

Since stream $n+1$ has not been initiated yet, $n^2 \geq q$, which translates to

$n \geq \sqrt{q}.$ (6)

Recall that both $n$ and $q$ are integers $\geq 1$. If $q$ is a perfect square, the only integer satisfying both Eq. (5) and Eq. (6) is $n = \sqrt{q} = \lceil \sqrt{q} \rceil$. If $q$ is not a perfect square, then Eq. (6) reduces to $n > \sqrt{q}$. And the only integer satisfying $\sqrt{q} < n < \sqrt{q} + 1$ is $n = \lceil \sqrt{q} \rceil$. ∎
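This counting argument is easy to check numerically. The sketch below is a hypothetical verification, written under the assumption that the initiation rule means stream $n$ is initiated at the first quantum $q$ with $(n-1)^2 < q$ (the rule itself lives in Figure 1, outside this excerpt); it confirms that the number of initiated streams at the end of quantum $q$ is $\lceil \sqrt{q} \rceil$:

```python
import math

def initiated_streams(q):
    """Number of initiated streams at the end of quantum q, assuming
    stream n is initiated at the first quantum q with (n - 1)^2 < q."""
    n = 0
    while n * n < q:  # stream n + 1 is initiated once q exceeds n^2
        n += 1
    return n          # smallest n with n^2 >= q

# Lemma 1 predicts ceil(sqrt(q)) initiated streams at every quantum q.
for q in range(1, 10_000):
    assert initiated_streams(q) == math.isqrt(q - 1) + 1  # ceil(sqrt(q))
```

The identity `math.isqrt(q - 1) + 1` equals $\lceil \sqrt{q} \rceil$ for integer $q \geq 1$, covering both the perfect-square and non-perfect-square branches of the proof.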

Appendix II Proof of Lemma 2

Proof.

Claim 1 is a direct result of Lemma 1. We prove claim 2 by induction on $k$. Base case: $k = 1$. At the end of quantum $1$, only stream $1$ has been initiated and it has been active for one quantum.
Inductive case: Assume that the claim is true for $k$, i.e., at the end of quantum $k^2$, exactly $k$ streams have been initiated and each of them has been active for $k$ quantums. At the next quantum, i.e., $k^2 + 1$, stream $k+1$ is initiated by the initiation rule and it is active for the next $k$ quantums due to allocation rule (ii). At this point, we are at the end of quantum $k^2 + k$ and all the initiated streams have each been active for $k$ quantums. Next, by virtue of allocation rule (iii), each of the $k+1$ streams is allocated one quantum each till we are at the end of quantum $k^2 + 2k + 1 = (k+1)^2$, at which point each of them has been active for $k+1$ quantums. ∎
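The induction can likewise be replayed in code. The following is a sketch under one reading of the rules (assumed here, since Figure 1 is outside this excerpt): allocation rule (ii) lets a newly initiated stream run until it has been active as often as the older streams, and rule (iii) then allocates one quantum to each initiated stream in turn. It checks the invariant of claim 2 at every square quantum:

```python
def simulate(max_k):
    """Replay the stream schedule from the proof of claim 2 and check
    that at the end of quantum k^2 exactly k streams have been
    initiated, each active for k quantums."""
    active = [1]   # active[s] = quantums stream s + 1 has been active
    quantum = 1    # base case: quantum 1 goes to the only stream
    for k in range(2, max_k + 1):
        # Rule (ii): stream k is initiated at quantum (k - 1)^2 + 1 and
        # runs for k - 1 quantums, catching up with the older streams.
        active.append(k - 1)
        quantum += k - 1
        # Rule (iii): each of the k streams then gets one quantum.
        active = [a + 1 for a in active]
        quantum += k
        assert quantum == k * k   # we are at the end of quantum k^2
        assert active == [k] * k  # k streams, each active k quantums
    return quantum

simulate(100)  # runs without tripping an assertion
```

Note the bookkeeping: the $2k - 1$ quantums spent between squares match $k^2 - (k-1)^2$, which is why the invariant lands exactly on the square quanta.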