Coordinated Exploration in Concurrent Reinforcement Learning

02/05/2018 ∙ by Maria Dimakopoulou, et al. ∙ Stanford University

We consider a team of reinforcement learning agents that concurrently learn to operate in a common environment. We identify three properties - adaptivity, commitment, and diversity - which are necessary for efficient coordinated exploration and demonstrate that straightforward extensions to single-agent optimistic and posterior sampling approaches fail to satisfy them. As an alternative, we propose seed sampling, which extends posterior sampling in a manner that meets these requirements. Simulation results investigate how per-agent regret decreases as the number of agents grows, establishing substantial advantages of seed sampling over alternative exploration schemes.


1 Introduction

The field of reinforcement learning treats the design of agents that operate in uncertain environments and learn over time to make increasingly effective decisions. In such settings, an agent must balance between accumulating near-term rewards and probing to gather data from which it can learn to improve longer-term performance. A substantial literature, starting with (Kearns and Singh, 2002), has developed reinforcement learning algorithms that address this trade-off in a provably efficient manner.

Until recently, most provably efficient exploration algorithms (e.g., (Jaksch et al., 2010)) have been based on upper-confidence-bound approaches. Over the past few years, new approaches that build on and extend PSRL (posterior sampling for reinforcement learning) (Strens, 2000) have proved advantageous in terms of statistical efficiency (Osband et al., 2013; Osband and Van Roy, 2017a, b, 2014a, 2014b).

PSRL operates in a simple and intuitive manner. An agent learns over episodes of interaction with an uncertain environment, modeled as a Markov decision process (MDP). The agent is uncertain about the transition probabilities and rewards of the MDP, and refines estimates as data is gathered. At the start of each episode, the agent samples an MDP from its current posterior distribution. This sample can be thought of as a random statistically plausible model of the environment given the agent’s initial beliefs and data gathered up to that time. The agent then makes decisions over the episode as though the environment is accurately modeled by the sampled MDP.
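To make the procedure concrete, the following toy sketch runs PSRL in the degenerate case of a one-state MDP with Bernoulli rewards and a Beta prior; the setup and names are illustrative, not taken from the paper, but the pattern of sample, optimize, act, and update is the same.

```python
import numpy as np

def psrl_bernoulli(true_means, n_episodes=500, seed=0):
    """Toy PSRL on a one-state MDP with Bernoulli rewards (illustrative only).
    Each 'episode': sample a model from the Beta posterior, act optimally for
    the sample, then update the posterior with the observed outcome."""
    rng = np.random.default_rng(seed)
    n_actions = len(true_means)
    alpha = np.ones(n_actions)  # Beta posterior: 1 + number of observed successes
    beta = np.ones(n_actions)   # Beta posterior: 1 + number of observed failures
    for _ in range(n_episodes):
        sampled_means = rng.beta(alpha, beta)       # statistically plausible model
        action = int(np.argmax(sampled_means))      # optimal policy for the sampled model
        reward = rng.random() < true_means[action]  # interact with the true environment
        alpha[action] += reward
        beta[action] += 1 - reward
    return alpha / (alpha + beta)                   # posterior mean estimates

print(psrl_bernoulli([0.3, 0.7]))  # the estimate for the better action sharpens
```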

In concurrent reinforcement learning (Silver et al., 2013; Pazis and Parr, 2013; Guo and Brunskill, 2015; Pazis and Parr, 2016), multiple agents interact with the same unknown environment, share data with one another, and learn in parallel. One might consider two straightforward extensions of PSRL to the multi-agent setting. In one, at the start of each episode, each agent samples an independent MDP from the current posterior, which is conditioned on all data accumulated by all agents. Then, over the course of the episode, each agent follows the decision policy that optimizes its sampled MDP. The problem with this approach is that each agent does not benefit over the duration of the episode from the potentially vast quantities of data gathered by his peers. An alternative – which we will refer to as Thompson resampling – would be to have each agent independently sample a new MDP at the start of each time period within the episode, as done in (Kim, 2017). The new sample would be drawn from a posterior distribution additionally conditioned on data gathered by all agents up to that time. However, as discussed in (Russo et al., 2017), this naive extension is disruptive to the agents’ ability to explore the environment thoroughly. In particular, an agent may have to apply a coordinated sequence of actions over multiple time periods in order to adequately probe the environment. When MDPs are resampled independently over time periods, agents are taken off course.

The two naive approaches we have discussed highlight potentially conflicting needs to adapt to new information and to maintain the intent with which an agent started exploring the environment. Efficient coordinated exploration calls for striking the right balance. In this paper, we present a variation of PSRL – seed sampling – that accomplishes this.

To focus on the issue of coordinated exploration, we consider a single-episode reinforcement learning problem in which multiple agents operate, making decisions and progressing asynchronously. In this context, we study the rate at which per-agent regret vanishes as the number of agents increases. Through this lens, we demonstrate that seed sampling coordinates exploration in an efficient manner and can dramatically outperform other approaches to concurrent reinforcement learning. In particular, the rate at which seed sampling’s regret decays appears to be robust across problems, while for each alternative, there are problems where regret decays at a far slower rate.

Guo and Brunskill (2015), Pazis and Parr (2013), and Pazis and Parr (2016) have proposed and studied UCB exploration schemes for concurrent reinforcement learning. Advantages of PSRL over UCB in single-agent contexts by themselves motivate extension to concurrent reinforcement learning. A possibly more important motivating factor, however, is that, as our results demonstrate, UCB approaches sometimes do not coordinate exploration in an effective manner. The issue is that UCB approaches are deterministic, and as such, they do not diversify agent behaviors to effectively divide and conquer when there are multiple facets of the environment to explore.

A broad range of applications calls for concurrent reinforcement learning. Many examples can be found in web services, where each user can be served by an agent that shares data with and learns from the experiences of other agents. Through coordinated exploration, the agents can efficiently learn to better serve the population of users. The control of autonomous vehicles presents another important context. Here, each agent manages a single vehicle, and again, the agents learn from each other as data is gathered. The goal could be to optimize a combination of metrics, such as fuel consumption, safety, and satisfaction of transportation objectives. Exploratory actions play an important role, and structured diversity of experience may greatly accelerate learning. The seed sampling algorithms we propose in this paper aim to structure such exploration in a systematic and robust manner.

2 Problem Formulation

Consider a time-homogeneous, single-episode MDP, which is identified by $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{P}, \rho, H)$, where $\mathcal{S}$ is the finite state space, $\mathcal{A}$ is the finite action space, $\mathcal{R}$ is the reward model, $\mathcal{P}$ is the transition model, $\rho$ is the initial state distribution and $H$ is the horizon.

Consider $K$ agents, who explore and learn to operate in parallel in this common environment. Each $k$th agent begins at state $s_{k,0} \sim \rho$ and takes an action at each of the arrival times of an independent Poisson process with rate $\kappa$. At time $t_{k,m}$, the $k$th agent takes its $m$th action $a_{k,m}$, transitions from state $s_{k,m-1}$ to state $s_{k,m}$ and observes reward $r_{k,m}$. The agents are uncertain about the transition structure $\mathcal{P}$ and/or the reward structure $\mathcal{R}$, over which they share common priors. There is clear value in sharing data across agents, since there is commonality across what agents aim to learn. Agents share information in real time and update their posterior, so that when selecting an action at time $t_{k,m}$, the $k$th agent can base his decision on observations made by all agents prior to that time.

Denote by $\mathcal{H}_t$ the history of observations made by all agents up to time $t$. We define all random variables with respect to a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$. As a convention, variables indexed by $t$ are $\mathcal{F}_t$-measurable and, therefore, variables indexed by $t_{k,m}$ are $\mathcal{F}_{t_{k,m}}$-measurable.

The total reward accrued by the agents is $\sum_{k=1}^{K} \sum_{m} r_{k,m}$ and the expected mean regret per agent is defined by
$$\mathrm{Regret}(K) = \frac{1}{K}\,\mathbb{E}\!\left[\sum_{k=1}^{K}\left(R^* - \sum_{m} r_{k,m}\right)\right],$$
where $R^*$ is the optimal reward.

We now consider some examples that illustrate this problem formulation.

Example 1 (Maximum Reward Path).

Consider an undirected graph $G = (V, E)$ with vertices $V$ and edges $E$. The probability of any two vertices being connected by an edge is $p$. Let $\theta = (\theta_e, e \in E)$ be the vector of edge weights. We treat $\theta$ as a vector with an understanding that edges are sorted according to a fixed but arbitrary order. The state space is the set of vertices $V$ and the action space from each vertex $v$ is the set of edges incident to $v$. When action $e = (v, u)$ is taken from state $v$, the agent transitions deterministically to state $u$ and observes reward $r$, which is a noisy observation of the weight of edge $e$, such that $\mathbb{E}[r] = \theta_e$. The agents are uncertain about the edge weights and share a common prior over $\theta$. Denote by $e_{k,m}$ the $m$th edge of the $k$th agent’s path, traversed at time $t_{k,m}$. For each $m = 1, \dots, H$, the agent observes a reward $r_{k,m}$ distributed according to the likelihood associated with $\theta_{e_{k,m}}$. The agents start from the same vertex $v_0$. The objective is, starting from vertex $v_0$, to traverse the path that maximizes the expected accrued reward $\mathbb{E}\left[\sum_{m=1}^{H} r_{k,m}\right]$, i.e., to find the maximum reward path from vertex $v_0$ with exactly $H$ edges.

Example 2 (Bipolar Chain).

Consider the directed graph of Figure 1. The chain has an even number of vertices, $N$, labeled $0, 1, \dots, N-1$. The endpoints of the chain are absorbing. The set of edges connects each pair of neighboring vertices in both directions, except that no edges leave the two absorbing endpoints. The leftmost edge $e_L = (1, 0)$ has weight $\theta_L$ and the rightmost edge $e_R = (N-2, N-1)$ has weight $\theta_R$, such that $\theta_L = -\theta_R$ and $|\theta_L| = |\theta_R| = \theta$ for a known magnitude $\theta > 0$. All other edges have weight $-1$. The agents do not know whether $\theta_L = \theta$ or $\theta_R = \theta$, and they share a common prior that assigns probability $0.5$ to either scenario. Each one of the $K$ agents starts from vertex $v_s = N/2$. Denote by $e_{k,m}$ the edge traversed at the $m$th step of the $k$th agent’s path and by $\theta_{e_{k,m}}$ the respective weight. Further, denote by $t_{k,M_k}$ the time at which the $k$th agent reaches either endpoint, with the $M_k$th traversal being the last one in the agent’s path. The $k$th agent’s objective is to maximize $\sum_{m=1}^{M_k} \theta_{e_{k,m}}$. The optimal reward is greater by one when $\theta_R = \theta$ than when $\theta_L = \theta$, because the leftmost endpoint is one vertex further from the start than the rightmost endpoint and requires the traversal of one more penalizing edge with weight $-1$.

Figure 1: Graph of “Bipolar Chain” example
Example 3 (Parallel Chains).

Consider the directed graph of Figure 2. Starting from the source vertex $v_0$, each of the $K$ agents chooses one of the $C$ chains. Once a chain is chosen, the agent cannot switch to another chain. All the edges of each chain have zero weight, apart from the edge incoming to the last vertex of chain $c$, which has weight $\theta_c$. Let $\theta = (\theta_1, \dots, \theta_C)$ be the vector of these edge weights for the $C$ chains. The agents are uncertain about $\theta$, over which they share a common prior. Denote by $c_k$ the chain chosen by the $k$th agent. When traversing the last edge at time $t_{k,H}$, the agent observes reward $r_{k,H}$ distributed according to the likelihood associated with $\theta_{c_k}$. For all other transitions at times $t_{k,m}$, $m < H$, the $k$th agent observes reward $r_{k,m} = 0$. The objective is to choose the chain with the maximum reward.

Figure 2: Graph of “Parallel Chains” example

3 Algorithms

Three properties are necessary for efficient coordinated exploration in concurrent reinforcement learning:

Property 1 (Adaptivity).

Adapt as data becomes available to make effective use of new information.

Property 2 (Commitment).

Maintain the intent to carry out probing action sequences that span multiple periods.

Property 3 (Diversity).

Divide-and-conquer learning opportunities among agents.

As we discuss in Section 3.1, straightforward extensions of provably efficient single-agent reinforcement learning algorithms fail to meet these requirements. In Section 3.2, we introduce the concept of seed sampling, which leads to algorithms that simultaneously satisfy these three properties.

All algorithms we consider share some common structure, which we will now describe. The concurrent agents share a prior distribution over the MDP $\mathcal{M}$. Denote by $\mathbb{P}(\mathcal{M} \in \cdot \mid \mathcal{H}_t)$ the posterior distribution, given the history of observations $\mathcal{H}_t$ available up to time $t$. At each time $t_{k,m}$, the $k$th agent generates an MDP $\mathcal{M}_{k,m}$, computes the optimal policy $\pi_{k,m}$ for $\mathcal{M}_{k,m}$, takes a single action $a_{k,m}$, transitions to state $s_{k,m}$ and observes reward $r_{k,m}$. The observation is used to update the shared posterior distribution of $\mathcal{M}$. Therefore, at any later time $t_{j,n} > t_{k,m}$, the $j$th agent can use the knowledge gained from this observation in order to take his $n$th action. The key difference between the studied algorithms is how each $k$th agent forms his MDP $\mathcal{M}_{k,m}$ at time $t_{k,m}$.
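As a rough sketch of this shared structure (function and attribute names here are hypothetical, not the authors' implementation), one asynchronous step of a single agent could be organized as follows; the callable `form_mdp` is the only algorithm-specific ingredient.

```python
def concurrent_step(agent_id, state, shared_posterior, form_mdp, solve_mdp, env, horizon):
    """One asynchronous step of a single agent, common to all algorithms of Section 3.
    form_mdp(agent_id, shared_posterior) is the algorithm-specific choice:
    Thompson resampling draws a fresh posterior sample, concurrent UCRL builds an
    optimistic model, and seed sampling maps the agent's fixed seed through the
    current posterior. Everything else is shared boilerplate."""
    mdp = form_mdp(agent_id, shared_posterior)
    policy = solve_mdp(mdp, horizon)
    action = policy(state)
    next_state, reward = env.step(agent_id, state, action)
    shared_posterior.update(state, action, reward, next_state)  # visible to all agents in real time
    return next_state
```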

3.1 Baseline Algorithms

First, we discuss straightforward adaptations of provably efficient single-agent reinforcement learning algorithms to the concurrent reinforcement learning setting. However, neither of these baselines achieves coordinated exploration in concurrent reinforcement learning, either because the agents, when adapting to new information, do not maintain the level of intent required to ensure thorough exploration, or because the agents do not diversify their exploratory effort in a manner that mutually benefits their common learning.

3.1.1 Thompson Resampling

At time $t_{k,m}$, the $k$th agent samples an MDP $\mathcal{M}_{k,m}$ from the posterior $\mathbb{P}(\mathcal{M} \in \cdot \mid \mathcal{H}_{t_{k,m}})$. If at time $t_{k,m}$ of the $m$th action of the $k$th agent and at time $t_{j,n}$ of the $n$th action of the $j$th agent the posterior is the same, $\mathbb{P}(\mathcal{M} \in \cdot \mid \mathcal{H}_{t_{k,m}}) = \mathbb{P}(\mathcal{M} \in \cdot \mid \mathcal{H}_{t_{j,n}})$, the $k$th agent and the $j$th agent will still form different MDPs. Therefore, the agents will diversify their exploration efforts. However, resampling an MDP independently at each time period may break the agent’s commitment to a sequence of actions that extends over multiple time periods. This commitment is necessary for learning in an environment with delayed consequences, and hence learning performance may suffer.

To demonstrate the importance of Property 2 (Commitment), consider the Bipolar Chain example of Section 2. Assume that the $k$th agent samples an MDP at time $t_{k,m}$ in which the leftmost edge is positive and the rightmost edge is negative. Therefore, the $k$th agent decides to move left at $t_{k,m}$. When the $k$th agent resamples an MDP at time $t_{k,m+1}$, the leftmost edge may now be negative and the rightmost edge positive due to randomness, even if no other agent has gathered information to warrant this change in the sampled MDP. As a consequence, the $k$th agent moves right at $t_{k,m+1}$, undoing his previous move, incurring unnecessary cost and, most importantly, delaying the traversal of either the leftmost or the rightmost edge, which would produce information valuable to all agents.
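The toy simulation below, with a symmetric two-point belief over which endpoint is rewarding, illustrates this failure mode: a fresh draw at every step flips the agent's intended direction at random, even though no new information has arrived. The setup is purely illustrative.

```python
import numpy as np

def resampled_directions(n_steps=10, seed=0):
    """Per-step resampling with a 50/50 belief over the rewarding endpoint:
    +1 means 'head for the right endpoint', -1 means 'head for the left one'.
    The preferred direction oscillates, so the agent mostly undoes its own moves."""
    rng = np.random.default_rng(seed)
    return [int(rng.choice([-1, 1])) for _ in range(n_steps)]

print(resampled_directions())  # e.g. a mix of +1 and -1: no sustained push towards either endpoint
```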

3.1.2 Concurrent UCRL

At time $t_{k,m}$, the $k$th agent forms confidence bounds for the reward structure $\mathcal{R}$ and the transition structure $\mathcal{P}$ that define the set of statistically plausible MDPs given the posterior $\mathbb{P}(\mathcal{M} \in \cdot \mid \mathcal{H}_{t_{k,m}})$. The $k$th agent chooses the MDP that maximizes the achievable average reward subject to these confidence bounds. This algorithm is deterministic and does not suffer from the flaw of Thompson resampling. Note, however, that if at time $t_{k,m}$ of the $m$th action of the $k$th agent and at time $t_{j,n}$ of the $n$th action of the $j$th agent the posterior is the same, $\mathbb{P}(\mathcal{M} \in \cdot \mid \mathcal{H}_{t_{k,m}}) = \mathbb{P}(\mathcal{M} \in \cdot \mid \mathcal{H}_{t_{j,n}})$, the $k$th agent and the $j$th agent will form the same MDP. Therefore, the agents may not always diversify their exploration efforts.

To demonstrate the importance of Property 3 (Diversity), consider the Parallel Chains example of Section 2 and assume that the parallel chains’ last edge weights, $\theta_1, \dots, \theta_C$, are independent. Further, assume that for any pair of chains $c \neq c'$, the prior means of $\theta_c$, $\theta_{c'}$ are the same, $\mu_c = \mu_{c'}$, but the prior variances of $\theta_c$, $\theta_{c'}$ differ, $\sigma_c^2 \neq \sigma_{c'}^2$. Then, UCRL will direct all agents to the chain with the maximum prior variance and will not diversify exploratory effort to the other chains. As the horizon $H$ gets larger, the learning performance benefits less and less from an increased number of parallel agents, and the expected mean regret per agent does not improve due to lack of diversity.
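The toy calculation below, with made-up prior parameters and an illustrative confidence multiplier, shows why a deterministic optimistic rule fails to diversify: every agent facing the same posterior makes the same argmax choice.

```python
import numpy as np

def ucb_chain_choice(prior_mean, prior_var, beta=2.0):
    """Deterministic optimistic choice of chain: index of the largest upper
    confidence bound. beta is an illustrative confidence multiplier."""
    return int(np.argmax(prior_mean + beta * np.sqrt(prior_var)))

mu = np.zeros(4)                      # equal prior means across chains
var = np.array([1.0, 2.0, 3.0, 4.0])  # differing prior variances
# Every agent who acts before any feedback arrives picks the same chain (index 3).
print([ucb_chain_choice(mu, var) for _ in range(5)])
```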

3.2 Seed Sampling Algorithms

We now present the concept of seed sampling, which offers an approach to designing efficient coordinated exploration algorithms that satisfy the three aforementioned properties. The idea is that each concurrent agent independently samples a random seed, such that the mapping from seed to MDP is determined by the prevailing posterior distribution. Independence among seeds diversifies exploratory effort among agents (Property 3). If the mapping is defined in an appropriate manner, the fact that the agent maintains a consistent seed leads to a sufficient degree of commitment (Property 2), while the fact that the posterior adapts to new data allows the agent to react intelligently to new information (Property 1).

In the subsections that follow, we discuss ways to define the mapping from seed and posterior distribution to sample. Note that the mappings we present represent special cases that apply to specific problem classes. The idea of seed sampling is broader and can be adapted to other problem classes, as we will explore further in Section 4.

Let $\phi_t$ be a deterministic function that maps a seed $z$ to an MDP $\phi_t(z)$, such that if $z$ is drawn from the seed distribution, then $\phi_t(z)$ is distributed according to the posterior over $\mathcal{M}$ given $\mathcal{H}_t$. At time $t$, the deterministic mapping $\phi_t$ is generated based on the prior and the history of observations $\mathcal{H}_t$ available up to this time. At the beginning of the episode, each $k$th agent samples a seed $z_k$. At time $t_{k,m}$, the $k$th agent forms the MDP $\mathcal{M}_{k,m} = \phi_{t_{k,m}}(z_k)$. The intuition behind the seed sampling algorithms is that each agent forms its own sample of the MDP at each time period, which is distributed according to the posterior over $\mathcal{M}$ based on all agents’ observations, while the randomness injected into the agent’s samples remains fixed throughout the horizon, allowing the agent to maintain the necessary commitment to action sequences that span multiple time periods.
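The pattern can be sketched as follows; the class and argument names are illustrative, and `mapping(posterior, seed)` stands for whichever posterior-dependent seed-to-MDP function a particular seed sampling scheme defines (Sections 3.2.1 to 3.2.3 give concrete instances).

```python
class SeedSamplingAgent:
    """Minimal sketch of the seed sampling pattern (names are illustrative).
    The seed is drawn once per episode; the seed-to-MDP mapping is rebuilt
    from the shared posterior before every action."""

    def __init__(self, rng, sample_seed):
        self.seed = sample_seed(rng)  # fixed for the whole episode (commitment)

    def form_mdp(self, shared_posterior, mapping):
        # Adaptivity comes from the posterior, which reflects all agents' data;
        # diversity comes from agents holding independently sampled seeds.
        return mapping(shared_posterior, self.seed)
```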

3.2.1 Exponential-Dirichlet Seed Sampling

The agents are uncertain about the transition structure $\mathcal{P}$, over which they hold a common Dirichlet prior. The prior over the transition probabilities $\mathcal{P}(\cdot \mid s, a)$ associated with each state-action pair $(s, a)$ is Dirichlet-distributed with parameters $\alpha_0(s, a, s')$, for $s' \in \mathcal{S}$. The Dirichlet parameters are incremented upon each observed state transition, so that at time $t$, given the history of observations $\mathcal{H}_t$, the posterior over the transition probabilities from the state-action pair $(s, a)$ is Dirichlet-distributed with parameters $\alpha_t(s, a, s')$, for $s' \in \mathcal{S}$. At the beginning of the episode, each $k$th agent samples, for every triple $(s, a, s')$, a sequence of independent and identically distributed seeds $z_k(s, a, s') = (z_{k,1}(s, a, s'), z_{k,2}(s, a, s'), \dots)$ such that $z_{k,i}(s, a, s') \sim \mathrm{Exp}(1)$. The mapping from seed to transition structure is defined as
$$\hat{\mathcal{P}}_{k,t}(s' \mid s, a) = \frac{\sum_{i=1}^{\alpha_t(s, a, s')} z_{k,i}(s, a, s')}{\sum_{s'' \in \mathcal{S}} \sum_{i=1}^{\alpha_t(s, a, s'')} z_{k,i}(s, a, s'')}.$$
Then, each $\hat{\mathcal{P}}_{k,t}(\cdot \mid s, a)$ is Dirichlet-distributed with parameters $\alpha_t(s, a, \cdot)$, due to the fact that (a) if $X_1, \dots, X_n$ are independently distributed Gamma random variables with shape parameters $\alpha_1, \dots, \alpha_n$, then the vector $(X_1, \dots, X_n) / \sum_{j=1}^{n} X_j$ is $n$-dimensional Dirichlet-distributed with parameters $\alpha_1, \dots, \alpha_n$, and (b) any Gamma random variable with integer shape parameter $\alpha$ can be represented as the sum of $\alpha$ Exp(1) random variables (Gentle, 2013). The transition structure of the sampled MDP of the $k$th agent at time $t_{k,m}$ is given by $\hat{\mathcal{P}}_{k, t_{k,m}}$.
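A minimal sketch of this mapping, assuming integer Dirichlet counts and illustrative variable names: each next state gets its own pre-sampled stream of Exp(1) seeds, and only the number of seed terms summed changes as the shared counts grow.

```python
import numpy as np

def dirichlet_from_seed(alpha_counts, exp1_seeds):
    """Map fixed Exp(1) seed streams to a Dirichlet sample with (positive integer)
    parameters alpha_counts. Gamma(n, 1) is the sum of n Exp(1) variables, and
    normalized independent Gammas are Dirichlet, so the output follows the current
    posterior while the underlying randomness never changes."""
    gammas = np.array([exp1_seeds[j][:n].sum() for j, n in enumerate(alpha_counts)])
    return gammas / gammas.sum()

rng = np.random.default_rng(0)
seeds = [rng.exponential(1.0, size=1000) for _ in range(3)]  # one stream per next state
print(dirichlet_from_seed([1, 1, 1], seeds))  # sample under the prior counts
print(dirichlet_from_seed([5, 1, 2], seeds))  # same seeds, re-mapped after observed transitions
```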

3.2.2 Standard-Gaussian Seed Sampling

The agents are uncertain about the parameters $\theta$ of the reward structure, over which they share a common normal or lognormal prior with parameters $\mu_0$ and $\Sigma_0$. The posterior over $\theta$ at time $t$, given the history of observations $\mathcal{H}_t$ available up to this time, is normal or lognormal with parameters $\mu_t$ and $\Sigma_t$. In either case, conjugacy properties result in simple rules for updating the posterior distribution’s parameters upon each observation in $\mathcal{H}_t$. Consider Example 1 and Example 3 of Section 2. At the beginning of the episode, each $k$th agent samples a seed $z_k \sim \mathcal{N}(0, I)$. In the case of a normal prior, as in Example 3 (Parallel Chains), the mapping from seed to the reward structure’s parameters is defined as $\hat{\theta}_{k,t} = \mu_t + A_t z_k$, where $A_t$ is the positive definite matrix such that $A_t A_t^\top = \Sigma_t$. Then, $\hat{\theta}_{k,t}$ is multivariate normal with mean vector $\mu_t$ and covariance matrix $\Sigma_t$ (Gentle, 2009). In the case of a lognormal prior, as in Example 1 (Maximum Reward Path), the mapping from seed to the reward structure’s parameters is $\hat{\theta}_{k,t} = \exp(\mu_t + A_t z_k)$, where $A_t$ is defined as before. Similarly, $\hat{\theta}_{k,t}$ is multivariate lognormal with parameters $\mu_t$ and $\Sigma_t$. The reward structure of the sampled MDP of the $k$th agent at time $t_{k,m}$ has parameters $\hat{\theta}_{k, t_{k,m}}$.
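A minimal sketch of this mapping (variable names are assumptions): the seed is a standard normal vector drawn once, and a factor of the current posterior covariance re-maps it whenever the shared posterior changes.

```python
import numpy as np

def gaussian_from_seed(mu, Sigma, z, lognormal=False):
    """Map a fixed standard normal seed z to a sample from the current posterior
    N(mu, Sigma), or its lognormal counterpart, using a factor A with
    A @ A.T == Sigma (here the Cholesky factor)."""
    A = np.linalg.cholesky(Sigma)
    theta = mu + A @ z
    return np.exp(theta) if lognormal else theta

rng = np.random.default_rng(1)
z = rng.standard_normal(4)  # sampled once per agent, reused at every time step
print(gaussian_from_seed(np.zeros(4), np.eye(4), z))            # under the prior
print(gaussian_from_seed(np.full(4, 0.3), 0.1 * np.eye(4), z))  # same seed, tighter posterior
```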

3.2.3 Martingalean-Gaussian Seed Sampling

The agents are uncertain about the parameters $\theta$ of the reward structure, over which they share a common normal or lognormal prior with parameters $\mu_0$ and $\Sigma_0$. Define the seed $z_k = (\hat{\theta}_k, w_k)$, where $\hat{\theta}_k$ is a sample drawn from the prior and $w_k = (w_{k,1}, w_{k,2}, \dots)$ is an IID sequence of perturbation terms. At time $t$, the history up to this time consists of the observations in $\mathcal{H}_t$. The deterministic mapping from seed to reward structure parameters is a model fit to the sample $\hat{\theta}_k$ from the prior and to the observations in $\mathcal{H}_t$ randomly perturbed by $w_k$. In the case of a normal prior, $\hat{\theta}_k \sim \mathcal{N}(\mu_0, \Sigma_0)$ and $w_k$ is an IID sequence of $\mathcal{N}(0, \sigma^2)$ terms. The mapping at time $t$ from seed to the reward structure’s parameters is defined as
$$\hat{\theta}_{k,t} = \operatorname*{arg\,min}_{\theta} \; \frac{1}{\sigma^2} \sum_{j=1}^{|\mathcal{H}_t|} \big( x_j^\top \theta - (r_j + w_{k,j}) \big)^2 + (\theta - \hat{\theta}_k)^\top \Sigma_0^{-1} (\theta - \hat{\theta}_k),$$
where $x_j$ is the one-hot vector whose positive element corresponds to the state-action pair of the $j$th observation in $\mathcal{H}_t$, $r_j$ is the reward of the $j$th observation in $\mathcal{H}_t$ and $w_{k,j}$ is the component of the seed which corresponds to the perturbation of the reward of the $j$th observation in $\mathcal{H}_t$. In the case of a lognormal prior, $w_k$ is again an IID sequence of $\mathcal{N}(0, \sigma^2)$ terms and $\hat{\theta}_k$ is such that $\ln \hat{\theta}_k \sim \mathcal{N}(\mu_0, \Sigma_0)$. The mapping at time $t$ from seed to the reward structure’s parameters is similar to the normal case, but instead of fitting to the rewards $r_j$, we fit to $\ln r_j$.

Consider again Example 3 (Parallel Chains) and Example 1 (Maximum Reward Path). At the beginning of the episode, each $k$th agent samples a seed $z_k = (\hat{\theta}_k, w_k)$ distributed as described above. At time $t$, the $k$th agent generates $\hat{\theta}_{k,t}$, which is a model fit to his sample $\hat{\theta}_k$ from the prior and to the observations in the history $\mathcal{H}_t$ perturbed by $w_k$: the fit uses the matrix $X_t$ whose $j$th row is the one-hot vector $x_j^\top$, the vector $y_t$ whose $j$th element is $r_j$ in the normal prior case and $\ln r_j$ in the lognormal prior case, and the vector $w_{k,1:|\mathcal{H}_t|}$ whose $j$th element is the perturbation $w_{k,j}$.
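The sketch below implements one version of this perturbed regularized least-squares fit under the notation assumed above (zero prior mean in the toy usage); it illustrates the construction rather than reproducing the authors' code.

```python
import numpy as np

def martingalean_gaussian_fit(theta_hat, Sigma0_inv, X, y, w, sigma2):
    """Fit combining the agent's fixed prior sample theta_hat and fixed perturbation
    stream w (both part of the seed) with the shared observations (X, y).
    Conditioned on the data, the result follows the posterior of theta."""
    n = len(y)
    precision = Sigma0_inv + (X.T @ X) / sigma2
    rhs = Sigma0_inv @ theta_hat + X.T @ (y + w[:n]) / sigma2
    return np.linalg.solve(precision, rhs)

# Toy usage: 3 reward parameters, 2 one-hot observations of parameter 0.
rng = np.random.default_rng(2)
theta_hat = rng.multivariate_normal(np.zeros(3), np.eye(3))  # prior sample (part of the seed)
w = rng.normal(0.0, 1.0, size=100)                           # perturbation stream (part of the seed)
X = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])             # one-hot rows for observed state-action pairs
y = np.array([0.8, 1.1])                                     # observed rewards
print(martingalean_gaussian_fit(theta_hat, np.eye(3), X, y, w, sigma2=1.0))
```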

Proposition 1.

Conditioned on the history $\mathcal{H}_t$ (equivalently, on $X_t$ and $y_t$), $\hat{\theta}_{k,t}$ is distributed according to the posterior of $\theta$.

Proposition 2.

For each agent $k$, denote $\hat{\theta}_{k, t_{k,m}}$ as $\hat{\theta}_{k,m}$ and consider a probability measure defined on $(\Omega, \mathcal{F})$ under which $\theta$ is deterministic and the perturbations $w_k$ are distributed $\mathcal{N}(0, \sigma^2 I)$. Then, $(\hat{\theta}_{k,m} : m = 0, 1, \dots)$ is a martingale with respect to the filtration $(\mathcal{F}_{t_{k,m}} : m = 0, 1, \dots)$.

Proposition 1 follows from Lemma 4 of (Lu and Van Roy, 2017) and is core to sampling an MDP that follows the posterior distribution based on the data gathered by all agents (Property 1). Proposition 2 follows from the definitions and motivates the name of this seed sampling algorithm.

4 Computational Results

In this section, we present computational results that demonstrate the robustness of the seed sampling algorithms of Section 3.2 relative to the baseline algorithms of Section 3.1. In Sections 4.1 and 4.2, we present two simple problems that highlight the weaknesses of concurrent UCRL and Thompson resampling and demonstrate how severely performance may suffer due to the violation of any of Properties 1, 2, 3. In Section 4.3, we demonstrate the relative efficiency of seed sampling in a more complex problem.

4.1 Bipolar Chain

Consider the directed graph of Figure 1 and the description of Example 2 in Section 2. The agents’ objective is to maximize the accrued reward. However, the agents do not know whether the leftmost edge has weight $\theta_L = \theta$ or the rightmost edge has weight $\theta_R = \theta$. The agents share a common prior that assigns equal probability to either scenario. When any of the agents traverses $e_L$ or $e_R$ for the first time, all agents learn the true values of $\theta_L$, $\theta_R$. Denote by $t^*$ the time when the true MDP is revealed. The horizon $H$ is selected in such a way that, if an agent picks the wrong direction and moves towards the wrong endpoint in every time period, and the true values of $\theta_L$, $\theta_R$ are revealed before the wrong endpoint is reached, the agent has enough time to correct his trajectory and reach the correct endpoint. The optimal reward is greater when $\theta_R = \theta$ than when $\theta_L = \theta$, because the leftmost endpoint is one vertex further from the start than the rightmost endpoint and requires the traversal of one more penalizing edge with weight $-1$. We now examine how seed sampling, concurrent UCRL and Thompson resampling behave in this setting.

In seed sampling, each $k$th agent samples a seed $z_k$, which remains fixed for the entire duration of the episode. The mapping from seed to sampled MDP at time $t$ is determined by the seed and the prevailing posterior: before the true MDP is revealed, the value of $z_k$ determines which of the two equally likely scenarios ($\theta_L = \theta$ or $\theta_R = \theta$) the agent’s sampled MDP reflects. After one of the agents traverses $e_L$ or $e_R$, the sampled MDP of each $k$th agent who has not terminated is the true MDP, satisfying Property 1. Note that in seed sampling, among the agents who start before the true MDP is revealed, i.e., with $t_{k,1} < t^*$, half go left and half go right in expectation, satisfying Property 3. Thanks to the fixed seed, the sampled MDP remains unchanged in every step of the $k$th agent’s trajectory until the true MDP is learned. Therefore, all agents commit to reaching either the left or the right endpoint of the chain, depending on the seed they sampled, until the correct direction is discovered by one of the agents, satisfying Property 2. When the correct direction is revealed, the horizon allows all agents who have picked the wrong direction but have not yet terminated to change their trajectory and eventually reach the correct endpoint.
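A hedged sketch of such a mapping, with a binary seed chosen purely for illustration (the paper's exact seed distribution is not reproduced here):

```python
def bipolar_chain_sample(seed_bit, revealed, true_right_positive, theta):
    """Returns the sampled (theta_L, theta_R) for one agent. Before the truth is
    revealed, the fixed seed bit decides which endpoint looks rewarding, so in
    expectation half the agents head left and half head right (diversity) and each
    keeps heading the same way (commitment); afterwards every agent's sample
    collapses to the revealed truth (adaptivity)."""
    right_positive = true_right_positive if revealed else bool(seed_bit)
    return (-theta, theta) if right_positive else (theta, -theta)

print(bipolar_chain_sample(seed_bit=1, revealed=False, true_right_positive=False, theta=10.0))
```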

In concurrent UCRL, the agents are initially optimistic that they can achieve the maximum attainable reward, which corresponds to the scenario in which the rightmost edge (i.e., the one closest to the start) has the positive weight, $\theta_R = \theta$. Each $k$th agent at time $t < t^*$ therefore chooses an MDP in which $\hat{\theta}_R = \theta$ and $\hat{\theta}_L = -\theta$. Consequently, all agents who start before the true MDP is revealed, i.e., with $t_{k,1} < t^*$, go right, violating Property 3. Note that, in this particular example, diversification is not essential to exploration, since going towards a single direction will still reveal the true MDP. When the correct direction is revealed and it is not the rightmost endpoint, all agents who have not terminated change their trajectory and eventually reach the correct endpoint. If the correct direction is the rightmost endpoint, no agent has to change trajectory.

In Thompson resampling, each $k$th agent at time $t_{k,m} < t^*$ samples an MDP in which either the leftmost or the rightmost edge carries the positive weight $\theta$, each with probability $0.5$. Note that for two subsequent time periods $t_{k,m}, t_{k,m+1}$, the sampled MDPs of the $k$th agent may differ purely due to randomness in drawing a fresh sample each time. Therefore, the $k$th agent who decided to go towards one direction at time $t_{k,m}$ may change his decision and go towards the opposite direction at time $t_{k,m+1}$, violating Property 2. In this setting, violation of Property 2 is detrimental to exploration. The horizon of each one of the agents is consumed in meaningless oscillations, and it is very difficult for any agent to reach either endpoint of the chain. The larger the number of vertices in the chain, the more unlikely it becomes for any agent to ever discover the true MDP.

Figure 3 shows the mean regret per agent, for a fixed number of vertices $N$ in the chain, as the number of concurrent agents increases. The figure presents averages over hundreds of simulations. As the number of agents increases, in seed sampling and concurrent UCRL more and more agents have either not yet started their trajectory or not yet moved far from the start towards the wrong direction at the moment the true MDP is revealed. Therefore, the mean regret per agent decreases. On the other hand, in Thompson resampling, the lack of commitment to exploring either direction prevents the true MDP from being discovered before the horizon expires, even for a very large number of agents. As a result, the mean regret per agent does not improve.

Figure 3: Performance of the algorithms of Section 3 in the “Bipolar Chain” example with $N$ vertices and horizon $H$, in terms of mean regret per agent as the number of agents increases.

4.2 Parallel Chains

Consider the directed graph of Figure 2 and the description of Example 3 in Section 2. The agents’ objective is to maximize the accrued reward by choosing the chain whose last edge has the largest weight. Recall that the weight of the last edge of chain $c$ is denoted by $\theta_c$. When the last edge of chain $c$ is traversed, the observed reward is a noisy observation of $\theta_c$ such that $\mathbb{E}[r] = \theta_c$. However, the agents do not know the true value of $\theta_c$ and they share a common, well-specified prior on it. Assume that all the $\theta_c$, $c = 1, \dots, C$, are independent and that the prior on the $c$th chain’s last edge weight is $\mathcal{N}(\mu_c, \sigma_c^2)$. Further assume that, for every chain, the prior mean is the same, $\mu_1 = \dots = \mu_C$, and that the prior variance increases as we move from higher to lower chains, $\sigma_1^2 < \dots < \sigma_C^2$.

In this setting, martingalean-Gaussian seed sampling, standard-Gaussian seed sampling and Thompson resampling are expected to have identical performance. Thanks to sampling their seeds independently, the martingalean-Gaussian seed sampling and standard-Gaussian seed sampling agents construct MDPs in which different chains appear to be optimal. This is also the case for Thompson resampling agents, who sample their MDPs independently from the shared posterior (initially, the prior). As a result, the martingalean-Gaussian seed sampling, standard-Gaussian seed sampling and Thompson resampling agents are directed to different chains and satisfy Property 3. Note that, unlike martingalean-Gaussian seed sampling and standard-Gaussian seed sampling, Thompson resampling does not satisfy Property 2, but in this setting this does not impact learning performance. A Thompson resampling agent may draw an MDP at time $t_{k,m}$ in which chain $c$ is optimal, but due to resampling at time $t_{k,m+1}$ his belief may change and another chain may appear to be optimal. However, since transitions from one chain to another are not possible in the directed graph of Figure 2, the agent has no choice but to continue exploring his initial choice, chain $c$. Even though Thompson resampling inherently lacks the commitment of martingalean-Gaussian seed sampling and standard-Gaussian seed sampling, the structure of the problem forces the Thompson resampling agents to maintain intent and perform equally well.

On the other hand, the concurrent UCRL agents are optimistic that they can achieve the maximum attainable reward, and they are all directed to the chain for which the upper confidence bound on the weight of the last edge is the largest, i.e., initially chain $C$, the one with the largest prior variance. Once enough agents have traversed the last edge of chain $C$ and the posterior variance on $\theta_C$ becomes lower than the prior variance on $\theta_{C-1}$, the optimism-driven behavior directs the agents who have not yet left the source to chain $C-1$. As long as there are agents who have not left the source, this pattern of exploration repeats until some agents are directed to chain $1$. The violation of Property 3 leads to a wasteful allocation of the agents’ exploratory effort, as all agents are directed to gather similar information. This is detrimental to learning performance, as an agent with a later activation time will not have all the information needed to make the optimal choice of chain that he could have made if the agents who started before him had explored all the chains.

Consider a specification of the problem with $C$ chains, horizon $H$ (or, equivalently, the number of vertices in each chain), independent priors $\mathcal{N}(\mu_c, \sigma_c^2)$ on the last-edge weights $\theta_c$, and a Gaussian likelihood centered at $\theta_c$ for the reward observed when the last edge of chain $c$ is traversed.

Figure 4(a) shows the mean regret per agent achieved by the algorithms as the number of agents increases, and Figure 4(b) shows the cumulative regret of 100,000 concurrent agents, with the agents ordered by ascending activation time. Both figures present averages over hundreds of simulations.

The figures demonstrate that UCB approaches to concurrent reinforcement learning do not efficiently coordinate, and as a result, performance may suffer severely. In order for concurrent UCRL to achieve the same mean regret per agent that martingalean-Gaussian seed sampling, standard-Gaussian seed sampling and Thompson resampling achieve with 100 agents, 100,000 agents are required.

(a) Mean regret per agent as the number of agents increases.

(b) Cumulative regret of 100,000 concurrent agents ordered in ascending activation time.
Figure 4: Performance of the algorithms of Section 3 in the “Parallel Chains” example with $C$ chains and $H$ vertices per chain.

4.3 Maximum Reward Path

We now present the performance of the algorithms in a more complex problem. Consider the description of Example 1. The agents start from the same vertex $v_0$ and their goal is to make $H$ edge traversals that return the highest reward. Initially, the agents do not know the edge weights. The edge weights are assumed to be independent and the agents share a common prior over each $\theta_e$, $e \in E$. Every time edge $e$ is traversed, the observed reward is distributed according to the likelihood associated with $\theta_e$, and the common posterior of all agents over $\theta_e$ is updated by the conjugate rules of Section 3.2.2. The $k$th agent, at time $t_{k,m}$, $m = 1, \dots, H$, constructs an MDP $\mathcal{M}_{k,m}$ and, from his current vertex $s_{k,m-1}$, computes the maximum reward path of $H - m + 1$ steps.
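For instance, under a lognormal model the posterior of each log edge weight can be updated with a standard normal-normal conjugate rule; the exact likelihood parameterization below is an assumption made for illustration, not taken from the paper.

```python
import numpy as np

def log_normal_update(mu, var, reward, obs_var):
    """Conjugate update in log space for one edge weight (illustrative assumption).
    (mu, var) describe the posterior of the log edge weight, reward > 0 is the
    observed lognormally distributed reward, and obs_var is the log-space noise variance."""
    log_obs = np.log(reward)
    new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    new_mu = new_var * (mu / var + log_obs / obs_var)
    return new_mu, new_var

print(log_normal_update(mu=0.0, var=1.0, reward=1.5, obs_var=0.25))
```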

In Thompson resampling, the $k$th agent’s sampled MDPs at time $t_{k,m}$ and time $t_{k,m+1}$ may differ significantly. Part of this difference is due to the fact that, for some edges, observations were made between $t_{k,m}$ and $t_{k,m+1}$. However, $\mathcal{M}_{k,m}$ and $\mathcal{M}_{k,m+1}$ may also have different weights for edges that were not traversed between $t_{k,m}$ and $t_{k,m+1}$, purely due to randomness. Therefore, the $k$th agent may be enticed to redirect towards an edge which has a large weight in $\mathcal{M}_{k,m+1}$ but did not have a large weight in $\mathcal{M}_{k,m}$, even if this change in beliefs is not substantiated by true observations. As a result, Thompson resampling agents violate Property 2 and are susceptible to myopic behavior, which harms the agents’ ability to explore deep in the graph in order to identify the maximum reward path of fixed length.

In concurrent UCRL, the agents are immune to the distractions suffered by Thompson resampling agents, as they construct deterministic upper confidence bounds from the common posteriors on the edge weights. However, the $k$th agent at time $t_{k,m}$ and the $j$th agent at time $t_{j,n}$ have identical beliefs on all the edges that have not been observed between $t_{k,m}$ and $t_{j,n}$. Therefore, the path chosen by the $k$th agent may be very similar or even identical to the path chosen by the $j$th agent. This lack of diversity (Property 3) delays the exploration of the graph’s edges and the identification of the maximum reward path of fixed length.

In standard-Gaussian seed sampling or martingalean-Gaussian seed sampling, each agent samples a seed independently and constructs an MDP by using this seed and the mapping detailed in Section 3.2.2 or Section 3.2.3 respectively. The fact that each agent samples a seed independently leads, thanks to randomness, to MDPs with very different edge weights for the edges that have not been traversed yet. As a result, agents pursue diverse paths. At the same time, maintaining a constant seed ensures that each agent adjusts his beliefs in subsequent time periods in a manner that is consistent with the observations made by all agents and not driven by further randomness that would be distracting to the agent’s exploration.

Consider a specification of the problem in which we sample Erdős-Rényi graphs with $|V|$ vertices and edge probability $p$. The edge weights are independent, and the common prior of the agents on each edge weight $\theta_e$ is lognormal with parameters $\mu_0$ and $\sigma_0^2$. When edge $e$ is traversed, the observed reward is a lognormally distributed noisy observation of $\theta_e$. The horizon (i.e., the length of the maximum reward path to be found) is $H$.

In Figure 5, we show the performance of all algorithms. The results are averaged over hundreds of simulations. Standard-Gaussian seed sampling and martingalean-Gaussian seed sampling achieve the lowest cumulative regret, as they adhere to all three properties of coordinated exploration. Concurrent UCRL does not satisfy the diversity property and incurs 49.9% higher cumulative regret than the seed sampling algorithms, but it does better than Thompson resampling because, unlike the latter, it satisfies the commitment property, which is essential for deep exploration. Thompson resampling incurs 191% higher cumulative regret than the seed sampling algorithms.

Figure 5: Performance of the algorithms of Section 3 and of the Greedy algorithm in the “Maximum Reward Path” example with $|V|$ nodes, edge probability $p$ and horizon $H$, in terms of cumulative regret of 10,000 concurrent agents ordered in ascending activation time.

5 Closing Remarks

Concurrent reinforcement learning is poised to play an important role across many applications, ranging from web services to autonomous vehicles to healthcare, where each agent is responsible for a user, vehicle or patient. To learn efficiently in such settings, agents should coordinate their exploratory effort. We presented three properties that are essential to efficient coordinated exploration: real-time adaptivity to shared observations, commitment to carrying through with action sequences that reveal new information, and diversity across the learning opportunities pursued by different agents. We demonstrated that optimism-based approaches fall short with respect to diversity, while naive extensions of Thompson sampling lack the requisite level of commitment. We proposed seed sampling, a novel extension of PSRL that satisfies all three properties. We presented several seed sampling schemes, customized for particular priors and likelihoods, but the seed sampling concept transcends these specific cases, offering a general approach to designing effective coordination algorithms for concurrent reinforcement learning. Much work remains to be done on this topic. For starters, it would be useful to develop a mathematical theory that sharpens understanding of the efficiency and robustness of seeding schemes. Beyond that, work is required to develop seeding schemes that operate in conjunction with practical, scalable reinforcement learning algorithms that approximate optimal value functions and policies for problems with intractable state and action spaces.

References

  • Gentle (2009) James E. Gentle. Computational Statistics. Springer, 2009.
  • Gentle (2013) James E. Gentle. Random Number Generation and Monte Carlo Methods. Springer, 2013.
  • Guo and Brunskill (2015) Z. Guo and E. Brunskill. Concurrent PAC RL. In AAAI Conference on Artificial Intelligence, pages 2624–2630, 2015.
  • Jaksch et al. (2010) Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
  • Kearns and Singh (2002) Michael J. Kearns and Satinder P. Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.
  • Kim (2017) Michael Jong Kim. Thompson sampling for stochastic control: The finite parameter case. IEEE Transactions on Automatic Control, 2017.
  • Lu and Van Roy (2017) Xiuyuan Lu and Benjamin Van Roy. Ensemble sampling. In NIPS, 2017.
  • Osband and Van Roy (2014a) Ian Osband and Benjamin Van Roy. Model-based reinforcement learning and the eluder dimension. In Advances in Neural Information Processing Systems, pages 1466–1474, 2014a.
  • Osband and Van Roy (2014b) Ian Osband and Benjamin Van Roy. Near-optimal reinforcement learning in factored MDPs. In Advances in Neural Information Processing Systems, pages 604–612, 2014b.
  • Osband and Van Roy (2017a) Ian Osband and Benjamin Van Roy. On optimistic versus randomized exploration in reinforcement learning. In The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, 2017a.
  • Osband and Van Roy (2017b) Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? In ICML, 2017b.
  • Osband et al. (2013) Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In NIPS, pages 3003–3011. Curran Associates, Inc., 2013.
  • Pazis and Parr (2013) Jason Pazis and Ronald Parr. PAC optimal exploration in continuous space Markov decision processes. In AAAI, 2013.
  • Pazis and Parr (2016) Jason Pazis and Ronald Parr. Efficient PAC-optimal exploration in concurrent, continuous state MDPs with delayed updates. In AAAI, 2016.
  • Russo et al. (2017) Daniel Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. A tutorial on Thompson sampling. arXiv preprint arXiv:1707.02038, 2017.
  • Silver et al. (2013) D. Silver, L. Newnham, D. Barker, S. Weller, and J. McFall. Concurrent reinforcement learning from customer interactions. In Proceedings of The 30th International Conference on Machine Learning, pages 924–932, 2013.
  • Strens (2000) Malcolm J. A. Strens. A Bayesian framework for reinforcement learning. In ICML, pages 943–950, 2000.