Multi-Agent Thompson Sampling for Bandit Applications with Sparse Neighbourhood Structures

11/22/2019 · Timothy Verstraeten, et al.

Multi-agent coordination is prevalent in many real-world applications. However, such coordination is challenging due to its combinatorial nature. An important observation in this regard is that agents in the real world often only directly affect a limited set of neighbouring agents. Leveraging such loose couplings among agents is key to making coordination in multi-agent systems feasible. In this work, we focus on learning to coordinate. Specifically, we consider the multi-agent multi-armed bandit framework, in which fully cooperative loosely-coupled agents must learn to coordinate their decisions to optimize a common objective. We propose multi-agent Thompson sampling (MATS), a new Bayesian exploration-exploitation algorithm that leverages loose couplings. We provide a regret bound that is sublinear in time and low-order polynomial in the highest number of actions of a single agent for sparse coordination graphs. Additionally, we empirically show that MATS outperforms the state-of-the-art algorithm, MAUCE, on two synthetic benchmarks and a novel benchmark with Poisson distributions. An example of a loosely-coupled multi-agent system is a wind farm. Coordination within the wind farm is necessary to maximize power production. As upstream wind turbines only affect nearby downstream turbines, we can use MATS to efficiently learn the optimal control mechanism for the farm. To demonstrate the benefits of our method toward applications, we apply MATS to a realistic wind farm control task. In this task, wind turbines must coordinate their alignments with respect to the incoming wind vector in order to optimize power production. Our results show that MATS improves significantly upon state-of-the-art coordination methods in terms of performance, demonstrating the value of using MATS in practical applications with sparse neighbourhood structures.

Introduction

Multi-agent decision coordination is prevalent in many real-world applications, such as traffic light control [39], warehouse commissioning [11] and wind farm control [14]. Often, such settings can be formulated as coordination problems in which agents have to cooperate in order to optimize a shared team reward [5].

Handling multi-agent settings is challenging, as the size of the joint action space scales exponentially with the number of agents in the system. Therefore, an approach that directly considers all agents’ actions jointly is computationally intractable. This has made such coordination problems the central focus in the planning literature [22, 15, 16, 17]. Fortunately, in real-world settings agents often only directly affect a limited set of neighbouring agents. This means that the global reward received by all agents can be decomposed into local components that only depend on small subsets of agents. Exploiting such loose couplings is key in order to keep multi-agent decision problems tractable [9].

In this work, we consider learning to coordinate in multi-agent systems. For example, consider a wind farm control task, in which the farm comprises a set of wind turbines and we aim to maximize its total productivity. When upstream turbines directly face the incoming wind stream, they extract energy from the wind. This reduces the productivity of downstream turbines, potentially harming the overall power production. However, turbines have the option to rotate, in order to deflect the turbulent flow away from turbines downwind [35]. Due to the complex nature of the aerodynamic interactions between the turbines, constructing a model of the environment and deriving a control policy using planning techniques is extremely challenging [27]. Instead, a joint control policy among the turbines can be learned to effectively maximize the productivity of the wind farm. The system is loosely coupled, as redirection only directly affects adjacent turbines.

While most of the literature only considers approximate reinforcement learning methods for learning in multi-agent systems, it has recently been shown [4] that it is possible to achieve theoretical bounds on the regret (i.e., how much reward is lost due to learning). In this work, we use the multi-agent multi-armed bandit problem definition, and improve upon the state of the art. Specifically, we propose the multi-agent Thompson sampling (MATS) algorithm, which exploits loosely-coupled interactions in multi-agent systems. The loose couplings are formalized as a coordination graph, which defines for each pair of agents whether their actions depend on each other. We assume the graph structure is known beforehand, which is the case in many real-world applications with sparse agent interactions (e.g., wind farm control).

Our method leverages the exploration-exploitation mechanism of Thompson sampling (TS). TS has been shown to be highly competitive with other popular methods, e.g., UCB [8]. Recently, theoretical guarantees on its regret have been established [1], which has rendered the method increasingly popular in the literature. Additionally, due to its Bayesian nature, problem-specific priors can be specified. We argue that this has strong relevance in many practical fields, such as advertisement selection [8] and influenza mitigation [25, 24].

We provide a finite-time Bayesian regret analysis and prove that the upper regret bound of MATS is low-order polynomial in the number of actions of a single agent for sparse coordination graphs (Corollary 1). This is a significant improvement over the exponential bound of classic TS, which is obtained when the coordination graph is ignored [1]. We show that MATS improves upon the state of the art in various synthetic settings. Finally, we demonstrate that MATS achieves high performance on a realistic wind farm control task, in which multiple wind turbines have to be jointly aligned to maximize the total power production.

Problem statement

In this work, we adopt the multi-agent multi-armed bandit (MAMAB) setting [4, 32]. A MAMAB is similar to the multi-armed bandit formalism [34], but considers multiple agents factored into groups. When the agents have pulled a joint arm, each group receives a reward. The goal shared by all agents is to maximize the total sum of rewards. Formally,

Definition 1.

A multi-agent multi-armed bandit (MAMAB) is a tuple $\langle \mathcal{D}, \mathcal{A}, f \rangle$ where

  • $\mathcal{D} = \{1, \dots, m\}$ is the set of $m$ enumerated agents. This set is factorized into $\rho$, possibly overlapping, subsets of agents $\mathcal{D}^e$.

  • $\mathcal{A} = \mathcal{A}_1 \times \dots \times \mathcal{A}_m$ is the set of joint actions, or joint arms, which is the Cartesian product of the sets of actions $\mathcal{A}_i$ for each of the agents in $\mathcal{D}$. We denote by $\mathcal{A}^e$ the set of local joint actions, or local arms, for the group $\mathcal{D}^e$.

  • $f(\boldsymbol{a})$ is a stochastic function providing a global reward when a joint arm, $\boldsymbol{a} \in \mathcal{A}$, is pulled. The global reward function is decomposed into noisy, observable and independent local reward functions, i.e., $f(\boldsymbol{a}) = \sum_{e=1}^{\rho} f^e(\boldsymbol{a}^e)$. A local function $f^e$ only depends on the local arm $\boldsymbol{a}^e$ of the subset of agents in $\mathcal{D}^e$.

We denote the mean reward of a joint arm as $\mu(\boldsymbol{a}) = \sum_{e=1}^{\rho} \mu^e(\boldsymbol{a}^e)$. For simplicity, we refer to the $i$-th agent by its index $i$.

The dependencies between the local reward functions and the agents are described as a coordination graph [16].

Definition 2.

A coordination graph is a bipartite graph $G = \langle \mathcal{D}, \{f^e\}_{e=1}^{\rho}, E \rangle$, whose nodes are the agents in $\mathcal{D}$ and the components $f^e$ of a factored reward function $f = \sum_{e=1}^{\rho} f^e$, and an edge $\langle i, f^e \rangle \in E$ exists if and only if agent $i$ influences component $f^e$.

The dependencies in a MAMAB can be described by setting $E = \{\langle i, f^e \rangle \mid i \in \mathcal{D}^e\}$.
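To make the factored structure concrete, the following minimal Python sketch (our own illustration with hypothetical names, not code from the paper) represents a MAMAB by storing, for each local reward component $f^e$, the subset of agents $\mathcal{D}^e$ it depends on; the coordination graph of Definition 2 is exactly this membership relation.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple
import itertools

@dataclass
class FactoredMAMAB:
    """A factored multi-agent multi-armed bandit (hypothetical sketch).

    n_actions[i]  -- number of arms of agent i (the set A_i).
    groups[e]     -- tuple of agent indices D^e on which local reward f^e depends.
    local_reward  -- local_reward(e, local_arm) returns a noisy sample of f^e.
    """
    n_actions: Sequence[int]
    groups: Sequence[Tuple[int, ...]]
    local_reward: Callable[[int, Tuple[int, ...]], float]

    def local_arm(self, e: int, joint_arm: Sequence[int]) -> Tuple[int, ...]:
        """Project a joint arm a onto the local arm a^e of group D^e."""
        return tuple(joint_arm[i] for i in self.groups[e])

    def pull(self, joint_arm: Sequence[int]) -> list:
        """Pull a joint arm and observe one noisy local reward per group."""
        return [self.local_reward(e, self.local_arm(e, joint_arm))
                for e in range(len(self.groups))]

    def joint_arms(self):
        """Enumerate the full joint arm space (exponential in the number of agents)."""
        return itertools.product(*(range(k) for k in self.n_actions))
```

Listing the groups explicitly is what later allows the maximization in Equation 5 to be carried out with variable elimination instead of enumerating the exponentially large joint arm space.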

In this setting, the objective is to minimize the expected cumulative regret [2], which is the cost incurred when pulling a particular joint arm instead of the optimal one.

Definition 3.

The expected cumulative regret of pulling a sequence of joint arms until time step $T$ according to policy $\pi$ is

$\mathbb{E}\left[R(T, \pi)\right] = \mathbb{E}\left[\sum_{t=1}^{T} \Delta(\boldsymbol{a}_t) \,\middle|\, \pi\right]$ (1)

with

$\Delta(\boldsymbol{a}_t) = \mu(\boldsymbol{a}_*) - \mu(\boldsymbol{a}_t) = \sum_{e=1}^{\rho} \left(\mu^e(\boldsymbol{a}_*^e) - \mu^e(\boldsymbol{a}_t^e)\right)$ (2)

where $\boldsymbol{a}_*$ is the optimal joint arm and $\boldsymbol{a}_t$ is the joint arm pulled at time $t$. For the sake of brevity, we will omit $\pi$ when the context is clear.

Cumulative regret can be minimized by using a policy that considers the full joint arm space, thereby ignoring loose couplings between agents. This leads to a combinatorial problem, as the joint arm space scales exponentially with the number of agents. Therefore, loose couplings need to be taken into account whenever possible.

Multi-agent Thompson sampling

We propose the multi-agent Thompson sampling (MATS) algorithm for decision making in loosely-coupled multi-agent multi-armed bandit problems. Consider a MAMAB with $\rho$ groups (Definition 1). The local means $\mu^e(\boldsymbol{a}^e)$ are treated as unknown. According to the Bayesian formalism, we exert our beliefs over the local means in the form of a prior. At each time step $t$, MATS draws a sample $\tilde{\mu}^e_t(\boldsymbol{a}^e)$ from the posterior for each group and local arm given the history $H_{t-1}$, consisting of the local actions and rewards associated with past pulls:

$\tilde{\mu}^e_t(\boldsymbol{a}^e) \sim P\left(\mu^e(\boldsymbol{a}^e) \mid H_{t-1}\right)$ (3)

Note that during this step, MATS samples directly from the posterior over the unknown local means, which implies that the sample $\tilde{\mu}^e_t(\boldsymbol{a}^e)$ and the unknown mean $\mu^e(\boldsymbol{a}^e)$ are independent and identically distributed at time step $t$.

Thompson sampling (TS) chooses the arm with the highest sample, i.e.,

$a_t = \arg\max_{a} \tilde{\mu}_t(a)$ (4)

However, in our case, the expected reward is decomposed into several local means. As conflicts between overlapping groups will arise, the optimal local arms for an agent in two groups may differ. Therefore, we must define the argmax-operator such that it deals with the factored representation of a MAMAB, while still returning the full joint arm that maximizes the sum of samples, i.e.,

$\boldsymbol{a}_t = \arg\max_{\boldsymbol{a} \in \mathcal{A}} \sum_{e=1}^{\rho} \tilde{\mu}^e_t(\boldsymbol{a}^e)$ (5)

To this end, we use variable elimination (VE), which computes the joint arm that maximizes the global reward without explicitly enumerating over the full joint arm space [16]. Specifically, VE consecutively eliminates an agent from the coordination graph, while computing its best response with respect to its neighbours. VE is guaranteed to return the optimal joint arm and has a computational complexity that is combinatorial in terms of the induced width of the graph, i.e., the number of neighbours of an agent at the time of its elimination. However, as the method is typically applied to a loosely-coupled coordination graph, the induced width is generally much smaller than the size of the full joint action space, which renders the maximization problem tractable [16, 17]. Approximate efficient alternatives exist, such as max-plus [38], but using them will invalidate the proof for the Bayesian regret bound (Theorem 1).
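As an illustration of this step, the sketch below implements variable elimination for maximizing a sum of tabular local samples (a minimal version under our own naming conventions, not the authors' code). Each factor pairs a scope (a tuple of agent indices) with a table mapping local arms to sampled values; agents are eliminated one at a time, their best responses are recorded, and the maximizing joint arm is recovered by back-substitution.

```python
import itertools

def variable_elimination(n_actions, factors, elimination_order=None):
    """Return the joint arm maximizing a sum of local factors.

    n_actions -- dict: agent -> number of actions of that agent.
    factors   -- list of (scope, table) pairs, where scope is a tuple of agents
                 and table maps a local arm (tuple of actions, in scope order)
                 to a sampled value.
    """
    if elimination_order is None:
        elimination_order = list(n_actions)
    factors = list(factors)
    best_responses = []  # (agent, neighbour scope, table: neighbour arm -> best action)

    for agent in elimination_order:
        related = [f for f in factors if agent in f[0]]
        others = [f for f in factors if agent not in f[0]]
        # Neighbours of the agent at elimination time (the induced scope).
        scope = tuple(sorted({i for s, _ in related for i in s if i != agent}))
        new_table, response = {}, {}
        for ctx in itertools.product(*(range(n_actions[i]) for i in scope)):
            assignment = dict(zip(scope, ctx))
            best_val, best_act = None, None
            # Compute the agent's best response for this neighbour context.
            for act in range(n_actions[agent]):
                assignment[agent] = act
                val = sum(table[tuple(assignment[i] for i in s)] for s, table in related)
                if best_val is None or val > best_val:
                    best_val, best_act = val, act
            new_table[ctx], response[ctx] = best_val, best_act
        factors = others + [(scope, new_table)]
        best_responses.append((agent, scope, response))

    # Back-substitute the recorded best responses in reverse elimination order.
    joint_arm = {}
    for agent, scope, response in reversed(best_responses):
        joint_arm[agent] = response[tuple(joint_arm[i] for i in scope)]
    return joint_arm
```

For a three-agent chain with factors over agents (0, 1) and (1, 2), for instance, `variable_elimination({0: 2, 1: 2, 2: 2}, [((0, 1), t01), ((1, 2), t12)])` returns the maximizing joint arm as a dictionary from agent index to action, eliminating each agent against at most one remaining neighbour.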

Finally, the joint arm $\boldsymbol{a}_t$ that maximizes Equation 5 is pulled, and a reward will be obtained for each group. MATS is formally described in Algorithm 1.

Data: Prior per group and local action
for $t = 1, 2, \dots$ do
       Sample $\tilde{\mu}^e_t(\boldsymbol{a}^e) \sim P\left(\mu^e(\boldsymbol{a}^e) \mid H_{t-1}\right)$ for every group $e$ and local arm $\boldsymbol{a}^e$
       $\boldsymbol{a}_t \leftarrow \arg\max_{\boldsymbol{a}} \sum_{e=1}^{\rho} \tilde{\mu}^e_t(\boldsymbol{a}^e)$ using VE
       Pull joint arm $\boldsymbol{a}_t$
       Observe local rewards $f^e(\boldsymbol{a}^e_t)$ for every group $e$
       Add the pulled local arms and observed rewards to the history $H_t$
end for
Algorithm 1 MATS
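To tie the pieces together, here is a compact, self-contained sketch of the MATS loop for Bernoulli local rewards with Jeffreys Beta(1/2, 1/2) posteriors (our own example; names and structure are ours, not the authors' code). For brevity it performs the maximization of Equation 5 by brute-force enumeration of the joint arm space; MATS itself uses variable elimination for this step, as sketched above.

```python
import itertools
import numpy as np

def mats_bernoulli(n_actions, groups, pull_local, horizon, rng=None):
    """MATS sketch for Bernoulli local rewards with Jeffreys Beta(1/2, 1/2) priors.

    n_actions  -- list with the number of actions per agent.
    groups     -- list of tuples of agent indices (one tuple per local reward f^e).
    pull_local -- pull_local(e, local_arm) -> reward in {0, 1}.
    """
    rng = rng or np.random.default_rng()
    # Beta posterior parameters per (group, local arm): successes + 1/2, failures + 1/2.
    alpha = {(e, la): 0.5 for e, g in enumerate(groups)
             for la in itertools.product(*(range(n_actions[i]) for i in g))}
    beta = dict(alpha)
    history = []

    joint_arms = list(itertools.product(*(range(k) for k in n_actions)))
    for t in range(horizon):
        # 1. Sample a mean for every group and local arm from its posterior.
        theta = {key: rng.beta(alpha[key], beta[key]) for key in alpha}
        # 2. Choose the joint arm maximizing the sum of sampled local means
        #    (brute force here; MATS uses variable elimination instead).
        def score(a):
            return sum(theta[(e, tuple(a[i] for i in g))] for e, g in enumerate(groups))
        a_t = max(joint_arms, key=score)
        # 3. Pull the joint arm and update each group's posterior with its local reward.
        for e, g in enumerate(groups):
            la = tuple(a_t[i] for i in g)
            r = pull_local(e, la)
            alpha[(e, la)] += r
            beta[(e, la)] += 1 - r
            history.append((t, e, la, r))
    return history
```

Swapping the brute-force `max` for a variable-elimination routine recovers the factored maximization without changing the posterior updates.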

MATS belongs to the class of probability matching methods [23].

Definition 4.

Given history $H_{t-1}$, the probability distribution of the pulled arm $\boldsymbol{a}_t$ is equal to the probability distribution of the optimal arm $\boldsymbol{a}_*$. Formally,

$P(\boldsymbol{a}_t = \boldsymbol{a} \mid H_{t-1}) = P(\boldsymbol{a}_* = \boldsymbol{a} \mid H_{t-1})$ (6)

Intuitively, MATS samples the local mean rewards according to the beliefs of the user at each time step, and maximizes over those means to find the optimal joint arm according to Definition 1. This process is conceptually similar to traditional TS [34].

Bayesian regret analysis

Many multi-agent systems are composed of locally connected agents. When formalized as a MAMAB (Definition 1), our method is able to exploit these local structures during the decision process. We provide a regret bound for MATS that scales sublinearly with a factor $\sqrt{AT}$, where $A = \sum_{e=1}^{\rho} |\mathcal{A}^e|$ is the total number of local arms.

Consider a MAMAB with $\rho$ groups and the following assumptions on the rewards:

Assumption 1.

The global rewards have a mean between 0 and 1, i.e., $\mu(\boldsymbol{a}) \in [0, 1]$ for all $\boldsymbol{a} \in \mathcal{A}$.

Assumption 2.

The local rewards shifted by their mean are $\sigma$-subgaussian distributed, i.e., $f^e(\boldsymbol{a}^e) - \mu^e(\boldsymbol{a}^e)$ is $\sigma$-subgaussian for all groups $e$ and local arms $\boldsymbol{a}^e$.

We maintain the pull counters $n^e_t(\boldsymbol{a}^e)$ and estimated means $\hat{\mu}^e_t(\boldsymbol{a}^e)$ for all local arms $\boldsymbol{a}^e$.

Consider the event $E_T$, which states that, until time step $T$, the differences between the local sample means and true means are bounded by a time-dependent threshold, i.e.,

$\forall t \le T,\ \forall e,\ \forall \boldsymbol{a}^e:\ \left| \hat{\mu}^e_t(\boldsymbol{a}^e) - \mu^e(\boldsymbol{a}^e) \right| \le c_t(\boldsymbol{a}^e)$ (7)

with threshold $c_t(\boldsymbol{a}^e)$ given by

(8)

where $\delta$ is a free parameter that will be chosen later. We denote the complement of the event by $\bar{E}_T$.

Lemma 1.

(Concentration inequality) The probability of exceeding the error bound on the local sample means is bounded linearly in the free parameter $\delta$. Specifically,

(9)
Proof.

Using the union bound (U), we can bound the probability of observing event $\bar{E}_T$ as

(10)

The estimated mean $\hat{\mu}^e_t(\boldsymbol{a}^e)$ is a weighted sum of random variables distributed according to a $\sigma$-subgaussian distribution with mean $\mu^e(\boldsymbol{a}^e)$. Hence, Hoeffding's inequality (H) is applicable [36].

(11)
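For reference, the standard Hoeffding-type concentration bound for the average of $n$ i.i.d. $\sigma$-subgaussian random variables $X_1, \dots, X_n$ with mean $\mu$, which this step relies on, reads as follows (a generic statement; the constants in the original Equation 11 may differ):

$P\left( \left| \frac{1}{n}\sum_{k=1}^{n} X_k - \mu \right| \ge \varepsilon \right) \;\le\; 2\exp\left( -\frac{n\,\varepsilon^{2}}{2\sigma^{2}} \right).$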

Therefore, the following concentration inequality on $\bar{E}_T$ holds:

(12)

Lemma 2.

(Bayesian regret bound under $E_T$) Provided that the error bound on the local sample means is never exceeded until time $T$, the Bayesian regret bound, when using the MATS policy $\pi$, is of the order

(13)
Proof.

Consider this upper bound on the sample means:

(14)

Given history $H_{t-1}$, the statistics $n^e_t(\boldsymbol{a}^e)$ and $\hat{\mu}^e_t(\boldsymbol{a}^e)$ are known, rendering the upper bound in Equation 14 a deterministic function. Therefore, the probability matching property of MATS (Equation 6) can be applied as follows:

(15)

Hence, using the tower-rule (T), the regret can be bounded as

(16)

Note that the expression is always negative under $E_T$, i.e.,

(17)

while the remaining term is bounded by twice the threshold, i.e.,

(18)

Thus, Equation 16 can be bounded as

(19)

where $\mathbb{I}[\cdot]$ is the indicator function. The terms in the summation are only non-zero at the time steps when the local action $\boldsymbol{a}^e$ is pulled, i.e., when $\boldsymbol{a}^e_t = \boldsymbol{a}^e$. Additionally, note that only at these time steps does the counter $n^e_t(\boldsymbol{a}^e)$ increase by exactly 1. Therefore, the following equality holds:

(20)

The function $x \mapsto 1/\sqrt{x}$ is decreasing and integrable. Hence, using the right Riemann sum,

(21)

Combining Equations 19-21 leads to a bound

(22)

We use the relationship between the 1-norm and 2-norm of a vector $\boldsymbol{x}$, i.e., $\|\boldsymbol{x}\|_1 \le \sqrt{k}\,\|\boldsymbol{x}\|_2$, where $k$ is the number of elements in the vector, as follows:

(23)

Finally, note that the sum of all counts is equal to the total number of local pulls done by MATS until time $T$, i.e.,

$\sum_{e=1}^{\rho} \sum_{\boldsymbol{a}^e \in \mathcal{A}^e} n^e_T(\boldsymbol{a}^e) = \rho T$ (24)

Using Equations 22–24, the complete regret bound under $E_T$ is given by

(25)

Theorem 1.

Let $\langle \mathcal{D}, \mathcal{A}, f \rangle$ be a MAMAB. If Assumptions 1 and 2 hold, then the MATS policy $\pi$ satisfies a Bayesian regret bound of

(26)
Proof.

Using the law of excluded middle (M) and the fact that $\mu(\boldsymbol{a}_*)$ and $\mu(\boldsymbol{a}_t)$ are between 0 and 1 (B), the regret can be decomposed as

(27)

Then, according to Lemmas 1 and 2 (L), we have

(28)

Finally, choosing the free parameter $\delta$ accordingly, we conclude that

(29)

Corollary 1.

If $|\mathcal{A}_i| \le k$ for all agents $i$, and if $|\mathcal{D}^e| \le d$ for all groups $e$, then

(30)
Proof.

Substitute the bounds on $|\mathcal{A}_i|$ and $|\mathcal{D}^e|$ into Theorem 1. ∎

Corollary 1 tells us that the regret is sub-linear in terms of time and low-order polynomial in terms of the largest action space of a single agent when the number of groups and agents per group are small. This reflects the main contribution of this work. When agents are loosely coupled, the effective joint arm space is significantly reduced, and MATS provides a mechanism that efficiently deals with such settings. This is a significant improvement over the established classic regret bounds of vanilla TS when the MAMAB is ‘flattened’ and the factored structure is neglected [30, 23]. The classic bounds scale exponentially with the number of agents, which renders the use of vanilla TS infeasible in many multi-agent environments.

Experiments

We evaluate the performance of MATS on the benchmark problems proposed in the paper that introduced MAUCE [4], which is the current state-of-the-art algorithm for multi-agent bandit problems, and on one novel setting that falls outside the domain of the theoretical guarantees of both MAUCE and MATS. First, we evaluate the performance of MATS on two benchmarks that were introduced in the MAUCE paper, i.e., the Bernoulli 0101-Chain and Gem Mining. We compare against a random policy (rnd), Sparse Cooperative Q-Learning (SCQL) [20] and the state-of-the-art algorithm, MAUCE [4]. For SCQL and MAUCE, we use the same exploration parameters as in previous work [4]. For MATS, we always use non-informative Jeffreys priors, which are invariant to reparametrization of the experimental settings [29]. Although including additional prior domain knowledge could be useful in practice, we use well-known non-informative priors in our experiments to compare fairly with other state-of-the-art techniques. Then, we introduce a novel variant of the 0101-Chain with Poisson-distributed local rewards. A Poisson distribution is supergaussian, meaning that its tails decay more slowly towards zero than the tails of any Gaussian. Therefore, both the assumptions made in Theorem 1 and those in the established regret bound of MAUCE are violated. Additionally, as the rewards are highly skewed, we expect that the use of symmetric exploration bounds in MAUCE will often lead to either over- or underexploration of the local arms. We assess the performance of both methods on this benchmark.

Bernoulli 0101-Chain

The Bernoulli 0101-Chain consists of $m$ agents and $m - 1$ local reward distributions. Each agent can choose between two actions: 0 and 1. In the coordination graph, agents $i$ and $i + 1$ are connected to a local reward $f^i(a_i, a_{i+1})$. Thus, each pair of agents should locally coordinate in order to find the best joint arm. The local rewards are drawn from a Bernoulli distribution with a different success probability per group. These success probabilities are given in Table 1. The optimal joint action is an alternating sequence of zeros and ones, starting with 0.

Table 1: Bernoulli 0101-Chain – The unscaled local reward distributions of agents $i$ and $i + 1$, where $i$ is even. Each entry shows the success probability for each local arm of agents $i$ and $i + 1$. The table is transposed for the case where $i$ is odd.

To ensure that the assumptions made in the regret analyses of MAUCE and MATS hold, we divide the local rewards by the number of groups, such that the global rewards are between 0 and 1.

Figure 1: Cumulative normalized regret averaged over 100 runs for the (a) Bernoulli 0101-Chain, (b) Gem Mining and (c) Poisson 0101-Chain. Both the mean (line) and standard deviation (shaded area) are plotted. (The wind farm results in Figure 3(b) are averaged over 10 runs.)

We provide non-informative Jeffreys priors on the unknown means to MATS, which for the Bernoulli likelihood is a Beta prior, $\mathcal{B}(1/2, 1/2)$ [26]. The results for the Bernoulli 0101-chains are shown in Figure 1(a).
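Concretely, under the Jeffreys prior the Bernoulli posterior stays in the Beta family, so the sampling step of MATS reduces to one Beta draw per local arm. A minimal sketch of this update (our own illustration):

```python
import numpy as np

def sample_bernoulli_mean(successes: int, failures: int, rng=None) -> float:
    """Draw one posterior sample of a Bernoulli mean under the Jeffreys prior.

    Prior: Beta(1/2, 1/2).  Posterior after the observed pulls:
    Beta(successes + 1/2, failures + 1/2).
    """
    rng = rng or np.random.default_rng()
    return rng.beta(successes + 0.5, failures + 0.5)

# Example: a local arm pulled 10 times with 7 successes.
print(sample_bernoulli_mean(7, 3))
```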

Gem Mining

Figure 2: Example of a coordination graph in the Gem Mining problem. The red nodes are the mines (rewards), while the blue nodes are the villages (agents).

In the Gem Mining problem, a mining company wants to excavate a set of mines for gems (i.e., local rewards). The goal is to maximize the total number of gems found over all mines. However, the company’s workers live in separate villages (i.e., agents), and only one van per village is available. Therefore, each village needs to decide to which mine it should send its workers (i.e., local action). Moreover, workers can only commute to nearby mines (i.e., coordination graph). Hence, a group can be constructed per mine, consisting of all agents that can travel toward the mine. An example of a coordination graph is given in Figure 2.

The reward is drawn from a Bernoulli distribution, where the probability of finding a gem at a mine increases with the number of workers at the mine, starting from a base probability that is sampled uniformly at random from a fixed interval for each mine. When more workers are excavating a mine, the probability of finding a gem increases. Each village is populated by a number of workers sampled uniformly at random. The coordination graph is generated by sampling, for each village, the number of mines to which it should be connected, and connecting the village to that many nearby mines. The last village is always connected to 4 mines.
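Since the exact gem-probability formula and the sampling intervals are not reproduced here, the sketch below (our own, heavily hypothetical illustration) only shows how such an instance maps onto the MAMAB structure: one group per mine containing the villages that can reach it, with the local Bernoulli success probability left as a user-supplied function of the number of workers sent to that mine.

```python
import numpy as np

def gem_mining_groups(village_to_mines):
    """Build one group per mine: the villages (agents) that can send workers to it.

    village_to_mines -- list where entry v is the list of mine indices reachable
                        from village v (the coordination graph of Figure 2).
    """
    n_mines = 1 + max(m for mines in village_to_mines for m in mines)
    return [tuple(v for v, mines in enumerate(village_to_mines) if m in mines)
            for m in range(n_mines)]

def local_reward(mine, local_arm, group, workers, gem_probability, rng=None):
    """Bernoulli local reward of one mine (hypothetical sketch).

    local_arm[k] is the mine chosen by the k-th village in `group`; only villages
    that actually chose `mine` contribute their workers.  `gem_probability` maps a
    worker count to a success probability (its exact form is not specified here).
    """
    rng = rng or np.random.default_rng()
    n_workers = sum(workers[v] for v, choice in zip(group, local_arm) if choice == mine)
    return float(rng.random() < gem_probability(n_workers))
```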

We provide non-informative Jeffreys priors on the unknown means to MATS, which for the Bernoulli likelihood is a Beta prior, $\mathcal{B}(1/2, 1/2)$ [26]. The results for the Gem Mining problem are shown in Figure 1(b).

Poisson 0101-Chain

We introduce a novel benchmark with Poisson distributed local rewards, for which the established regret bounds of MATS and MAUCE do not hold. Similar to the Bernoulli 0101-Chain, agents need to coordinate their actions in order to obtain an alternating sequence of zeroes and ones. However, as the rewards are highly skewed and supergaussian, this setting is much more challenging. The means of the Poisson distributions are given in Table 2. We also divide the rewards by the number of groups, similar to the Bernoulli 0101-Chain.

Table 2: Poisson 0101-Chain – The unscaled local reward distributions of agents $i$ and $i + 1$. Each entry shows the mean for each local arm of agents $i$ and $i + 1$.

For MAUCE, an exploration parameter must be chosen. This exploration parameter denotes the range of the observed rewards. As a Poisson distribution has unbounded support, we rely on percentiles of the reward distribution. Specifically, we choose the 95th percentile of the optimal arm’s reward distribution, i.e., the value below which 95% of that arm’s rewards fall, as the exploration parameter of MAUCE. For MATS we use non-informative Jeffreys priors on the unknown means, which for the Poisson likelihood is a Gamma prior, $\Gamma(1/2, 0)$ [26]. The results are shown in Figure 1(c).
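Under the Jeffreys prior for the Poisson likelihood, the posterior over a local mean after $n$ observations with sum $s$ is Gamma$(s + 1/2, n)$ in the shape–rate parameterization, so the sampling step again reduces to a single Gamma draw per local arm. A minimal sketch (our own illustration):

```python
import numpy as np

def sample_poisson_mean(observations, rng=None) -> float:
    """Draw one posterior sample of a Poisson mean under the Jeffreys prior.

    Prior: Gamma(1/2, 0) (improper).  Posterior after n observations summing to s:
    Gamma(shape = s + 1/2, rate = n), i.e. scale = 1 / n.
    """
    rng = rng or np.random.default_rng()
    obs = np.asarray(observations, dtype=float)
    if obs.size == 0:
        raise ValueError("the improper prior needs at least one observation")
    return rng.gamma(shape=obs.sum() + 0.5, scale=1.0 / obs.size)

# Example: a local arm observed three times with rewards 2, 0 and 4.
print(sample_poisson_mean([2, 0, 4]))
```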

Wind farm control application

We demonstrate the benefits of MATS on a state-of-the-art wind farm simulator and compare its performance to MAUCE and SCQL. A wind farm consists of a group of wind turbines, installed to extract energy from wind. From the perspective of a single turbine, aligning with the incoming wind vector usually ensures the highest productivity. However, translating this control policy directly to an entire wind farm may be sub-optimal. As wind passes through the farm, downstream turbines observe a significantly lower wind speed. This is known as the wake effect, which is due to the turbulence generated behind operational turbines.

In recent work, the possibility of deflecting wake away from the farm through rotor misalignment has been investigated [35]. While a misaligned turbine produces less energy on its own, the group’s total productivity is increased. Physically, the wake effect diminishes over long distances, and thus, turbines tend to only influence their neighbours. We can use this domain knowledge to define groups of agents and organize them in a graph structure. Note that the graph structure depends on the incoming wind vector. Nevertheless, atmospheric conditions are typically discretized when analyzing operational regimes [19]; thus, a graph structure can be constructed independently for each possible discretized incoming wind vector. We construct a graph structure for one possible wind vector.

Figure 3: Wind farm layout – (a) Dependency graph, where the nodes are the turbines and the edges describe the dependencies between the turbines; the incoming wind is denoted by an arrow. (b) Results on the wind farm control task.

We demonstrate our method on a virtual wind farm consisting of 11 turbines, of which the layout is shown in Figure 3(a). We use the state-of-the-art WISDEM FLORIS simulator [28]. For MATS, we assume the local power productions are sampled from Gaussians with unknown mean and variance, which leads to a Student’s t-distribution on the mean when using a Jeffreys prior [18]. The results for the wind farm control setting are shown in Figure 3(b).
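For completeness, a minimal sketch of the corresponding sampling step (our own illustration, not the paper's implementation): under the Jeffreys-type prior $p(\mu, \sigma^2) \propto 1/\sigma^2$, the marginal posterior of a Gaussian mean after $n$ observations is a Student's t-distribution with $n - 1$ degrees of freedom, centred at the sample mean and scaled by the sample standard error.

```python
import numpy as np

def sample_gaussian_mean(observations, rng=None) -> float:
    """Draw one posterior sample of a Gaussian mean with unknown variance.

    Under the prior p(mu, sigma^2) ~ 1/sigma^2, the marginal posterior of mu after
    n >= 2 observations is a Student's t with n - 1 degrees of freedom, location
    xbar and scale s / sqrt(n), where s is the unbiased sample standard deviation.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(observations, dtype=float)
    n = x.size
    if n < 2:
        raise ValueError("need at least two observations for a proper posterior")
    xbar, s = x.mean(), x.std(ddof=1)
    return xbar + (s / np.sqrt(n)) * rng.standard_t(df=n - 1)

# Example: posterior sample of a turbine's mean power from five noisy measurements.
print(sample_gaussian_mean([1.02, 0.97, 1.05, 0.99, 1.01]))
```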

Discussion

MATS is a Bayesian method, which means that it can leverage prior knowledge about the data distribution. This property is highly beneficial in many practical applications, e.g., influenza mitigation [25] and wind farm control [37, 24].

Both MAUCE and MATS achieve sub-linear regret in terms of time and low-order polynomial regret in terms of the number of local arms for sparse coordination graphs. However, empirically, MATS consistently outperforms MAUCE as well as SCQL. We can see that MATS solves the Bernoulli 0101-Chain problem in only a few time steps, while MAUCE still pulls many sub-optimal actions after 10000 time steps (see Figure 1(a)). In the more challenging Gem Mining problem, the cumulative regret of MAUCE is three times as high as the cumulative regret of MATS around 40000 time steps (see Figure 1(b)). In the wind farm control task, we can see that MATS allowed for a five-fold increase of the normalized power productions with respect to the state of the art (see Figure 3(b)). We argue that the high performance of MATS is due to the ability to seamlessly include domain knowledge about the shape of the reward distributions and treat the problem parameters as unknowns. To highlight the power of this property, we introduced the Poisson 0101-chain. In this setting, the reward distributions are highly skewed, for which the mean does not match the median. Therefore, in our case, since the mean falls well above 50% of all samples, it is expected that for the initially observed rewards, the true mean will be higher than the sample mean. Naturally, this bias averages out in the limit, but may have a large impact during the early exploration stage. The high standard deviations in Figure 1(c) support this impact. Although the established regret bounds of MATS and MAUCE do not apply for supergaussian reward distributions, we demonstrate that MATS exploits density information of the rewards to achieve more targeted exploration. In Figure 1(c), the cumulative regret of MATS stagnates around 7500 time steps, while the cumulative regret of MAUCE continues to increase significantly. As MAUCE only supports symmetric exploration bounds, it is challenging to correctly assess the amount of exploration needed to solve the task.

Throughout the experiments, exploration constants had to be specified for MAUCE, which were challenging to choose and interpret in terms of the density of the data. In contrast, MATS uses either statistics about the data (if available) or, potentially non-informative, beliefs defined by the user. For example, in the wind farm case, the spread of the data is unknown. MATS effectively maintains a posterior on the variance and uses it to balance exploration and exploitation, while still outperforming MAUCE with a manually calibrated exploration range (see Figure 3(b)).

Related work

Multi-agent reinforcement learning and planning with loose couplings has been investigated in sequential decision problems [17, 21, 12, 31]. In sequential settings, the value function cannot be factorized exactly. Therefore, it is challenging to provide convergence and optimality guarantees. While for planning some theoretical guarantees can be provided [31], in the learning literature the focus has been on empirical validation [21]. In this work, we focus on MAMABs, which are single-shot stateless problems. In such settings, the reward function is factored exactly into components that only depend on a subset of agents.

The combinatorial bandit [6, 7, 13, 10] is a variant of the multi-armed bandit, in which, rather than one-dimensional arms, an arm vector has to be pulled. In our work, the arms’ dimensionality corresponds to the number of agents in our system, and similarly to combinatorial bandits, the number of arms increases exponentially with this quantity. We consider a variant of this framework, called the semi-bandit problem [3], in which local components of the global reward are observable. Chen et al. (2013) constructed an algorithm for this setting that assumes access to an $(\alpha, \beta)$-approximation oracle, which, with probability $\beta$, provides a joint action that achieves at least a fraction $\alpha$ of the optimal expected reward. Instead, we assume the availability of a coordination graph, which we argue is a reasonable assumption in many multi-agent settings.

Sparse cooperative Q-learning is an algorithm that also assumes the availability of a coordination graph [20]. However, although strong experimental results are given, no theoretical guarantees are provided. Later, the UCB-like algorithm HEIST, for exploration and exploitation in MAMABs, was introduced [32], which uses a message-passing scheme for resolving coordination graphs. It provides some theoretical guarantees on the regret for problems with acyclic coordination graphs. Multi-Agent Upper-Confidence Exploration (MAUCE) [4] is a more general method that uses variable elimination to resolve (potentially cyclic) coordination graphs. MAUCE demonstrates high performance on a variety of benchmarks and provides a tight theoretical upper bound on the regret. MATS provides a Bayesian alternative to MAUCE based on Thompson sampling (TS).

Our problem definition is related to distributed constraint optimization (DCOP) problems [40]. In DCOP problems, multiple agents control a set of variables in a distributed manner under a set of constraints. The objective is the same as for a MAMAB, i.e., optimize the sum over group rewards. However, in DCOPs, the rewards are assumed to be known beforehand. The Distributed Coordination of Exploration and Exploitation (DCEE) framework [33] extends this setting to unknown rewards, but considers the optimization of the cumulative reward achieved over a time span, rather than of a single-step reward. MAMABs, or MAB-DCOPs [32], consider the optimization of a single-step expected reward over time.

In recent research on wind farm control, the impact of optimized rotor alignments on power production is heavily investigated [35]. To search for the optimal alignments within the wind farm, data-driven methods are usually adopted, where the turbines’ alignments are perturbed iteratively until they locally converge [27]. When optimizing the alignment of a wind turbine, only considering its neighbours can significantly boost the learning speed [14]. MATS is also able to leverage neighbourhood structures. In addition, rather than random perturbation of the alignments, MATS leverages an exploration-exploitation mechanism that is inspired by TS and variable elimination, which allows for a global exploration mechanism that targets the optimal alignment configuration, while retaining a small regret during the learning process itself.

Conclusions

We proposed multi-agent Thompson sampling (MATS), a novel Bayesian algorithm for multi-agent multi-armed bandits. The method exploits loose connections between agents to solve multi-agent coordination tasks efficiently. Specifically, we proved that, for $\sigma$-subgaussian rewards with bounded means, the expected cumulative regret grows sub-linearly in time and is low-order polynomial in the highest number of actions of a single agent when the coordination graph is sparse. Empirically, we showed a significant improvement over the state-of-the-art algorithm, MAUCE, on several synthetic benchmarks. Additionally, we showed that MATS can seamlessly be adapted to the available prior knowledge, and achieves state-of-the-art performance on the Poisson 0101-Chain, a new benchmark with supergaussian rewards. Finally, we demonstrated that MATS achieves high performance on a realistic wind farm control task, where the optimal rotor alignments of the wind turbines need to be jointly optimized to maximize the farm’s power production. In many practical applications, there exist sparse neighbourhood structures between agents, and we have shown that MATS is able to successfully exploit these structures, while leveraging prior knowledge about the data.

References

  • [1] S. Agrawal and N. Goyal (2012) Analysis of Thompson sampling for the multi-armed bandit problem. In Conference on Learning Theory, pp. 39.1–39.26.
  • [2] S. Agrawal and N. Goyal (2013) Further optimal regret bounds for Thompson sampling. In Artificial Intelligence and Statistics, pp. 99–107.
  • [3] J.-Y. Audibert, S. Bubeck, and G. Lugosi (2011) Minimax policies for combinatorial prediction games. In COLT, Vol. 19, pp. 107–132.
  • [4] E. Bargiacchi, T. Verstraeten, D. M. Roijers, A. Nowé, and H. van Hasselt (2018) Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems. In International Conference on Machine Learning, pp. 491–499.
  • [5] C. Boutilier (1996) Planning, learning and coordination in multiagent decision processes. In TARK 1996: Proceedings of the 6th Conference on Theoretical Aspects of Rationality and Knowledge, pp. 195–210.
  • [6] S. Bubeck and N. Cesa-Bianchi (2012) Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning 5 (1), pp. 1–122.
  • [7] N. Cesa-Bianchi and G. Lugosi (2012) Combinatorial bandits. Journal of Computer and System Sciences 78 (5), pp. 1404–1422.
  • [8] O. Chapelle and L. Li (2011) An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pp. 2249–2257.
  • [9] A. C. Chapman, D. S. Leslie, A. Rogers, and N. R. Jennings (2013) Convergent learning algorithms for unknown reward games. SIAM Journal on Control and Optimization 51 (4), pp. 3154–3180.
  • [10] W. Chen, Y. Wang, and Y. Yuan (2013) Combinatorial multi-armed bandit: general framework, results and applications. In Proceedings of the 30th International Conference on Machine Learning, pp. 151–159.
  • [11] D. Claes, F. Oliehoek, H. Baier, and K. Tuyls (2017) Decentralised online planning for multi-robot warehouse commissioning. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pp. 492–500.
  • [12] Y.-M. De Hauwere, P. Vrancx, and A. Nowé (2010) Learning multi-agent state space representations. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’10, pp. 715–722.
  • [13] Y. Gai, B. Krishnamachari, and R. Jain (2012) Combinatorial network optimization with unknown variables: multi-armed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking 20 (5), pp. 1466–1478.
  • [14] P. M. Gebraad and J. van Wingerden (2015) Maximum power-point tracking control for wind farms. Wind Energy 18 (3), pp. 429–447.
  • [15] C. E. Guestrin, D. Koller, and R. Parr (2001) Max-norm projections for factored MDPs. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI), pp. 673–682.
  • [16] C. Guestrin, D. Koller, and R. Parr (2002) Multiagent planning with factored MDPs. In Advances in Neural Information Processing Systems, pp. 1523–1530.
  • [17] C. Guestrin, S. Venkataraman, and D. Koller (2002) Context-specific multiagent coordination and planning with factored MDPs. In AAAI/IAAI, pp. 253–259.
  • [18] J. Honda and A. Takemura (2014) Optimality of Thompson sampling for Gaussian bandits depends on priors. In Artificial Intelligence and Statistics, pp. 375–383.
  • [19] International Electrotechnical Commission (2012) Wind turbines – Part 4: Design requirements for wind turbine gearboxes (No. IEC 61400-4). Accessed 6 March 2019.
  • [20] J. R. Kok and N. Vlassis (2004) Sparse cooperative Q-learning. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML ’04, New York, NY, USA.
  • [21] J. Kok and N. Vlassis (2006) Using the max-plus algorithm for multiagent decision making in coordination graphs. In RoboCup 2005: Robot Soccer World Cup IX, A. Bredenfeld, A. Jacoff, I. Noda, and Y. Takahashi (Eds.), Lecture Notes in Computer Science, Vol. 4020, pp. 1–12.
  • [22] D. Koller and R. Parr (2000) Policy iteration for factored MDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI), San Francisco, CA, USA, pp. 326–334.
  • [23] T. Lattimore and C. Szepesvári (2018) Bandit Algorithms. Preprint.
  • [24] P. Libin, T. Verstraeten, D. M. Roijers, W. Wang, K. Theys, and A. Nowé (2019) Thompson sampling for m-top exploration. In Proceedings of the IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), pp. 1414–1420.
  • [25] P. J. Libin, T. Verstraeten, D. M. Roijers, J. Grujic, K. Theys, P. Lemey, and A. Nowé (2018) Bayesian best-arm identification for selecting influenza mitigation strategies. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 456–471.
  • [26] D. Lunn, C. Jackson, N. Best, D. Spiegelhalter, and A. Thomas (2012) The BUGS Book: A Practical Introduction to Bayesian Analysis. Chapman and Hall/CRC.
  • [27] J. R. Marden, S. D. Ruben, and L. Y. Pao (2013) A model-free approach to wind farm control using game theoretic methods. IEEE Transactions on Control Systems Technology 21 (4), pp. 1207–1214.
  • [28] NREL (2019) FLORIS. Version 1.0.0. GitHub.
  • [29] C. Robert (2007) The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. Springer Science & Business Media.
  • [30] D. Russo and B. Van Roy (2014) Learning to optimize via posterior sampling. Mathematics of Operations Research 39 (4), pp. 1221–1243.
  • [31] J. Scharpff, D. M. Roijers, F. A. Oliehoek, M. T. Spaan, and M. M. de Weerdt (2016) Solving transition-independent multi-agent MDPs with sparse interactions. In AAAI 2016: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence.
  • [32] R. Stranders, L. Tran-Thanh, F. M. D. Fave, A. Rogers, and N. R. Jennings (2012) DCOPs and bandits: exploration and exploitation in decentralised coordination. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 289–296.
  • [33] M. E. Taylor, M. Jain, P. Tandon, M. Yokoo, and M. Tambe (2011) Distributed on-line multi-agent optimization under uncertainty: balancing exploration and exploitation. Advances in Complex Systems 14 (03), pp. 471–528.
  • [34] W. R. Thompson (1933) On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 25 (3/4), pp. 285–294.
  • [35] M. T. van Dijk, J. W. van Wingerden, T. Ashuri, Y. Li, and M. Rotea (2016) Yaw-misalignment and its impact on wind turbine loads and wind farm power output. Journal of Physics: Conference Series 753 (6).
  • [36] R. Vershynin (2018) High-Dimensional Probability: An Introduction with Applications in Data Science. Vol. 47, Cambridge University Press.
  • [37] T. Verstraeten, A. Nowé, J. Keller, Y. Guo, S. Sheng, and J. Helsen (2019) Fleetwide data-enabled reliability improvement of wind turbines. Renewable and Sustainable Energy Reviews 109, pp. 428–437.
  • [38] N. Vlassis, R. Elhorst, and J. R. Kok (2004) Anytime algorithms for multiagent decision making using coordination graphs. In IEEE International Conference on Systems, Man and Cybernetics, Vol. 1, pp. 953–957.
  • [39] M. A. Wiering (2000) Multi-agent reinforcement learning for traffic light control. In Machine Learning: Proceedings of the Seventeenth International Conference (ICML 2000), pp. 1151–1158.
  • [40] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara (1998) The distributed constraint satisfaction problem: formalization and algorithms. IEEE Transactions on Knowledge and Data Engineering 10 (5), pp. 673–685.

Acknowledgments

The authors would like to acknowledge FWO (Fonds Wetenschappelijk Onderzoek) for their support through the SB grants of Timothy Verstraeten (#1S47617N), Eugenio Bargiacchi (#1SA2820N) and Pieter JK Libin (#1S31916N). Diederik M Roijers was a Postdoctoral Fellow with the FWO (grant #12J0617N). This research was supported by funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme and under the VLAIO Supersized 4.0 ICON project.