No-Regret Learning in Unknown Games with Correlated Payoffs

09/18/2019
by   Pier Giuseppe Sessa, et al.
ETH Zurich

We consider the problem of learning to play a repeated multi-agent game with an unknown reward function. Single player online learning algorithms attain strong regret bounds when provided with full information feedback, which unfortunately is unavailable in many real-world scenarios. Bandit feedback alone, i.e., observing outcomes only for the selected action, yields substantially worse performance. In this paper, we consider a natural model where, besides a noisy measurement of the obtained reward, the player can also observe the opponents' actions. This feedback model, together with a regularity assumption on the reward function, allows us to exploit the correlations among different game outcomes by means of Gaussian processes (GPs). We propose a novel confidence-bound based bandit algorithm GP-MW, which utilizes the GP model for the reward function and runs a multiplicative weight (MW) method. We obtain novel kernel-dependent regret bounds that are comparable to the known bounds in the full information setting, while substantially improving upon the existing bandit results. We experimentally demonstrate the effectiveness of GP-MW in random matrix games, as well as real-world problems of traffic routing and movie recommendation. In our experiments, GP-MW consistently outperforms several baselines, while its performance is often comparable to methods that have access to full information feedback.

Code repository: noregretgames. Code associated with the paper "No-Regret Learning in Unknown Games with Correlated Payoffs", P.G. Sessa, I. Bogunovic, M. Kamgarpour, A. Krause, NeurIPS 2019.

1 Introduction

Many real-world problems, such as traffic routing leblanc1975 , market prediction fainmesser2012 , and social network dynamics skyrms2009 , involve multiple learning agents that interact and compete with each other. Such problems can be described as repeated games, in which the goal of every agent is to maximize her cumulative reward. In most cases, the underlying game is unknown to the agents, and the only way to learn about it is by repeatedly playing and observing the corresponding game outcomes.

The performance of an agent in a repeated game is often measured in terms of regret. For example, in traffic routing, the regret of an agent quantifies the reduction in travel time had the agent known the routes chosen by the other agents. No-regret algorithms for playing unknown repeated games exist, and their performance depends on the information available at every round. In the case of full information feedback, the agent observes the obtained reward, as well as the rewards of other non-played actions. While these algorithms attain strong regret guarantees, such full information feedback is often unrealistic in real-world applications. In traffic routing, for instance, agents only observe the incurred travel times and cannot observe the travel times for the routes not chosen.

In this paper, we address this challenge by considering a more realistic feedback model, where at every round of the game, the agent plays an action and observes the noisy reward outcome. In addition to this bandit feedback, the agent also observes the actions played by other agents. Under this feedback model and further regularity assumptions on the reward function, we present a novel no-regret algorithm for playing unknown repeated games. The proposed algorithm alleviates the need for full information feedback while still achieving comparable regret guarantees.

Table 1: Regret bounds for a finite action set, depending on the feedback observed by player $i$ at each time step. The time horizon is denoted by $T$, $K_i$ is the number of actions available to player $i$, and the kernel-dependent quantity $\gamma_T$ (defined in Section 3) captures the degrees of freedom in the reward function.

Hedge freund1997: feedback = rewards for all actions; regret = $\mathcal{O}(\sqrt{T \log K_i})$
Exp3 auer2003: feedback = obtained reward; regret = $\mathcal{O}(\sqrt{T K_i \log K_i})$
GP-MW [this paper]: feedback = obtained reward + opponents' actions; regret = $\mathcal{O}(\sqrt{T \log K_i} + \gamma_T \sqrt{T})$

Related Work. In the full information setting, multiplicative-weights (MW) algorithms littlestone1994 such as Hedge freund1997 attain an optimal $\mathcal{O}(\sqrt{T \log K_i})$ regret, where $K_i$ is the number of actions available to agent $i$. In the case of convex action sets in $\mathbb{R}^{d_i}$, and convex and Lipschitz rewards, online convex optimization algorithms attain an optimal $\mathcal{O}(\sqrt{T})$ regret zinkevich2003. By only assuming Lipschitz rewards and bounded action sets, sublinear regret follows from maillard2010, while in hazan2017 the authors provide efficient gradient-based algorithms with 'local' regret guarantees. Full information feedback requires perfect knowledge of the game and is unrealistic in many applications. Our proposed algorithm overcomes this limitation while achieving comparable regret bounds.

In the more challenging bandit setting, existing algorithms have a substantially worse dependence on the size of the action set. For finite actions, Exp3 auer2003 and its variants ensure an $\mathcal{O}(\sqrt{T K_i \log K_i})$ regret, which is optimal up to logarithmic factors. In the case of convex action sets, and convex and Lipschitz rewards, bandit algorithms attain $\tilde{\mathcal{O}}(\mathrm{poly}(d_i)\sqrt{T})$ regret bubeck2017, while in the case of Lipschitz rewards $\tilde{\mathcal{O}}(T^{(d_i+1)/(d_i+2)})$ regret can be obtained slivkins2014. In contrast, our algorithm works in the noisy bandit setting and requires knowledge of the actions played by the other agents. This allows us, under some regularity assumptions, to obtain substantially improved performance. In Table 1, we summarize the regret and feedback model of our algorithm together with the existing no-regret algorithms.

The previously mentioned online algorithms reduce the unknown repeated game to a single-agent problem against an adversarial and adaptive environment that selects a different reward function at every time step cesa-bianchi2006. A fact not exploited by these algorithms is that, in a repeated game, the rewards obtained at different time steps are correlated through a static unknown reward function. In syrgkanis2015 the authors use this fact to show that, if every agent uses a regularized no-regret algorithm, their individual regret grows at a lower rate of $\mathcal{O}(T^{1/4})$, while the sum of their regrets grows only as $\mathcal{O}(1)$. In contrast to syrgkanis2015, we focus on the single-player viewpoint, and we do not make any assumption on the opponents' strategies (in fact, they are allowed to be adaptive and adversarial). Instead, we show that by observing the opponents' actions, the agent can exploit the structure of the reward function to reduce her individual regret.

Contributions. We propose a novel no-regret bandit algorithm GP-MW for playing unknown repeated games. GP-MW combines the ideas of the multiplicative weights update method littlestone1994 with GP upper confidence bounds, a powerful tool used in GP bandit algorithms (e.g., srinivas2010; bogunovic2016truncated). When a finite number $K_i$ of actions is available to player $i$, we provide a novel high-probability regret bound $\mathcal{O}(\sqrt{T \log K_i} + \gamma_T \sqrt{T})$, which depends on a kernel-dependent quantity $\gamma_T$ srinivas2010. For common kernel choices, this results in a sublinear regret bound, which grows only logarithmically in $K_i$. In the case of infinite action subsets of $\mathbb{R}^{d_i}$ and Lipschitz rewards, via a discretization argument, we obtain a high-probability regret bound of $\tilde{\mathcal{O}}(\sqrt{d_i T} + \gamma_T \sqrt{T})$. We experimentally demonstrate that GP-MW outperforms existing bandit baselines in random matrix games and traffic routing problems. Moreover, we present an application of GP-MW to a novel robust Bayesian optimization setting in which our algorithm performs favourably in comparison to other baselines.

2 Problem Formulation

We consider a repeated static game among $N$ non-cooperative agents, or players. Each player $i$ has an action set $\mathcal{A}^i$ and a reward function $r^i : \mathcal{A}^1 \times \cdots \times \mathcal{A}^N \to [0,1]$. We assume that the reward function $r^i$ is unknown to player $i$. At every time $t$, players simultaneously choose actions $a_t = (a^1_t, \ldots, a^N_t)$ and player $i$ obtains a reward $r^i(a^i_t, a^{-i}_t)$, which depends on the played action $a^i_t$ and the actions $a^{-i}_t$ of all the other players. The goal of player $i$ is to maximize the cumulative reward $\sum_{t=1}^T r^i(a^i_t, a^{-i}_t)$. After $T$ time steps, the regret of player $i$ is defined as

$$R^i(T) = \max_{a \in \mathcal{A}^i} \sum_{t=1}^T r^i(a, a^{-i}_t) \;-\; \sum_{t=1}^T r^i(a^i_t, a^{-i}_t), \qquad (1)$$

i.e., the maximum gain the player could have achieved by playing the single best fixed action in case the sequence of opponents' actions $a^{-i}_1, \ldots, a^{-i}_T$ and the reward function $r^i$ were known in hindsight. An algorithm is no-regret for player $i$ if $R^i(T)/T \to 0$ as $T \to \infty$ for any sequence $a^{-i}_1, \ldots, a^{-i}_T$.
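For concreteness, the quantity in (1) can be evaluated in hindsight as in the minimal sketch below; `reward`, `played_actions`, `opponent_actions`, and `action_set` are illustrative names, and the reward function is of course unknown to the player while the game is being played.

```python
def regret_in_hindsight(reward, played_actions, opponent_actions, action_set):
    """Regret (1): payoff of the best fixed action in hindsight minus the realized payoff.

    reward(a, a_minus_i) -> float plays the role of r^i, while played_actions and
    opponent_actions are the recorded histories over T rounds.
    """
    realized = sum(reward(a, b) for a, b in zip(played_actions, opponent_actions))
    best_fixed = max(sum(reward(a, b) for b in opponent_actions) for a in action_set)
    return best_fixed - realized
```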

First, we consider the case of a finite number of available actions, i.e., $|\mathcal{A}^i| = K_i$. To achieve no-regret, the player should play mixed strategies cesa-bianchi2006, i.e., probability distributions $\mathbf{w}^i_t$ over $\mathcal{A}^i$. With full-information feedback, at every time $t$ player $i$ observes the vector of rewards $[r^i(a, a^{-i}_t)]_{a \in \mathcal{A}^i}$. With bandit feedback, only the obtained reward $r^i(a^i_t, a^{-i}_t)$ is observed by the player. Existing full information and bandit algorithms freund1997; auer2003 reduce the repeated game to a sequential decision-making problem between player $i$ and an adaptive environment that, at each time $t$, selects a reward function $r^i_t : \mathcal{A}^i \to [0,1]$. In a repeated game, the reward that player $i$ observes at time $t$ is a static fixed function of the joint outcome, i.e., $r^i_t(a^i_t) = r^i(a^i_t, a^{-i}_t)$, and in many practical settings similar game outcomes lead to similar rewards (see, e.g., the traffic routing application in Section 4.2). In contrast to existing approaches, we exploit such correlations by considering the feedback and reward function models described below.

Feedback model. We consider a noisy bandit feedback model where, at every time $t$, player $i$ observes a noisy measurement of the obtained reward, $\tilde{r}^i_t = r^i(a^i_t, a^{-i}_t) + \epsilon^i_t$, where the noise $\epsilon^i_t$ is $\sigma_i$-sub-Gaussian, i.e., $\mathbb{E}[\exp(\lambda \epsilon^i_t)] \le \exp(\lambda^2 \sigma_i^2 / 2)$ for all $\lambda \in \mathbb{R}$, with independence over time. The presence of noise is typical in real-world applications, since perfect measurements are unrealistic, e.g., measured travel times in traffic routing.

Besides the standard noisy bandit feedback, we assume player $i$ also observes the played actions $a^{-i}_t$ of all the other players. In some applications, the reward function $r^i$ depends only indirectly on $a^{-i}$ through some aggregative function $\phi(a^{-i})$. For example, in traffic routing leblanc1975, $\phi(a^{-i})$ represents the congestion on the network's edges, while in network games jackson2015, it represents the strategies of player $i$'s neighbours. In such cases, it is sufficient for the player to observe $\phi(a^{-i}_t)$ instead of $a^{-i}_t$.

Regularity assumption on rewards. In this work, we assume the unknown reward function $r^i$ has a bounded norm in a reproducing kernel Hilbert space (RKHS) associated with a positive semi-definite kernel function $k(\cdot, \cdot)$ that satisfies $k(z, z') \le 1$ for all game outcomes $z, z' \in \mathcal{A}^1 \times \cdots \times \mathcal{A}^N$. The RKHS norm $\|r^i\|_k$ measures the smoothness of $r^i$ with respect to the kernel function $k$, while the kernel encodes the similarity between two different outcomes of the game. Typical kernel choices are polynomial, Squared Exponential (SE), and Matérn:

$$k_{\mathrm{poly}}(z, z') = \big(b + z^\top z'\big)^{n}, \qquad k_{\mathrm{SE}}(z, z') = \exp\!\Big(-\tfrac{\|z - z'\|^2}{2 l^2}\Big),$$
$$k_{\mathrm{Mat}}(z, z') = \frac{2^{1-\nu}}{\Gamma(\nu)} \Big(\tfrac{\sqrt{2\nu}\,\|z - z'\|}{l}\Big)^{\nu} B_\nu\!\Big(\tfrac{\sqrt{2\nu}\,\|z - z'\|}{l}\Big),$$

where $B_\nu$ is the modified Bessel function, and $l, n, \nu$ are kernel hyperparameters (rasmussen2005, Section 4). This is a standard smoothness assumption used in kernelized bandits and Bayesian optimization (e.g., srinivas2010; chowdhury2017). In our context it allows player $i$ to use the observed history of play to learn about $r^i$ and predict unseen game outcomes. Our results are not restricted to any specific kernel function, and depending on the application at hand, various kernels can be used to model different types of reward functions. Moreover, composite kernels (see e.g., krause2011) can be used to encode the differences in the structural dependence of $r^i$ on $a^i$ and $a^{-i}$.
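As a small illustration of these kernel choices, the sketch below instantiates SE, Matérn, and polynomial kernels with scikit-learn's GP kernel classes; the hyperparameter values and the random outcomes `Z` are placeholders, not values used in the paper.

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Matern, DotProduct

# Game outcomes z = (a^i, a^{-i}) encoded as real vectors (random placeholders here).
Z = np.random.rand(5, 4)

k_se = RBF(length_scale=1.0)                 # squared exponential kernel
k_matern = Matern(length_scale=1.0, nu=2.5)  # Matern kernel with smoothness nu
k_poly = DotProduct(sigma_0=1.0) ** 3        # polynomial kernel of degree 3

for name, k in [("SE", k_se), ("Matern", k_matern), ("poly", k_poly)]:
    K = k(Z)                                 # Gram matrix K[j, l] = k(z_j, z_l)
    print(name, K.shape)
```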

It is well known that Gaussian process models can be used to learn functions with bounded RKHS norm srinivas2010; chowdhury2017. A GP is a probability distribution over functions $f \sim GP(\mu(\cdot), k(\cdot,\cdot))$, specified by its mean and covariance functions $\mu(\cdot)$ and $k(\cdot,\cdot)$, respectively. Given a history of measurements $\{y_j\}_{j=1}^t$ at points $\{z_j\}_{j=1}^t$ with $y_j = f(z_j) + \epsilon_j$ and $\epsilon_j \sim \mathcal{N}(0, \sigma^2)$, the posterior distribution under a $GP(0, k(\cdot,\cdot))$ prior is also Gaussian, with mean and variance functions:

$$\mu_t(z) = k_t(z)^\top (K_t + \sigma^2 I)^{-1} y_t, \qquad (2)$$
$$\sigma_t^2(z) = k(z, z) - k_t(z)^\top (K_t + \sigma^2 I)^{-1} k_t(z), \qquad (3)$$

where $k_t(z) = [k(z_1, z), \ldots, k(z_t, z)]^\top$, $y_t = [y_1, \ldots, y_t]^\top$, and $K_t = [k(z_j, z_{j'})]_{j,j'}$ is the kernel matrix.

At time $t$, an upper confidence bound on $r^i$ can be obtained as:

$$UCB_t(z) := \mu_{t-1}(z) + \beta_t\, \sigma_{t-1}(z), \qquad (4)$$

where $\beta_t$ is a parameter that controls the width of the confidence bound and ensures $UCB_t(z) \ge r^i(z)$, for all $z$ and $t \ge 1$, with high probability srinivas2010. We make this statement precise in Theorem 3.
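The posterior equations (2)-(3) and the upper confidence bound (4) can be sketched directly in a few lines of NumPy; this is a minimal illustration rather than the authors' implementation, and the names `gp_posterior`, `Z_hist`, and `noise_var` are assumptions for this example.

```python
import numpy as np

def gp_posterior(k, Z_hist, y_hist, noise_var=0.1):
    """Posterior mean/variance (2)-(3) under a GP(0, k) prior.

    k(z, z') -> float is the kernel, Z_hist the observed outcomes z_1..z_t,
    and y_hist the corresponding noisy rewards.
    """
    K = np.array([[k(z, zp) for zp in Z_hist] for z in Z_hist])
    K_inv = np.linalg.inv(K + noise_var * np.eye(len(Z_hist)))
    y = np.array(y_hist)

    def mu(z):
        k_t = np.array([k(zj, z) for zj in Z_hist])
        return float(k_t @ K_inv @ y)

    def sigma(z):
        k_t = np.array([k(zj, z) for zj in Z_hist])
        return float(np.sqrt(max(k(z, z) - k_t @ K_inv @ k_t, 0.0)))

    return mu, sigma

def ucb(z, mu, sigma, beta):
    """Upper confidence bound (4): mu_{t-1}(z) + beta_t * sigma_{t-1}(z)."""
    return mu(z) + beta * sigma(z)
```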

Due to the above regularity assumptions and feedback model, player $i$ can use the history of play to compute an upper confidence bound of the unknown reward function by using (4). In the next section, we present our algorithm, which makes use of $UCB_t$ to simulate full information feedback.

3 The GP-MW Algorithm

We now introduce GP-MW, a novel no-regret bandit algorithm, which can be used by a generic player $i$ (see Algorithm 1). GP-MW maintains a probability distribution (or mixed strategy) $\mathbf{w}^i_t$ over $\mathcal{A}^i$ and updates it at every time step using a multiplicative-weight (MW) subroutine (see (6)) that requires full information feedback. Since such feedback is not available, GP-MW builds (in (5)) an optimistic estimate of the true reward of every action via the upper confidence bound $UCB_t$ of $r^i$. Moreover, since rewards are bounded in $[0,1]$, the algorithm makes use of the truncated estimate $\min\{1, UCB_t(\cdot)\}$. At every time step $t$, GP-MW plays an action $a^i_t$ sampled from $\mathbf{w}^i_t$, and uses the noisy reward observation $\tilde{r}^i_t$ and the actions $a^{-i}_t$ played by the other players to compute the updated upper confidence bound $UCB_{t+1}$.

Input: Set of actions $\mathcal{A}^i$, GP prior $(\mu_0, \sigma_0, k)$, parameters $\{\beta_t\}$, $\{\eta_t\}$

1: Initialize: $\mathbf{w}^i_1 = \frac{1}{K_i}[1, \ldots, 1]^\top$
2: for $t = 1, 2, \ldots, T$ do
3:     Sample action $a^i_t \sim \mathbf{w}^i_t$
4:     Observe noisy reward $\tilde{r}^i_t$ and opponents' actions $a^{-i}_t$: $\tilde{r}^i_t = r^i(a^i_t, a^{-i}_t) + \epsilon^i_t$
5:     Compute optimistic reward estimates $\hat{r}^i_t \in [0,1]^{K_i}$:
           $[\hat{r}^i_t]_a = \min\{1,\, UCB_t(a, a^{-i}_t)\}$ for every $a \in \mathcal{A}^i$   (5)
6:     Update mixed strategy:
           $[\mathbf{w}^i_{t+1}]_a \propto [\mathbf{w}^i_t]_a \cdot \exp\!\big(-\eta_t (1 - [\hat{r}^i_t]_a)\big)$ for every $a \in \mathcal{A}^i$   (6)
7:     Update $\mu_t, \sigma_t$ according to (2)-(3) by appending $\big((a^i_t, a^{-i}_t), \tilde{r}^i_t\big)$ to the history of play.
8: end for
Algorithm 1 The GP-MW algorithm for player $i$
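A compact sketch of the GP-MW loop is given below. It reuses the `gp_posterior` routine sketched in Section 2, and `play_round` is a hypothetical environment callback (returning the noisy reward and the opponents' actions); the learning-rate schedule in the comment is one common choice for Hedge-style updates, not necessarily the paper's exact constant.

```python
import numpy as np

def gp_mw(actions, kernel, T, beta, play_round, noise_var=0.1, seed=0):
    """Sketch of Algorithm 1 (GP-MW) for a finite action set.

    kernel(z, z') -> float acts on joint outcomes z = (a_i, a_minus_i),
    beta(t) -> float is the confidence width, and play_round(a_i) returns
    (noisy_reward, a_minus_i) for the current round.
    """
    rng = np.random.default_rng(seed)
    K_i = len(actions)
    w = np.ones(K_i) / K_i                                  # mixed strategy w_1^i
    mu = lambda z: 0.0                                      # GP(0, k) prior mean
    sigma = lambda z: float(np.sqrt(kernel(z, z)))          # prior standard deviation
    Z_hist, y_hist = [], []

    for t in range(1, T + 1):
        eta = np.sqrt(8.0 * np.log(K_i) / t)                # MW learning rate (one common choice)
        idx = rng.choice(K_i, p=w)                          # sample a_t^i ~ w_t^i
        y, a_minus = play_round(actions[idx])               # observe noisy reward and a_t^{-i}
        # Optimistic reward estimates (5), using the posterior from rounds 1..t-1.
        r_hat = np.array([min(1.0, mu((a, a_minus)) + beta(t) * sigma((a, a_minus)))
                          for a in actions])
        # Multiplicative-weights update (6), written with the losses 1 - r_hat.
        w = w * np.exp(-eta * (1.0 - r_hat))
        w = w / w.sum()
        # Append ((a_t^i, a_t^{-i}), y_t) to the history and refit the posterior (2)-(3).
        Z_hist.append((actions[idx], a_minus))
        y_hist.append(y)
        mu, sigma = gp_posterior(kernel, Z_hist, y_hist, noise_var)
    return w
```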

In Theorem 3, we present a high-probability regret bound for GP-MW, while all the proofs of this section can be found in the supplementary material. The obtained bound depends on the maximum information gain, a kernel-dependent quantity defined as:

$$\gamma_t := \max_{z_1, \ldots, z_t} \tfrac{1}{2} \log \det\!\big(I_t + \sigma^{-2} K_t\big).$$

It quantifies the maximal reduction in uncertainty about $r^i$ after observing $t$ outcomes and the corresponding noisy rewards. The result of srinivas2010 shows that this quantity is sublinear in $t$ for commonly used kernels, e.g., $\gamma_t = \mathcal{O}(d \log t)$ for the linear kernel and $\gamma_t = \mathcal{O}\big((\log t)^{d+1}\big)$ for the squared exponential kernel, where $d$ is the total dimension of the outcomes $z = (a^1, \ldots, a^N)$, i.e., $d = \sum_{i=1}^N d_i$.
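The following small sketch evaluates the information-gain expression $\tfrac{1}{2}\log\det(I + \sigma^{-2}K)$ for a fixed set of outcomes (the maximization over sets in the definition of $\gamma_t$ is omitted); the kernel, noise level, and data are illustrative placeholders.

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

def information_gain(Z, kernel=RBF(length_scale=1.0), noise_var=0.1):
    """0.5 * log det(I + noise_var^{-1} K) for the outcomes in the rows of Z."""
    K = kernel(Z)
    _, logdet = np.linalg.slogdet(np.eye(len(Z)) + K / noise_var)
    return 0.5 * logdet

Z = np.random.rand(50, 3)   # 50 game outcomes in R^3 (placeholder data)
print(information_gain(Z))
```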

Theorem 3. Fix $\delta \in (0,1)$ and assume the noise terms $\epsilon^i_t$ are $\sigma_i$-sub-Gaussian with independence over time. For any $r^i$ such that $\|r^i\|_k \le B$, if player $i$ plays actions from $\mathcal{A}^i$, $|\mathcal{A}^i| = K_i$, according to GP-MW with $\beta_t = B + \sqrt{2(\gamma_{t-1} + \log(2/\delta))}$ and a suitably chosen learning rate $\eta_t \propto \sqrt{(\log K_i)/t}$, then with probability at least $1 - \delta$,

$$R^i(T) = \mathcal{O}\Big(\sqrt{T \log K_i} + \sqrt{T \log(2/\delta)} + \beta_T \sqrt{T\, \gamma_T}\Big).$$
The proof of this theorem follows from decomposing the regret of GP-MW into the sum of two terms. The first term corresponds to the regret that player $i$ incurs with respect to the sequence of computed upper confidence bounds. The second term is due to not knowing the true reward function $r^i$. The proof of Theorem 3 then proceeds by bounding the first term using standard results from adversarial online learning cesa-bianchi2006, while the second term is upper bounded by using regret bounding techniques from GP optimization srinivas2010; bogunovic2018.
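Schematically, with $\bar{a}$ the best fixed action in hindsight, $\hat{r}^i_t$ the truncated optimistic estimates from (5), and constants omitted, the decomposition reads as follows (a sketch, valid on the event that the confidence bounds hold):

$$R^i(T) \;\le\; \underbrace{\sum_{t=1}^T [\hat{r}^i_t]_{\bar{a}} - \sum_{t=1}^T [\hat{r}^i_t]_{a^i_t}}_{\text{MW regret w.r.t. the optimistic rewards}} \;+\; \underbrace{\sum_{t=1}^T \Big([\hat{r}^i_t]_{a^i_t} - r^i(a^i_t, a^{-i}_t)\Big)}_{\text{confidence width, at most } 2\beta_t \sigma_{t-1}(a^i_t, a^{-i}_t) \text{ per round}}.$$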

Theorem 3 can be made more explicit by substituting bounds on $\gamma_T$. For instance, in the case of the squared exponential kernel, the regret bound becomes $\mathcal{O}\big(\sqrt{T \log K_i} + \sqrt{T}\,(\log T)^{d+1}\big)$, up to factors depending on $\delta$ and $B$. In comparison to the standard multi-armed bandit regret bound $\mathcal{O}(\sqrt{T K_i \log K_i})$ (e.g., auer2003), this regret bound depends only logarithmically on $K_i$, similarly to the ideal full information setting.

The case of continuous action sets

In this section, we consider the case when $\mathcal{A}^i$ is a (continuous) compact subset of $\mathbb{R}^{d_i}$. In this case, further assumptions are required on $\mathcal{A}^i$ and $r^i$ to achieve sublinear regret. Hence, we assume $\mathcal{A}^i \subset [0, b]^{d_i}$ to be a bounded set and $r^i$ to be Lipschitz continuous in $a^i$. Under the same assumptions, existing regret bounds are $\tilde{\mathcal{O}}(\sqrt{d_i T})$ and $\tilde{\mathcal{O}}(T^{(d_i+1)/(d_i+2)})$ in the full information maillard2010 and bandit setting slivkins2014, respectively. By using a discretization argument, we obtain the following high-probability regret bound for GP-MW.

Corollary 3. Fix $\delta \in (0,1)$ and let the noise terms $\epsilon^i_t$ be $\sigma_i$-sub-Gaussian with independence over time. Assume $\mathcal{A}^i \subset [0, b]^{d_i}$, $\|r^i\|_k \le B$, and $r^i$ is $L$-Lipschitz in its first argument, and consider a discretization $[\mathcal{A}^i]_T \subset \mathcal{A}^i$ such that $\|a - [a]_T\| \le \frac{1}{L\sqrt{T}}$ for every $a \in \mathcal{A}^i$, where $[a]_T$ is the closest point to $a$ in $[\mathcal{A}^i]_T$. If player $i$ plays actions from $[\mathcal{A}^i]_T$ according to GP-MW with $\beta_t$ and $\eta_t$ chosen as in Theorem 3 (with $|[\mathcal{A}^i]_T|$ in place of $K_i$), then with probability at least $1 - \delta$,

$$R^i(T) = \tilde{\mathcal{O}}\Big(\sqrt{d_i T} + \beta_T \sqrt{T\, \gamma_T}\Big).$$

By substituting bounds on $\gamma_T$, our bound becomes $\tilde{\mathcal{O}}\big(\sqrt{d_i T} + \sqrt{T}\,(\log T)^{d+1}\big)$ in the case of the SE kernel (for fixed $d$). Such a bound has a strictly better dependence on $T$ than the existing bandit bound $\tilde{\mathcal{O}}(T^{(d_i+1)/(d_i+2)})$ from slivkins2014. Similarly to slivkins2014; maillard2010, the algorithm resulting from Corollary 3 is not efficient in high-dimensional settings, as its computational complexity is exponential in $d_i$.
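A discretization of the kind used in Corollary 3 can be instantiated, for example, as a uniform grid. The sketch below (with illustrative parameter values) builds such a grid; the accuracy `eps` would be chosen from the Lipschitz constant and the horizon.

```python
import itertools
import numpy as np

def uniform_grid(b, d, eps):
    """Uniform grid on [0, b]^d with per-axis spacing 2 * eps / d, so that any
    point of the cube is within (roughly) l1-distance eps of some grid point."""
    spacing = 2.0 * eps / d
    per_axis = np.arange(spacing / 2.0, b, spacing)
    return np.array(list(itertools.product(per_axis, repeat=d)))

grid = uniform_grid(b=1.0, d=2, eps=0.05)
print(grid.shape)   # the grid size grows exponentially in d
```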

4 Experiments

In this section, we consider random matrix games and a traffic routing model and compare GP-MW with the existing algorithms for playing repeated games. Then, we show an application of GP-MW to robust BO and compare it with existing baselines on a movie recommendation problem.

4.1 Repeated random matrix games

We consider a repeated matrix game between two players with actions $\mathcal{A}^1 = \mathcal{A}^2 = \{1, \ldots, K\}$ and payoff matrices $A^1, A^2 \in \mathbb{R}^{K \times K}$. At every time step, each player $i$ receives a payoff $r^i(a^1_t, a^2_t) = [A^i]_{a^1_t, a^2_t}$, where $[A^i]_{j,l}$ indicates the $(j,l)$-th entry of matrix $A^i$. We generate random payoff matrices, corrupt the observed payoffs with Gaussian noise, and repeat each experiment over multiple randomly generated game instances. For every game, we distinguish between two settings:

Against random opponent. In this setting, player-2 plays actions uniformly at random from $\mathcal{A}^2$ at every round $t$, while player-1 plays according to a no-regret algorithm. In Figure 1(a), we compare the time-averaged regret of player-1 when playing according to Hedge freund1997, Exp3.P auer2003, and GP-MW. Our algorithm is run with the true prior over the reward function, while Hedge receives (unrealistic) noiseless full information feedback (at every round $t$) and leads to the lowest regret. When only the noisy bandit feedback is available, GP-MW significantly outperforms Exp3.P.

GP-MW vs Exp3.P. Here, player-1 plays according to GP-MW while player-2 is an adaptive adversary and plays using Exp3.P. In Figure 1(b), we compare the regret of the two players averaged over the game instances. GP-MW outperforms Exp3.P and ensures player-1 a smaller regret.

Figure 1: GP-MW leads to smaller regret compared to Exp3.P. Hedge is an idealized benchmark which upper bounds the achievable performance. Shaded areas represent one standard deviation. (a) Against random opponent. (b) GP-MW vs. Exp3.P.
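For completeness, one way to generate payoff matrices with correlated entries, consistent with the GP reward model assumed by GP-MW, is sketched below. This is an illustrative construction only; the exact generation procedure and parameters used in the experiments are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

def random_payoff_matrix(K, length_scale=5.0, rng=np.random.default_rng(0)):
    """Sample a K x K payoff matrix whose entries are correlated through a GP
    prior with an SE kernel over the (row, column) index pairs, rescaled to [0, 1]."""
    idx = np.array([(j, l) for j in range(K) for l in range(K)], dtype=float)
    cov = RBF(length_scale=length_scale)(idx)
    vals = rng.multivariate_normal(np.zeros(K * K), cov + 1e-8 * np.eye(K * K))
    A = vals.reshape(K, K)
    return (A - A.min()) / (A.max() - A.min())

A1 = random_payoff_matrix(10)
A2 = random_payoff_matrix(10, rng=np.random.default_rng(1))
```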

4.2 Repeated traffic routing

We consider the Sioux Falls road network leblanc1975; website_transp_test, a standard benchmark model in the transportation literature. The network is a directed graph with 24 nodes and 76 edges ($|E| = 76$). In this experiment, every agent $i$ seeks to send a number of units $u^i$ from a given origin to a given destination node. To do so, agent $i$ can choose among $K_i$ possible routes, each consisting of a subset of the network edges $E$. A route chosen by agent $i$ corresponds to an action $a^i \in \{0,1\}^{|E|}$, with $[a^i]_e = 1$ in case edge $e$ belongs to the route and $[a^i]_e = 0$ otherwise. The goal of each agent is to minimize the travel time weighted by the number of units $u^i$. The travel time of an agent is unknown and depends on the total occupancy of the traversed edges within the chosen route. Hence, the travel time increases when more agents use the same edges. The number of units for every agent, as well as the travel time functions for each edge, are taken from leblanc1975; website_transp_test. A more detailed description of our experimental setup is provided in Appendix C.

We consider a repeated game, where agents choose routes using one of the following algorithms:


  • Hedge. To run Hedge, each agent has to observe the travel time incurred had she chosen any different route. This requires knowing the exact travel time functions. Although these assumptions are unrealistic, we use Hedge as an idealized benchmark.

  • Exp3.P. In the case of Exp3.P, agents only need to observe their incurred travel time. This corresponds to the standard bandit feedback.

  • GP-MW. Let $\phi(a^{-i}_t)$ be the total occupancy (by the other agents) of the network edges at time $t$. To run GP-MW, agent $i$ needs to observe a noisy measurement of the incurred travel time as well as the corresponding $\phi(a^{-i}_t)$.

  • Q-BRI (Q-learning Better Replies with Inertia algorithm monderer1996 ). This algorithm requires the same feedback as GP-MW and is proven to asymptotically converge to a Nash equilibrium (as the considered game is a potential game chapman2013 ). We use the same set of algorithm parameters as in chapman2013 .

For every agent $i$ to run GP-MW, we use a composite kernel $k^i$ defined over pairs $\big(a^i, \phi(a^{-i})\big)$, obtained by combining a linear kernel $k_1$ on the agent's own route $a^i$ with a polynomial kernel $k_2$ on the aggregate edge occupancy $\phi(a^{-i})$.
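A minimal sketch of one way to build such a composite kernel follows; the product combination and the polynomial degree are illustrative assumptions, not necessarily the exact choices made in the experiments.

```python
import numpy as np

def linear_kernel(a, b):
    return float(np.dot(a, b))

def polynomial_kernel(x, y, degree=4, c=1.0):
    return float((c + np.dot(x, y)) ** degree)

def composite_kernel(z1, z2):
    """z = (a_i, phi), where a_i is the agent's 0/1 route vector and phi the
    total edge occupancy by the other agents."""
    (a1, phi1), (a2, phi2) = z1, z2
    return linear_kernel(a1, a2) * polynomial_kernel(phi1, phi2)
```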

Figure 2: GP-MW leads to a significantly smaller average regret compared to Exp3.P and Q-BRI and improves the overall congestion in the network. Hedge represents an idealized full information benchmark which upper bounds the achievable performance.

First, we consider a random subset of the agents that we refer to as learning agents. These agents choose actions (routes) according to the aforementioned no-regret algorithms for $T$ game rounds. The remaining non-learning agents simply choose the shortest route, ignoring the presence of the other agents. In Figure 2 (top plots), we compare the average regret (expressed in hours) of the learning agents when they use the different no-regret algorithms. We also show the associated average congestion in the network (see (13) in Appendix C for a formal definition). When playing according to GP-MW, agents incur significantly smaller regret and the overall congestion is reduced in comparison to Exp3.P and Q-BRI.

In our second experiment, we consider the same setup as before, but we vary the number of learning agents. In Figure 2 (bottom plots), we show the final (at $t = T$) average regret and congestion as a function of the number of learning agents. We observe that GP-MW systematically leads to a smaller regret and reduced congestion in comparison to Exp3.P and Q-BRI. Moreover, as the number of learning agents increases, both Hedge and GP-MW reduce the congestion in the network, while this is not the case with Exp3.P or Q-BRI (due to their slower convergence).

4.3 GP-MW and robust Bayesian optimization

In this section, we apply GP-MW to a novel robust Bayesian Optimization (BO) setting, similar to the one considered in bogunovic2018 . The goal is to optimize an unknown function (under the same regularity assumptions as in Section 2) from a sequence of queries and corresponding noisy observations. Very often, the actual queried points may differ from the selected ones due to various input perturbations, or the function may depend on external parameters that cannot be controlled (see bogunovic2018 for examples).

This scenario can be modelled via a two-player repeated game, where the player is competing against an adversary. The unknown reward function is given by $r : \mathcal{X} \times \Delta \to \mathbb{R}$. At every round of the game, the player selects a point $x_t \in \mathcal{X}$, and the adversary chooses a parameter $\delta_t \in \Delta$. The player then observes the parameter $\delta_t$ and a noisy estimate of the reward: $y_t = r(x_t, \delta_t) + \epsilon_t$. After $T$ time steps, the player incurs the regret

$$R(T) = \max_{x \in \mathcal{X}} \sum_{t=1}^T r(x, \delta_t) \;-\; \sum_{t=1}^T r(x_t, \delta_t).$$

Note that both the regret definition and the feedback model are the same as in Section 2.

In the standard (non-adversarial) Bayesian optimization setting, the GP-UCB algorithm srinivas2010 ensures no-regret. On the other hand, the StableOpt algorithm bogunovic2018 attains strong regret guarantees against a worst-case adversary that perturbs the final reported point. Here, instead, we consider the case where the adversary is adaptive at every time $t$, i.e., it can adapt to the past selected points $x_1, \ldots, x_{t-1}$. We note that both GP-UCB and StableOpt fail to achieve no-regret in this setting, as both algorithms are deterministic conditioned on the history of play. On the other hand, GP-MW is a no-regret algorithm in this setting according to Theorem 3 (and Corollary 3).

Next, we demonstrate these observations experimentally in a movie recommendation problem.

Movie recommendation. We seek to recommend movies to users according to their preferences. A priori it is unknown which user will see the recommendation at any time $t$. We assume that such a user is chosen arbitrarily (possibly adversarially), simultaneously to our recommendation.

We use the MovieLens-100K dataset movielens, which provides an incomplete matrix of ratings for 1682 movies rated by 943 users. We apply non-negative matrix factorization with $p$ latent factors to the incomplete rating matrix and obtain feature vectors $m_i, u_j \in \mathbb{R}^p$ for movies and users, respectively. Hence, $u_j^\top m_i$ represents the (modelled) rating of movie $i$ by user $j$. At every round $t$, the player selects a movie feature vector $x_t \in \{m_1, \ldots, m_{1682}\}$, the adversary chooses (without observing $x_t$) a user index $j_t$, and the player receives reward $u_{j_t}^\top x_t$. We model $r$ via a GP with a composite kernel combining a linear kernel $k_1$ on the movie features with a diagonal kernel $k_2$ on the user index.
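A sketch of the feature-extraction step with scikit-learn's NMF is shown below. The rating matrix here is synthetic and missing entries are filled with zeros purely for illustration, and the number of latent factors is a placeholder; the paper's exact factorization procedure may differ.

```python
import numpy as np
from sklearn.decomposition import NMF

# R: (num_users x num_movies) rating matrix, zeros at missing entries (synthetic data).
rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(943, 1682)).astype(float)

nmf = NMF(n_components=30, init="nndsvda", max_iter=300)
U = nmf.fit_transform(R)          # user features u_j, shape (943, 30)
M = nmf.components_.T             # movie features m_i, shape (1682, 30)

predicted_rating = U[5] @ M[10]   # modelled reward for recommending movie 10 to user 5
```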

Figure 3: GP-MW ensures no-regret against both randomly and adaptively chosen users, while GP-UCB and StableOpt attain constant average regret. (a) Users chosen at random. (b) Users chosen by adaptive adversary.

We compare the performance of GP-MW against that of GP-UCB and StableOpt when sequentially recommending movies. In this experiment, we let GP-UCB select $x_t = \arg\max_x \max_j UCB_t(x, j)$, while StableOpt chooses $x_t = \arg\max_x \min_j UCB_t(x, j)$ at every round $t$. Both algorithms update their posteriors with measurements at $(x_t, j_t)$, with $j_t = \arg\max_j UCB_t(x_t, j)$ in the case of GP-UCB and $j_t = \arg\min_j LCB_t(x_t, j)$ for StableOpt. Here, $LCB_t$ represents a lower confidence bound on $r$ (see bogunovic2018 for details).
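The two selection rules described above can be sketched over finite sets of candidate points and user indices as follows; `ucb(x, j)` and `lcb(x, j)` are assumed to be confidence-bound functions as in Section 2, and this is an illustration of the rules as stated in the text rather than the authors' implementation.

```python
import numpy as np

def gp_ucb_choice(xs, users, ucb):
    """GP-UCB: optimistic over both the point and the adversary parameter."""
    scores = [[ucb(x, j) for j in users] for x in xs]
    i = int(np.argmax([max(row) for row in scores]))
    x_t = xs[i]
    j_t = users[int(np.argmax(scores[i]))]        # evaluated at the optimistic user
    return x_t, j_t

def stableopt_choice(xs, users, ucb, lcb):
    """StableOpt: optimistic in x but pessimistic over the adversary parameter."""
    i = int(np.argmax([min(ucb(x, j) for j in users) for x in xs]))
    x_t = xs[i]
    j_t = users[int(np.argmin([lcb(x_t, j) for j in users]))]   # pessimistic user for the update
    return x_t, j_t
```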

In Figure 3(a), we show the average regret of the algorithms when the adversary chooses users uniformly at random at every round $t$. In our second experiment (Figure 3(b)), we show their performance when the adversary is adaptive and selects $j_t$ according to the Hedge algorithm. We observe that in both experiments GP-MW is no-regret, while the average regrets of both GP-UCB and StableOpt do not vanish.

5 Conclusions

We have proposed GP-MW, a no-regret bandit algorithm for playing unknown repeated games. In addition to the standard bandit feedback, the algorithm requires observing the actions of other players after every round of the game. By exploiting the correlation among different game outcomes, it computes upper confidence bounds on the rewards and uses them to simulate unavailable full information feedback. Our algorithm attains high probability regret bounds that can substantially improve upon the existing bandit regret bounds. In our experiments, we have demonstrated the effectiveness of GP-MW on synthetic games, and real-world problems of traffic routing and movie recommendation.

Acknowledgments

This work was gratefully supported by Swiss National Science Foundation, under the grant SNSF _, and by the European Union’s Horizon 2020 ERC grant .

References

  • [1] Transportation network test problems. http://www.bgu.ac.il/~bargera/tntp/.
  • [2] Yasin Abbasi-Yadkori. Online Learning for Linearly Parametrized Control Problems. PhD thesis, Edmonton, Alta., Canada, 2012.
  • [3] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, January 2003.
  • [4] Ilija Bogunovic, Jonathan Scarlett, Stefanie Jegelka, and Volkan Cevher. Adversarially robust optimization with gaussian processes. In Neural Information Processing Systems (NeurIPS), 2018.
  • [5] Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, and Volkan Cevher. Truncated variance reduction: A unified approach to bayesian optimization and level-set estimation. In Neural Information Processing Systems (NeurIPS), 2016.
  • [6] Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan. Kernel-based methods for bandit convex optimization. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 72–85, 2017.
  • [7] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
  • [8] Archie C. Chapman, David S. Leslie, Alex Rogers, and Nicholas R. Jennings. Convergent learning algorithms for unknown reward games. SIAM J. Control and Optimization, 51(4):3154–3180, 2013.
  • [9] Sayak Ray Chowdhury and Aditya Gopalan. On kernelized multi-armed bandits. In International Conference on Machine Learning (ICML), 2017.
  • [10] Itay P. Fainmesser. Community structure and market outcomes: A repeated games-in-networks approach. American Economic Journal: Microeconomics, 4(1):32–69, February 2012.
  • [11] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119 – 139, 1997.
  • [12] F. Maxwell Harper and Joseph A. Konstan. The movielens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4):19:1–19:19, December 2015.
  • [13] Elad Hazan, Karan Singh, and Cyril Zhang. Efficient regret minimization in non-convex games. In International Conference on Machine Learning (ICML), 2017.
  • [14] Larry J. Leblanc. An algorithm for the discrete network design problem. Transportation Science, 9:183–199, 08 1975.
  • [15] Matthew O. Jackson and Yves Zenou. Games on networks. In Handbook of Game Theory with Economic Applications, volume 4, chapter 3, pages 95–163. Elsevier, 2015.
  • [16] Andreas Krause and Cheng S. Ong. Contextual gaussian process bandit optimization. In Neural Information Processing Systems (NeurIPS). 2011.
  • [17] N. Littlestone and M.K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212 – 261, 1994.
  • [18] Odalric-Ambrym Maillard and Rémi Munos. Online learning in adversarial lipschitz environments. In Machine Learning and Knowledge Discovery in Databases, pages 305–320, 2010.
  • [19] Dov Monderer and Lloyd S. Shapley. Potential games. Games and Economic Behavior, 14(1):124 – 143, 1996.
  • [20] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
  • [21] Brian Skyrms and Robin Pemantle. A dynamic model of social network formation. Adaptive Networks: Theory, Models and Applications, pages 231–251, 2009.
  • [22] Aleksandrs Slivkins. Contextual bandits with similarity information. Journal of Machine Learning Research, 15:2533–2568, 2014.
  • [23] Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In International Conference on Machine Learning (ICML), 2010.
  • [24] Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E. Schapire. Fast convergence of regularized learning in games. In Neural Information Processing Systems (NeurIPS), 2015.
  • [25] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In International Conference on Machine Learning (ICML), 2003.

Appendix A Proof of Theorem 3

We make use of the following well-known confidence lemma.

[Confidence Lemma] Let $\mathcal{H}_k$ be a RKHS with underlying kernel function $k$. Consider an unknown function $f \in \mathcal{H}_k$ such that $\|f\|_k \le B$, and the sampling model $y_t = f(z_t) + \epsilon_t$, where $\epsilon_t$ is $\sigma$-sub-Gaussian (with independence between times). By setting

$$\beta_t = B + \sqrt{2\big(\gamma_{t-1} + \log(1/\delta)\big)},$$

the following holds with probability at least $1 - \delta$:

$$|\mu_{t-1}(z) - f(z)| \le \beta_t\, \sigma_{t-1}(z) \quad \text{for all } z \text{ and } t \ge 1,$$

where $\mu_{t-1}(\cdot)$ and $\sigma_{t-1}(\cdot)$ are given in (2)-(3). Lemma A follows directly from [2, Theorem 3.11 and Remark 3.13] as well as the definition of the maximum information gain $\gamma_t$.

We can now prove Theorem 3. Recall the definition of regret:

$$R^i(T) = \max_{a \in \mathcal{A}^i} \sum_{t=1}^T r^i(a, a^{-i}_t) - \sum_{t=1}^T r^i(a^i_t, a^{-i}_t).$$

Defining $\bar{a} = \arg\max_{a \in \mathcal{A}^i} \sum_{t=1}^T r^i(a, a^{-i}_t)$, $R^i(T)$ can be rewritten as

$$R^i(T) = \sum_{t=1}^T r^i(\bar{a}, a^{-i}_t) - \sum_{t=1}^T r^i(a^i_t, a^{-i}_t).$$

By Lemma A and since rewards are in $[0,1]$, with probability at least $1 - \delta/2$ the true unknown reward function can be upper and lower bounded as:

$$\mu_{t-1}(z) - \beta_t\, \sigma_{t-1}(z) \;\le\; r^i(z) \;\le\; \min\{1,\, UCB_t(z)\} \quad \text{for all } z \text{ and } t \ge 1, \qquad (7)$$

with $UCB_t$ defined in (4) and $\beta_t$ chosen according to Theorem 3. Thus, $\mu_{t-1}(\cdot) - \beta_t\, \sigma_{t-1}(\cdot)$ is a lower confidence bound of $r^i$.

Hence,

where the first inequality follows from (7) and the second one since $\min\{1, \cdot\}$ is increasing in its argument.

Moreover, by [23, Lemma 5.4] and the choice of $\beta_t$, we have

$$\sum_{t=1}^T \beta_t\, \sigma_{t-1}(a^i_t, a^{-i}_t) \;\le\; \beta_T \sqrt{C_1\, T\, \gamma_T},$$

for a constant $C_1$ that depends only on the noise level.

Next, we show that with probability at least $1 - \delta/2$,

(8)

The statement of the theorem then follows by standard probability arguments: a union bound over the events that (7) and (8) hold, each of which fails with probability at most $\delta/2$.

To show (8), define the function $\hat{r}^i_t(a) := \min\{1,\, UCB_t(a, a^{-i}_t)\}$ for every $a \in \mathcal{A}^i$. Note that if (7) holds, then $\hat{r}^i_t(a) \ge r^i(a, a^{-i}_t)$, and since rewards are in $[0,1]$, $\hat{r}^i_t(a) \in [0,1]$ as well. Using this definition, the left hand side of (8) can be upper bounded as:

(9)

Observe that the right hand side of (9) is precisely the regret which player $i$ incurs in an adversarial online learning problem with reward functions $\hat{r}^i_t(\cdot)$. The actions $a^i_1, \ldots, a^i_T$, moreover, are exactly chosen by the Hedge [11] algorithm which receives the full information feedback $\hat{r}^i_t(\cdot)$. Note that the original version of Hedge works with losses instead of rewards, but the same happens in GP-MW since the mixed strategies are updated with the losses $1 - \hat{r}^i_t(\cdot)$. Therefore, by [7, Corollary 4.2], with probability at least $1 - \delta/2$,

Note that according to [7, Remark 4.3], the functions $\hat{r}^i_t$ can be chosen by an adaptive adversary depending on the past actions $a^i_1, \ldots, a^i_{t-1}$, but not on the current action $a^i_t$. This applies to our setting, since $\hat{r}^i_t$ depends only on the history of play up to time $t-1$ and on $a^{-i}_t$, and not on $a^i_t$. ∎

Appendix B Proof of Corollary 3

A function $f$ is Lipschitz continuous with constant $L$ (or $L$-Lipschitz) if

$$|f(x) - f(x')| \le L\, \|x - x'\| \quad \text{for all } x, x' \text{ in its domain}.$$

The fact that $r^i$ is $L$-Lipschitz in its first argument implies that

(10)  $|r^i(a, a^{-i}) - r^i(a', a^{-i})| \le L\, \|a - a'\|$ for all $a, a' \in \mathcal{A}^i$ and all $a^{-i}$.

Moreover, recall the discrete set $[\mathcal{A}^i]_T \subset \mathcal{A}^i$ such that $\|a - [a]_T\| \le \frac{1}{L\sqrt{T}}$ for every $a \in \mathcal{A}^i$, where $[a]_T$ is the closest point to $a$ in $[\mathcal{A}^i]_T$. An example of such a set can be obtained, for instance, by a uniform grid of points in $[0, b]^{d_i}$.

As in the proof of Theorem 3, let $\bar{a} = \arg\max_{a \in \mathcal{A}^i} \sum_{t=1}^T r^i(a, a^{-i}_t)$. Moreover, let $[\bar{a}]_T$ be the closest point to $\bar{a}$ in $[\mathcal{A}^i]_T$. We have:

We prove the corollary by bounding the two resulting terms separately.

By the Lipschitz property (10) of $r^i$, and by construction of $[\mathcal{A}^i]_T$, we have that

(11)

Hence, by (11),

To bound the second term, note that