Logarithmic Regret for Matrix Games against an Adversary with Noisy Bandit Feedback
This paper considers a variant of zero-sum matrix games in which, at each timestep, the row player chooses row i, the column player chooses column j, and the row player receives a noisy reward with mean A_{i,j}. The objective of the row player is to accumulate as much reward as possible, even against an adversarial column player. If the row player uses the EXP3 strategy, an algorithm known to obtain √T regret against an arbitrary sequence of rewards, it is immediate that the row player also achieves √T regret relative to the Nash equilibrium in this game setting. However, partly motivated by the fact that EXP3 is myopic to the structure of the game, O'Donoghue et al. (2021) proposed a UCB-style algorithm that leverages the game structure and showed that it greatly outperforms EXP3 empirically. While their UCB-style algorithm achieves √T regret, in this paper we ask whether there exists an algorithm that provably achieves polylog(T) regret against any adversary, analogous to results from stochastic bandits. We propose a novel algorithm that answers this question in the affirmative for the simple 2 × 2 setting, providing the first instance-dependent guarantees for games in the regret setting. Our algorithm overcomes two major hurdles: 1) obtaining logarithmic regret even though the Nash equilibrium is estimable only at a 1/√T rate, and 2) designing row-player strategies that guarantee either that the adversary provides information about the Nash equilibrium or that the row player incurs negative regret. Moreover, in the full-information setting we address the general n × m case, where the first hurdle is still relevant. Finally, we show that neither EXP3 nor the UCB-based algorithm can perform better than √T regret.
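The sketch below is a minimal illustration of the interaction protocol described above, not the paper's proposed algorithm: an EXP3 row player faces an adaptive column player in a matrix game with noisy bandit feedback, and regret is measured against T times the Nash value of the game. The best-responding adversary, the Bernoulli noise model, the payoff range [0, 1], and parameters such as gamma are assumptions made for illustration only.

```python
# Sketch of the matrix-game bandit setting: EXP3 row player, adaptive column player,
# regret measured against the Nash value. Assumptions (not from the paper): payoffs
# in [0, 1], Bernoulli noise, adversary best-responds to the row player's mixture.
import numpy as np
from scipy.optimize import linprog


def nash_value(A):
    """Value of the zero-sum game max_x min_j x^T A[:, j], computed via a small LP."""
    n, m = A.shape
    # Variables (x_1, ..., x_n, v); minimize -v, i.e. maximize the game value v.
    c = np.concatenate([np.zeros(n), [-1.0]])
    # For every column j: v - sum_i A[i, j] * x_i <= 0.
    A_ub = np.hstack([-A.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # The mixed strategy x must lie on the probability simplex.
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[-1]


def run_exp3_vs_best_response(A, T=20000, gamma=0.05, rng=None):
    """Play T rounds of EXP3; return regret relative to T * (Nash value)."""
    rng = np.random.default_rng(rng)
    n, m = A.shape
    v_star = nash_value(A)
    weights = np.ones(n)
    mean_reward = 0.0
    for _ in range(T):
        # EXP3: mix exponential weights with uniform exploration.
        p = (1 - gamma) * weights / weights.sum() + gamma / n
        i = rng.choice(n, p=p)
        # Illustrative adversary: column minimizing the row player's expected payoff.
        j = int(np.argmin(p @ A))
        r = rng.binomial(1, A[i, j])                   # noisy reward with mean A[i, j]
        weights[i] *= np.exp(gamma * (r / p[i]) / n)   # importance-weighted update
        weights /= weights.max()                       # rescale to avoid overflow
        mean_reward += A[i, j]                         # accumulate means to reduce noise
    return T * v_star - mean_reward


if __name__ == "__main__":
    A = np.array([[0.7, 0.2],
                  [0.3, 0.6]])   # a 2 x 2 instance whose Nash equilibrium is mixed
    print("regret vs Nash value:", run_exp3_vs_best_response(A, rng=0))
```

Under these assumptions the benchmark is the Nash value v* of A (here 0.45), so the quantity printed corresponds to the √T-type regret that EXP3 is known to incur and that the paper's algorithm aims to reduce to polylog(T).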