No-Regret Learning in Dynamic Stackelberg Games

02/10/2022
by Niklas Lauffer, et al.

In a Stackelberg game, a leader commits to a randomized strategy, and a follower chooses their best strategy in response. We consider an extension of the standard Stackelberg game, called a discrete-time dynamic Stackelberg game, that has an underlying state space which affects the leader's rewards and available strategies and evolves in a Markovian manner depending on both the leader's and follower's selected strategies. Although standard Stackelberg games have been used to improve scheduling in security domains, their deployment is often limited by the requirement of complete information about the follower's utility function. In contrast, we consider scenarios where the follower's utility function is unknown to the leader but can be linearly parameterized. Our objective is then to provide an algorithm that prescribes a randomized strategy to the leader at each step of the game based on observations of how the follower responded in previous steps. We design a no-regret learning algorithm that, with high probability, achieves a regret bound (when compared to the best policy in hindsight) which is sublinear in the number of time steps; the degree of sublinearity depends on the number of features representing the follower's utility function. The regret of the proposed learning algorithm is independent of the size of the state space and polynomial in the remaining parameters of the game. We show that the proposed learning algorithm outperforms existing model-free reinforcement learning approaches.
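To make the setup concrete, here is a minimal sketch of the follower's side of the interaction: a follower whose utility is linear in a feature map best-responds to the leader's committed mixed strategy. All names, dimensions, and values below are invented for illustration; the paper's actual algorithm (which learns the unknown weight vector from observed responses) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: 3 leader actions, 4 follower actions,
# follower utility linear in d = 2 features (all values invented).
n_leader, n_follower, d = 3, 4, 2
theta = np.array([0.7, 0.3])                 # follower's weights, unknown to the leader
phi = rng.random((n_leader, n_follower, d))  # feature map phi(a_leader, a_follower)

def follower_best_response(x, theta, phi):
    """Follower best-responds to the leader's mixed strategy x by
    maximizing expected linear utility E_{a_l ~ x}[theta . phi(a_l, a_f)]."""
    expected_features = np.einsum('i,ijk->jk', x, phi)  # shape (n_follower, d)
    utilities = expected_features @ theta
    return int(np.argmax(utilities))

x = np.ones(n_leader) / n_leader  # leader commits to a uniform mixed strategy
br = follower_best_response(x, theta, phi)
print("follower best response:", br)
```

In the paper's learning setting, the leader never sees `theta` directly; it only observes best responses like `br` across rounds and must infer enough about the weights to play a near-optimal strategy, which is what drives the feature-dependent regret bound.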


Related research

10/07/2022 · Learning Stackelberg Equilibria and Applications to Economic Design Games
We study the use of reinforcement learning to learn the optimal leader's...

06/11/2020 · Optimally Deceiving a Learning Leader in Stackelberg Games
Recent results in the ML community have revealed that learning algorithm...

11/28/2022 · Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation
We consider a multi-agent episodic MDP setup where an agent (leader) tak...

01/27/2023 · Online Learning in Stackelberg Games with an Omniscient Follower
We study the problem of online learning in a two-player decentralized co...

11/17/2018 · The Impatient May Use Limited Optimism to Minimize Regret
Discounted-sum games provide a formal model for the study of reinforceme...

01/08/2021 · Adaptive Learning in Two-Player Stackelberg Games with Application to Network Security
We study a two-player Stackelberg game with incomplete information such ...

12/22/2022 · Commitment with Signaling under Double-sided Information Asymmetry
Information asymmetry in games enables players with the information adva...
