Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games

06/06/2021
by Kai Wang, et al.

A growing body of work in game theory extends the traditional Stackelberg game to settings with one leader and multiple followers who play a Nash equilibrium. Standard approaches for computing equilibria in these games reformulate the followers' best responses as constraints in the leader's optimization problem. These reformulation approaches can be effective, but they often get trapped in low-quality solutions when the followers' objectives are not linear or quadratic. Moreover, they assume a unique equilibrium or a particular equilibrium concept, e.g., optimistic or pessimistic, which is a limiting assumption in many settings. To overcome these limitations, we propose a stochastic gradient descent-based approach in which the leader's strategy is updated by differentiating through the followers' best responses. We frame the leader's optimization as a learning problem against the followers' equilibrium behavior, which allows us to decouple the followers' equilibrium constraints from the leader's problem. This approach also handles multiple equilibria and arbitrary equilibrium selection procedures by back-propagating through a sampled Nash equilibrium. To this end, the paper introduces a novel concept, the equilibrium flow, to formally characterize the set of equilibrium selection processes for which the gradient with respect to a sampled equilibrium is an unbiased estimate of the true gradient. We evaluate our approach against existing baselines in three Stackelberg problems with multiple followers and find that, in each case, it achieves higher utility for the leader.
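The core idea of differentiating the leader's objective through the followers' equilibrium can be illustrated with a minimal sketch. This is not the authors' implementation: the follower objectives, function names, learning rates, and the use of unrolled best-response dynamics as a differentiable equilibrium sampler are all illustrative assumptions, and the paper's equilibrium-flow machinery covers far more general sampled equilibria.

```python
# Hypothetical sketch: end-to-end gradient descent for a one-leader,
# multi-follower Stackelberg game. The followers' Nash equilibrium is
# approximated by unrolled best-response dynamics, so plain autodiff can
# back-propagate the leader's utility through the sampled equilibrium.

import jax
import jax.numpy as jnp

N_FOLLOWERS = 3

def follower_cost(y_i, y_others_mean, x):
    # Hypothetical quadratic follower objective: follower i trades off tracking
    # the leader's strategy x against crowding with the other followers.
    return 0.5 * (y_i - x) ** 2 + 0.25 * (y_i - y_others_mean) ** 2

def best_response_step(y, x):
    # One round of simultaneous gradient best responses by all followers.
    def grad_i(i):
        others_mean = (jnp.sum(y) - y[i]) / (N_FOLLOWERS - 1)
        return jax.grad(follower_cost)(y[i], others_mean, x)
    grads = jnp.array([grad_i(i) for i in range(N_FOLLOWERS)])
    return y - 0.5 * grads

def sample_equilibrium(x, n_steps=50):
    # Unrolled best-response dynamics: a differentiable approximation of the
    # followers' Nash equilibrium induced by the leader's strategy x.
    y = jnp.zeros(N_FOLLOWERS)
    for _ in range(n_steps):
        y = best_response_step(y, x)
    return y

def leader_utility(x):
    y_star = sample_equilibrium(x)  # followers' equilibrium given x
    # Hypothetical leader objective: steer followers toward 1 at a small cost on x.
    return -jnp.sum((y_star - 1.0) ** 2) - 0.1 * x ** 2

# End-to-end gradient ascent on the leader's utility.
x = jnp.array(0.0)
for _ in range(200):
    u, g = jax.value_and_grad(leader_utility)(x)
    x = x + 0.05 * g
print(f"leader strategy {float(x):.3f}, utility {float(u):.3f}")
```

In this toy setting the equilibrium map is smooth and unique, so unrolling suffices; the paper's contribution is characterizing when such gradients remain unbiased under multiple equilibria and arbitrary equilibrium selection.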
