Average submodularity of maximizing anticoordination in network games

07/01/2022
by   Soham Das, et al.

We consider the control of decentralized learning dynamics for agents in an anti-coordination network game. In this game, there is a preferred action in the absence of neighbors' actions, and the utility an agent receives from the preferred action decreases as more of its neighbors select it, potentially causing the agent to switch to a less desirable action. The decentralized dynamics, which are based on the iterated elimination of dominated strategies, converge for the considered game. Given a convergent action profile, we measure anti-coordination by the number of edges in the underlying graph with at least one endpoint not taking the preferred action. The maximum anti-coordination (MAC) problem seeks an optimal set of agents to control under a finite budget so that, when the dynamics converge, this measure of network disconnect is maximized. We show that MAC is submodular in expectation on dense bipartite networks, for any realization of the utility constants in the population. Using this result, we obtain a performance guarantee for the greedy agent selection algorithm for MAC. Finally, we provide a computational study demonstrating the effectiveness of greedy node selection strategies for solving MAC on general bipartite networks.
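For intuition, the sketch below illustrates the MAC-style objective (counting edges with at least one endpoint off the preferred action) and greedy node selection on a toy model. The threshold-style best response, the sequential update order, and pinning controlled agents to the non-preferred action are illustrative assumptions; the paper's utility model and its IEDS-based dynamics are not reproduced here.

```python
# Toy model (not the paper's exact formulation): action 1 is "preferred",
# but agent i abandons it once at least thresholds[i] neighbors play it.
# Controlled agents are pinned to action 0 (an assumption, for illustration).

def converge(adj, thresholds, controlled, max_rounds=1000):
    """Sequential best-response dynamics until no agent changes its action."""
    n = len(adj)
    actions = [0 if i in controlled else 1 for i in range(n)]
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            if i in controlled:
                continue
            br = 1 if sum(actions[j] for j in adj[i]) < thresholds[i] else 0
            if br != actions[i]:
                actions[i] = br
                changed = True
        if not changed:
            break
    return actions

def mac_value(adj, actions):
    """Edges with at least one endpoint not taking the preferred action."""
    return sum(1 for i in range(len(adj)) for j in adj[i]
               if i < j and (actions[i] == 0 or actions[j] == 0))

def greedy_mac(adj, thresholds, budget):
    """Greedily control the agent whose addition most increases the
    converged anti-coordination value."""
    controlled = set()
    for _ in range(budget):
        best = max((v for v in range(len(adj)) if v not in controlled),
                   key=lambda v: mac_value(
                       adj, converge(adj, thresholds, controlled | {v})))
        controlled.add(best)
    return controlled

# Example: complete bipartite graph with left nodes {0, 1}, right nodes {2, 3, 4}.
adj = {0: [2, 3, 4], 1: [2, 3, 4], 2: [0, 1], 3: [0, 1], 4: [0, 1]}
thresholds = {0: 4, 1: 4, 2: 3, 3: 3, 4: 3}  # no agent deviates on its own
picked = greedy_mac(adj, thresholds, budget=2)
print(picked, mac_value(adj, converge(adj, thresholds, picked)))  # e.g. {0, 1}, 6
```

In this toy instance the greedy rule picks the two high-degree left nodes, after which every edge has a non-preferred endpoint; the submodularity-in-expectation result in the abstract is what justifies a constant-factor guarantee for this kind of greedy selection on dense bipartite networks.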
