Shapley Q-value: A Local Reward Approach to Solve Global Reward Games

07/11/2019
by Jianhong Wang, et al.

Cooperative games are a critical research area in multi-agent reinforcement learning (MARL). The global reward game is a subclass of cooperative games in which all agents aim to maximize the cumulative global reward. Credit assignment is an important problem studied in global reward games. Most existing works adopt a non-cooperative-game theoretical framework with a shared reward approach, i.e., each agent is directly assigned the shared global reward. This, however, may give each agent inaccurate feedback on its contribution to the group. In this paper, we introduce a cooperative-game theoretical framework and extend it to the infinite-horizon case. We show that our proposed framework is a superset of the global reward game. Based on this framework, we propose a local reward approach called Shapley Q-value that, in contrast to the shared reward approach, distributes the cumulative global reward fairly, reflecting each agent's own contribution. Moreover, we derive an MARL algorithm called Shapley Q-value policy gradient (SQPG), which uses the Shapley Q-value as its critic. We evaluate SQPG on Cooperative Navigation, Prey-and-Predator, and Traffic Junction, compared with MADDPG, COMA, Independent A2C, and Independent DDPG. In the experiments, SQPG outperforms all of these baselines. In addition, we plot the learned Shapley Q-values and validate their property of fairly distributing the global reward.
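The abstract does not spell out how the Shapley Q-value is computed, but the underlying idea is the classic Shapley value: an agent's credit is its marginal contribution to a coalition, averaged over orderings in which coalitions form. The sketch below is a minimal, hypothetical Monte Carlo estimator of Shapley values over sampled agent orderings; it is an illustration of the credit-assignment principle, not the paper's SQPG algorithm. The names `shapley_values` and `coalition_q` (a stand-in for a learned coalition Q-function) and the toy additive game are assumptions for demonstration.

```python
import random
from typing import Callable, Dict, FrozenSet, List

def shapley_values(
    agents: List[str],
    coalition_q: Callable[[FrozenSet[str]], float],
    num_samples: int = 1000,
    seed: int = 0,
) -> Dict[str, float]:
    """Monte Carlo estimate of each agent's Shapley value.

    coalition_q maps a coalition (set of agents) to its value, e.g. a
    learned Q-function evaluated on the coalition's joint action.
    """
    rng = random.Random(seed)
    phi = {a: 0.0 for a in agents}
    for _ in range(num_samples):
        order = agents[:]
        rng.shuffle(order)  # sample a random order of coalition formation
        coalition: FrozenSet[str] = frozenset()
        prev_value = coalition_q(coalition)
        for a in order:
            coalition = coalition | {a}
            value = coalition_q(coalition)
            phi[a] += value - prev_value  # marginal contribution of agent a
            prev_value = value
    return {a: v / num_samples for a, v in phi.items()}

if __name__ == "__main__":
    # Toy additive game: each agent contributes a fixed weight, so the
    # Shapley value should recover each agent's own weight.
    weights = {"a1": 1.0, "a2": 2.0, "a3": 3.0}
    q = lambda c: sum(weights[a] for a in c)
    print(shapley_values(list(weights), q))
```

For an additive game like the toy example, the estimate converges to each agent's weight, which is the "fair distribution" property the paper validates empirically; in practice, exact enumeration of all coalitions is exponential in the number of agents, which is why sampling-based approximations are the usual choice.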
