Mean-Field Control based Approximation of Multi-Agent Reinforcement Learning in Presence of a Non-decomposable Shared Global State
Mean-Field Control (MFC) is a powerful approximation tool for solving large-scale Multi-Agent Reinforcement Learning (MARL) problems. However, the success of MFC relies on the presumption that, given the local states and actions of all the agents, the next (local) states of the agents evolve conditionally independently of each other. Here we demonstrate that even in a MARL setting where, in addition to their conditionally independently evolving local states, the agents share a common global state (thus introducing a correlation between the state transition processes of individual agents), MFC can still be applied as a good approximation tool. The global state is assumed to be non-decomposable, i.e., it cannot be expressed as a collection of local states of the agents. We compute the approximation error as 𝒪(e), where e = [√(|𝒳|) + √(|𝒰|)]/√(N). Here N denotes the size of the agent population, and |𝒳|, |𝒰| respectively denote the sizes of the (local) state and action spaces of an individual agent. Notably, the approximation error is independent of the size of the shared global state space. We further show that in the special case where the reward and state transition functions are independent of the action distribution of the population, the error improves to e = √(|𝒳|)/√(N). Finally, we devise a Natural Policy Gradient (NPG) based algorithm that solves the MFC problem with 𝒪(ϵ^-3) sample complexity and obtains a policy that is within 𝒪(max{e, ϵ}) error of the optimal MARL policy, for any ϵ > 0.
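To make the scaling of the stated error bounds concrete, below is a minimal Python sketch that evaluates them for given problem sizes. The function name `mfc_error_bound` and its arguments are illustrative conveniences, not notation from the paper; the formulas themselves are the two bounds quoted in the abstract.

```python
import math

def mfc_error_bound(num_agents, local_state_size, action_size,
                    population_independent=False):
    """Evaluate the MFC-to-MARL approximation error e from the abstract.

    Arguments (illustrative names, not the paper's notation):
      num_agents            -- N, the size of the agent population
      local_state_size      -- |X|, size of each agent's local state space
      action_size           -- |U|, size of each agent's action space
      population_independent -- True for the special case where reward and
                                transition are independent of the population's
                                action distribution
    """
    if population_independent:
        # Special case: e = sqrt(|X|) / sqrt(N)
        return math.sqrt(local_state_size) / math.sqrt(num_agents)
    # General case: e = (sqrt(|X|) + sqrt(|U|)) / sqrt(N)
    # Note: the bound does not involve the size of the shared global state space.
    return (math.sqrt(local_state_size) + math.sqrt(action_size)) / math.sqrt(num_agents)

# Example: the bound decays as O(1/sqrt(N)) with the population size.
for n in (10, 100, 1000):
    print(n, mfc_error_bound(n, local_state_size=5, action_size=3))
```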