A Reinforcement Learning Based Approach for Joint Multi-Agent Decision Making

by Mridul Agarwal et al.

Reinforcement Learning (RL) is increasingly applied to optimize complex functions that may have a stochastic component. RL is extended to multi-agent systems to find policies for systems that require agents to coordinate or compete, under the umbrella of Multi-Agent RL (MARL). A crucial factor in the success of RL is that the optimization problem is represented as the expected sum of rewards, which allows the use of backward induction for the solution. However, many real-world problems require a joint objective that is non-linear, and dynamic programming cannot be applied directly. For example, in a resource allocation problem, one of the objectives is to maximize long-term fairness among the users. This paper addresses and formalizes the problem of joint objective optimization, where the quantity to be optimized is not the sum of rewards of each agent but a function of those sums. The proposed algorithms run at a centralized controller, which learns a policy that dictates the actions of each agent such that the joint objective function of the average per-step rewards of each agent is maximized. We propose both model-based and model-free algorithms, where the model-based algorithm is shown to achieve an O(√(K/T)) regret bound for K agents over a time horizon T, and the model-free algorithm can be implemented using deep neural networks. Further, using fairness in cellular base-station scheduling as an example, the proposed algorithms are shown to significantly outperform state-of-the-art approaches.
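To make the distinction concrete, the sketch below contrasts the standard objective (sum of per-agent average rewards) with a non-linear joint objective. The specific fairness function used here, proportional fairness f(x) = Σₖ log(xₖ), is an illustrative assumption, not necessarily the exact function the paper optimizes; the point is that a concave joint objective can rank policies differently than the plain sum, which is why backward induction on summed rewards no longer applies.

```python
import math

def average_rewards(history):
    """Per-agent average per-step reward.

    history: list of T reward vectors, each with one entry per agent.
    Returns a list of K averages.
    """
    T, K = len(history), len(history[0])
    return [sum(step[k] for step in history) / T for k in range(K)]

def sum_objective(avg):
    # Standard RL objective: total (average) reward across agents.
    return sum(avg)

def fairness_objective(avg):
    # Proportional fairness (illustrative choice): sum of logs.
    # Concave, so it penalizes starving any single agent.
    return sum(math.log(x) for x in avg)

# Two hypothetical policies over T = 2 steps with K = 2 agents.
balanced = [[1.0, 1.0], [1.0, 1.0]]  # both agents average 1.0
skewed   = [[2.0, 0.1], [2.0, 0.1]]  # agent 0 hoards the reward

b = average_rewards(balanced)  # [1.0, 1.0]
s = average_rewards(skewed)    # [2.0, 0.1]

# The sum objective prefers the skewed policy (2.1 > 2.0),
# while the fairness objective prefers the balanced one (0.0 > log 2 + log 0.1).
print(sum_objective(b), sum_objective(s))
print(fairness_objective(b), fairness_objective(s))
```

Because the fairness objective is a non-linear function of the long-run average rewards, it cannot be decomposed into per-step rewards and solved by backward induction, which motivates the paper's separate model-based and model-free treatments.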


