
Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport

by Kazuki Shibata, et al.

In this paper, we explore a multi-agent reinforcement learning approach to the joint design of communication and control strategies for multi-agent cooperative transport. Typical end-to-end deep neural network policies may be insufficient to cover both communication and control: they cannot decide when to communicate and can only operate with fixed-rate communication. Our framework therefore exploits an event-triggered architecture, namely a feedback controller that computes the communication input and a triggering mechanism that determines when that input must be updated again. Such event-triggered control policies are efficiently optimized using a multi-agent deep deterministic policy gradient. Through numerical simulations, we confirmed that our approach can balance transport performance against communication savings.
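The event-triggered architecture described above can be illustrated with a minimal sketch. The class, its fixed feedback gain, and the deviation-threshold triggering rule below are illustrative assumptions (in the paper, both the controller and the trigger are learned with a multi-agent deep deterministic policy gradient); the sketch only shows the mechanism of transmitting an updated input when it deviates sufficiently from the last transmitted one, and holding the previous input otherwise.

```python
class EventTriggeredAgent:
    """Hypothetical sketch of an event-triggered communication policy.

    A feedback controller computes the communication input from the local
    observation; a triggering mechanism transmits an updated input only when
    it deviates from the last transmitted one by more than a threshold.
    """

    def __init__(self, gain: float, threshold: float):
        self.gain = gain            # feedback gain (stand-in for a learned policy)
        self.threshold = threshold  # triggering threshold (could also be learned)
        self.last_sent = None       # last transmitted input

    def step(self, x: float):
        """Return (input to apply, whether a transmission occurred)."""
        u = -self.gain * x  # feedback controller
        if self.last_sent is None or abs(u - self.last_sent) > self.threshold:
            self.last_sent = u      # event fires: transmit the updated input
            return u, True
        return self.last_sent, False  # hold previous input, save bandwidth
```

With `gain=0.5` and `threshold=0.1`, small changes in the observation reuse the previously transmitted input, so communication occurs only when the state has moved enough to matter.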



