Hierarchical Reinforcement Learning Framework towards Multi-agent Navigation
This paper proposes a navigation algorithm oriented to multi-agent dynamic environments. The algorithm is expressed as a hierarchical framework that combines a Hidden Markov Model (HMM) with Deep Reinforcement Learning (DRL); for brevity, we term our method the Hierarchical Navigation Reinforcement Network (HNRN). At the high level, we train an HMM to evaluate the agent's environment and produce a score, and an adaptive control action is chosen according to this score. At the low level, two sub-systems are introduced: a differential target-driven system that heads toward the target, and a DRL collision-avoidance system that avoids obstacles in the dynamic environment. The advantage of this hierarchical design is that it decouples the target-driven and collision-avoidance tasks, yielding a model that is faster and easier to train. Experiments show that our algorithm achieves higher learning efficiency and a higher success rate than traditional Velocity Obstacle (VO) algorithms and a hybrid DRL method.
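To make the two-level structure concrete, the following is a minimal, illustrative sketch of how a high-level environment scorer might switch between a target-driven controller and a collision-avoidance policy. It is not the paper's implementation: the class and function names, the threshold, and the stand-in scorer and policies are all assumptions for demonstration only.

```python
import numpy as np


class HierarchicalNavigator:
    """Sketch of a two-level navigation policy (illustrative, not the paper's code).

    A high-level scorer (standing in for the paper's HMM) rates how hazardous
    the local environment is; the score selects between a target-driven
    controller and a learned collision-avoidance policy.
    """

    def __init__(self, hazard_scorer, target_policy, avoidance_policy,
                 hazard_threshold=0.5):
        self.hazard_scorer = hazard_scorer          # e.g. an HMM-based evaluator (assumed)
        self.target_policy = target_policy          # differential target-driven controller
        self.avoidance_policy = avoidance_policy    # DRL collision-avoidance policy
        self.hazard_threshold = hazard_threshold    # illustrative switching threshold

    def act(self, observation, goal):
        """Head toward the goal when the scene looks safe, avoid obstacles otherwise."""
        score = self.hazard_scorer(observation)     # higher score = more dangerous
        if score < self.hazard_threshold:
            return self.target_policy(observation, goal)
        return self.avoidance_policy(observation)


# Toy usage with stand-in components.
if __name__ == "__main__":
    scorer = lambda obs: float(np.min(obs["obstacle_distances"]) < 1.0)
    to_goal = lambda obs, goal: goal - obs["position"]            # straight-line velocity
    avoid = lambda obs: -obs["nearest_obstacle_direction"]        # back away from obstacle

    nav = HierarchicalNavigator(scorer, to_goal, avoid)
    obs = {"position": np.zeros(2),
           "obstacle_distances": np.array([2.5, 3.0]),
           "nearest_obstacle_direction": np.array([1.0, 0.0])}
    print(nav.act(obs, goal=np.array([5.0, 0.0])))
```

The point of the sketch is the decoupling the abstract describes: the goal-seeking and obstacle-avoiding behaviors are separate components, and only the lightweight high-level scorer decides which one acts at each step.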