Q-Mixing Network for Multi-Agent Pathfinding in Partially Observable Grid Environments

08/13/2021
by   Vasilii Davydov, et al.

In this paper, we consider the problem of multi-agent navigation in partially observable grid environments. This problem is challenging for centralized planning approaches, as they typically rely on full knowledge of the environment. We suggest utilizing a reinforcement learning approach in which the agents first learn policies that map observations to actions and then follow these policies to reach their goals. To tackle the challenge associated with learning cooperative behavior, i.e. that in many cases agents need to yield to each other to accomplish a mission, we use a mixing Q-network that complements learning individual policies. In the experimental evaluation, we show that such an approach leads to plausible results and scales well to a large number of agents.


1 Introduction

Planning the coordinated movement of a group of mobile agents is usually considered a standalone problem within behavior planning. Two classes of methods for solving this problem can be distinguished: centralized and decentralized. The methods of the first group rely on the assumption that a control center exists, which has access to complete information about the states and movements of the agents at any time. In most cases, such methods are based either on reducing multi-agent planning to other well-known problems, e.g. boolean satisfiability (SAT) [surynek2016efficient], or on heuristic search. Among the latter, algorithms of the conflict-based search (CBS) family are actively developing nowadays. The original CBS algorithm [sharon2015conflict] guarantees completeness and optimality. Many enhancements of CBS exist that significantly improve its performance while preserving the theoretical guarantees, e.g. ICBS [boyarski2015icbs] and CBSH [felner2018adding]. Other variants of CBS, such as the ones that take kinematic constraints into account [andreychuk2020multi] or target bounded-suboptimal solutions [barer2014suboptimal], are also known. Another widespread approach to centralized multi-agent navigation is prioritized planning [yakovlev2019prioritized]. Prioritized planners are extremely fast in practice, but they are incomplete in general. However, when certain conditions are met, it can be guaranteed that any problem will be solved by a prioritized planning method [cap2015a]. In practice, these conditions are often met in warehouse robotics domains; therefore, prioritized methods are actively used for logistics applications.

Methods of the second class, decentralized, assume that agents are controlled internally and that their observations and/or communication capabilities are limited, e.g. they do not have direct access to information regarding other agents' plans. These approaches are naturally better suited to settings in which only partial knowledge of the environment is available. In this work we focus on one such setting, i.e. we assume that each agent has a limited field of view and can observe only a limited, myopic fragment of the environment.

Among the methods for decentralized navigation, one of the most widely used is the ORCA algorithm [van2011reciprocal] and its numerous variants. At each time step, these algorithms compute the agent's velocity via the construction of the so-called velocity obstacle space. When certain conditions are met, ORCA guarantees that collisions between agents are avoided; however, there is no guarantee that each agent will reach its goal. In practice, when navigating in a confined space (e.g. indoor environments with narrow corridors, passages, etc.), agents often find themselves in a deadlock, reducing their speed to zero to avoid collisions and ceasing to move towards their goals. It is also important to note that the algorithms of the ORCA family assume that the velocity profiles of the neighboring agents are known. In the presented work, no such assumption is made; instead, we propose to use learning methods, in particular reinforcement learning, to solve the considered problem.

Applying reinforcement learning algorithms to path planning in partially observable environments is not new [Shikunov2019, Martinson2020]. In [panov2018grid] the authors consider the single-agent case of a partially observable environment and apply the deep Q-network (DQN) algorithm [mnih2015human] to solve it.

In [sartoretti2019primal] the multi-agent setting is considered. The authors suggest using a neural network approximator whose parameters are fitted with one of the classic deep reinforcement learning algorithms. However, the full-fledged operation of the algorithm is possible only when the agents' experience is complemented with expert data generated by state-of-the-art centralized multi-agent solvers. The approach does not maximize a joint Q-function but instead tries to solve the problem of multi-agent interaction by introducing various heuristics: an additional loss function that penalizes blocking other agents; a reward function that takes agent collisions into account; and encoding other agents' goals in the observation.

In this work we propose to use a mixing Q-network, which implements the principle of monotonic improvement of the overall value estimate of the current state based on the current value estimates of the states of the individual agents. Learning the mixing mechanism with a parameterized approximator makes it possible to automatically generate rules for resolving conflict patterns that arise when two or more agents traverse intersecting path sections. We also present a flexible and efficient experimental environment for trajectory planning for a group of agents with limited observations. Its configurability and the possibility of implementing various behavioral scenarios by changing the reward function allow us to compare the proposed method both with classical methods of multi-agent path planning and with reinforcement learning methods designed for training a single agent.

2 Problem statement

We reduce the multi-agent pathfinding problem in a partially observable environment to single-agent pathfinding with dynamic obstacles (other agents), as we assume a decentralized scenario (i.e. each agent is controlled separately). The process of interaction between the agent and the environment is modeled by a partially observable Markov decision process (POMDP), which is a variant of the (plain) Markov decision process. We seek a policy for this POMDP, i.e. a function that maps the agent's observations to actions. To construct such a function, we utilize reinforcement learning.

A Markov decision process is described as the tuple $\langle S, A, T, R, \gamma \rangle$. At every step the environment is assumed to be in a certain state $s \in S$, and the agent has to decide which action $a \in A$ to perform next. After picking and performing the action $a$, the agent receives a reward $r = R(s, a)$ via the reward function $R$. The environment is (stochastically) transitioned to the next state $s' \sim T(s, a)$. The agent chooses its actions based on the policy $\pi$, which is a function $\pi : S \to A$. In many cases, it is preferable to choose the action that maximizes the Q-function of the state-action pair: $Q(s, a) = \mathbb{E}\left[\sum_{t} \gamma^{t} r_{t} \mid s_0 = s, a_0 = a\right]$, where $\gamma \in [0, 1)$ is the given discount factor.

The POMDP differs from the described setting in that the state of the environment is not fully known to the agent. Instead, the agent receives only a partial observation $o$ of the state. Thus a POMDP is a tuple $\langle S, A, T, R, \Omega, O, \gamma \rangle$, where $\Omega$ is the observation space and $O$ the observation function. The policy now is a mapping from observations to actions: $\pi : \Omega \to A$. The Q-function in this setting also depends on the observation rather than on the state: $Q(o, a)$.
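For reference, the following is the standard tabular Q-learning update that the deep methods discussed below approximate; it is a textbook formula rather than one taken from this paper, written for the observation-based Q-function $Q(o, a)$ with learning rate $\alpha$:

```latex
Q(o, a) \leftarrow Q(o, a) + \alpha \left[ r + \gamma \max_{a'} Q(o', a') - Q(o, a) \right]
```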

In this work we study the case when the environment is modeled as a grid composed of free and blocked cells. The actions available to an agent are: move left/up/down/right (to a neighboring cell) or stay in the current cell. We assume that transitions are non-stochastic, i.e. each action is executed perfectly and no outer disturbances affect the transition. The task of an agent is to construct a policy that picks actions so as to reach the agent's goal while avoiding collisions with both static and dynamic obstacles (other agents). In this problem, the state describes the locations of all obstacles, agents, and goals. The observation contains information about the locations of obstacles, agents, and goals in the vicinity of the agent (thus, the agent sees only its nearby surroundings).
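As an illustration, here is a minimal sketch of the action set and the deterministic grid transition described above; the names (`Action`, `apply_action`) and the grid representation are our own assumptions rather than the paper's implementation:

```python
from enum import Enum

class Action(Enum):
    STAY = (0, 0)
    UP = (-1, 0)
    DOWN = (1, 0)
    LEFT = (0, -1)
    RIGHT = (0, 1)

def apply_action(grid, occupied, pos, action):
    """Deterministically apply an action; the move is cancelled if the target
    cell is blocked, occupied by another agent, or outside the grid."""
    dr, dc = action.value
    r, c = pos[0] + dr, pos[1] + dc
    if (0 <= r < len(grid) and 0 <= c < len(grid[0])
            and grid[r][c] == 0 and (r, c) not in occupied):
        return (r, c)
    return pos  # invalid move: the agent stays in place
```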

3 Method

In this work, we propose an original architecture for decision-making and agent training based on a mixing Q-network that uses a deep neural network to parameterize the value function, by analogy with deep Q-learning (DQN).

In deep Q-learning, the parameters $\theta$ of the neural network approximating $Q(s, a; \theta)$ are optimized. The parameters are updated on mini-batches of the agent's experience consisting of tuples $(s, a, r, s')$, where $s'$ is the state to which the agent moved after executing action $a$ in state $s$.

The loss function for the approximator is:

$$L(\theta) = \frac{1}{b} \sum_{i=1}^{b} \left( r_i + \gamma \max_{a'} Q(s'_i, a'; \theta^{-}) - Q(s_i, a_i; \theta) \right)^2,$$

where $b$ is the batch size and $\theta^{-}$ are the parameters of the target copy of the network.
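A minimal PyTorch-style sketch of this loss is given below; the tensor layout of `batch` and the network interfaces are our assumptions:

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Sketch of the DQN loss over a mini-batch of transitions.
    `batch` holds tensors s, a, r, s_next, done; both networks map a batch
    of states to per-action Q-values."""
    s, a, r, s_next, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; theta)
    with torch.no_grad():                               # target uses the frozen copy theta^-
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q, target)
```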

In the transition to multi-agent reinforcement learning, one of the options for implementing the learning process is independent execution, in which each agent optimizes its own Q-function over the actions of a single agent. The value decomposition approach differs from independent DQN in how the approximator parameters are updated: the agents use information received from the other agents. In fact, the agents decompose the value function (value decomposition networks, VDN) [Sunehag2017] and aim to maximize the total Q-function $Q_{tot}$, which is the sum of the Q-functions of the individual agents: $Q_{tot} = \sum_{i=1}^{n} Q_i$.
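A minimal sketch of the VDN-style additive decomposition (the tensor shape is our assumption):

```python
import torch

def vdn_q_tot(agent_qs: torch.Tensor) -> torch.Tensor:
    """VDN: the joint value is the sum of per-agent Q-values.
    agent_qs has shape (batch, n_agents), one chosen-action Q-value per agent."""
    return agent_qs.sum(dim=1)
```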

The Mixing Q-Network (QMIX) [Rashid2018] algorithm works similarly to VDN. However, in this case $Q_{tot}$ is computed by a new parameterized function of all the agents' Q-values. More precisely, $Q_{tot}$ is calculated so as to satisfy the condition that it increases monotonically with each $Q_i$:

$$\frac{\partial Q_{tot}}{\partial Q_i} \ge 0, \quad \forall i \in \{1, \dots, n\}.$$

$Q_{tot}$ is parameterized using a so-called mixing Q-network. The weights for this network are generated by hypernetworks [Ha2016]. Each of the hypernetworks consists of a single fully connected layer with an absolute-value activation function, which guarantees non-negative weights in the mixing network. Biases for the mixing network are generated similarly; however, they can be negative. The final bias of the mixing network is generated using a two-layer hypernetwork with ReLU activation.
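A minimal PyTorch-style sketch of such a mixing network with hypernetworks, following the description above (the embedding size, layer shapes, and class name are our assumptions, not the exact architecture used in the paper):

```python
import torch
import torch.nn as nn

class QMixer(nn.Module):
    """Monotonically mixes per-agent Q-values into Q_tot, conditioned on the
    global state: hypernetworks with an absolute-value activation produce
    non-negative mixing weights, while the biases may be negative."""
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        # two-layer hypernetwork with ReLU for the final bias
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.view(b, 1, self.n_agents), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b)     # Q_tot, shape (batch,)
```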

A peculiarity of the mixing network is that the agents' states or observations are not fed into it directly, since $Q_{tot}$ is not obliged to increase monotonically when the state parameters change. The individual functions $Q_i$ receive only the observation $o_i$ as input, which can only partially reflect the general state of the environment. Since the state can contain information useful for training, it must be used when computing $Q_{tot}$, so the state $s$ is fed as the input of the hypernetworks. Thus, $Q_{tot}$ indirectly depends on the state of the environment and combines all the agents' Q-functions. The mixing network architecture is shown in Figure 1.

The mixing network loss function looks similar to the loss function for DQN:

$$L(\theta) = \frac{1}{b} \sum_{i=1}^{b} \left( r_i + \gamma \max_{a'} Q_{tot}(o'_i, a', s'_i; \theta^{-}) - Q_{tot}(o_i, a_i, s_i; \theta) \right)^2.$$

Here $b$ is the batch size, $a'$ is the action to be performed at the next step after receiving the reward $r$, $o'$ is the observation obtained at the next step, and $s'$ is the state obtained at the next step. $\theta^{-}$ are the parameters of a copy of the mixing Q-network created to stabilize the target variable.
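The corresponding computation can be sketched as follows, reusing the `QMixer` sketch above; the batch layout, the per-agent network interface, and all names are our assumptions:

```python
import torch
import torch.nn.functional as F

def qmix_loss(agent_net, mixer, target_agent_net, target_mixer, batch, gamma=0.99):
    """Sketch of the QMIX TD loss. `batch` holds per-agent observations
    obs/obs_next of shape (batch, n_agents, obs_dim), joint actions of shape
    (batch, n_agents), the team reward r, global states s/s_next, and done flags."""
    obs, actions, r, obs_next, s, s_next, done = batch
    # chosen-action Q-value of every agent, then mixed into Q_tot
    qs = agent_net(obs).gather(2, actions.unsqueeze(2)).squeeze(2)
    q_tot = mixer(qs, s)
    with torch.no_grad():                   # frozen target copies stabilize the target value
        qs_next = target_agent_net(obs_next).max(dim=2).values
        target = r + gamma * (1.0 - done) * target_mixer(qs_next, s_next)
    return F.mse_loss(q_tot, target)
```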

Figure 1: a) Mixing network architecture. $W_1$, $b_1$, $W_2$, $b_2$ are the weights and biases of the mixing network; $Q_1, \dots, Q_n$ are the agents' Q-values; $s$ is the environment state; $Q_{tot}$ is the common Q-value. b) Hypernetwork architecture for generating the weight matrices of the mixing Q-network; the hypernetwork consists of a single fully connected layer with an absolute-value activation function. c) Hypernetwork architecture for generating the biases of the mixing Q-network; the hypernetwork consists of a single fully connected layer.

4 Experimental environment for multi-agent pathfinding

The environment is a grid with agents, their goals, and obstacles located on it. Each agent needs to get to its goal while avoiding obstacles and other agents. An example of an environment showing partial observability for a single agent is given in Figure 2. This figure also shows an example of a multi-agent environment.

Figure 2: The left figure shows an example of partial observability in a single-agent environment: gray vertices are free cells along the edges of which the agent can move; a filled red circle indicates the position of the agent; the vertex with a red outline is the goal of this agent; the vertex with a red border on the boundary of the observed area is the projection of the agent's goal. The area that the agent cannot see is shown as transparent. The right figure shows an example of an environment with 8 agents; the projections of the agents' goals and partial observability are not shown for clarity.

The input parameters for generating the environment are:

  • field size,

  • obstacle density,

  • number of agents in the environment,

  • observation radius (each agent observes the cells within this many cells in each direction),

  • the maximum number of steps in the environment before the episode ends,

  • the distance from each agent to its goal (an optional parameter; if it is not set, the distance to the goal is generated randomly for each agent).

The obstacle matrix is filled randomly according to the field size and obstacle density parameters. The positions of agents and their goals are also generated randomly, but with a guarantee of reachability.
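One possible way to implement this generation step, sketched under our own assumptions about the grid representation (1 for an obstacle, 0 for a free cell) and using a BFS reachability check:

```python
import random
from collections import deque

def generate_grid(size, density, rng=random.Random(0)):
    """Randomly fill a size x size grid; 1 is an obstacle, 0 is a free cell."""
    return [[1 if rng.random() < density else 0 for _ in range(size)]
            for _ in range(size)]

def reachable(grid, start, goal):
    """BFS check that `goal` can be reached from `start` over free cells,
    used to guarantee that a sampled agent/goal pair is solvable."""
    n, seen, queue = len(grid), {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```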

The observation space of each agent is a multidimensional matrix over the observed window that includes the following four matrices. Obstacle matrix: 1 encodes an obstacle and 0 encodes its absence; if a cell of the agent's field of view lies outside the environment, it is encoded in the same way as an obstacle. Agents' positions matrix: 1 encodes another agent in the cell and 0 encodes its absence; the value in the center (the agent's own cell) is inversely proportional to the distance to the agent's goal. Other agents' goals matrix: 1 if there is a goal of any other agent in the cell, 0 otherwise. Own goal matrix: if the goal is inside the observation field, the cell where it is located contains 1 and all other cells contain 0. If the goal does not fall into the field of view, it is projected onto the nearest cell of the observation field: the projection cell is selected on the border of the visibility area so that it shares a coordinate along one of the axes with the goal cell, or, if there is no such cell, the nearest corner cell of the visibility area is selected. An example of an agent's observation space is shown in Figure 3.
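A sketch of how such an observation could be assembled; the four-channel layout, the out-of-bounds handling, and the omission of the inverse-distance value in the center cell are simplifications and assumptions of ours:

```python
import numpy as np

def build_observation(grid, agent_positions, goals, idx, radius):
    """Four-channel local observation for agent `idx`: obstacles, other
    agents, other agents' goals, and the own (possibly projected) goal."""
    n, size = len(grid), 2 * radius + 1
    obs = np.zeros((4, size, size), dtype=np.float32)
    r0, c0 = agent_positions[idx]
    for i in range(size):
        for j in range(size):
            r, c = r0 - radius + i, c0 - radius + j
            if not (0 <= r < n and 0 <= c < n):
                obs[0, i, j] = 1.0      # out-of-bounds treated as an obstacle (assumption)
                continue
            obs[0, i, j] = grid[r][c]
            if (r, c) in agent_positions and (r, c) != (r0, c0):
                obs[1, i, j] = 1.0      # another agent occupies this cell
            if (r, c) in goals and goals.index((r, c)) != idx:
                obs[2, i, j] = 1.0      # goal of another agent
    # own goal: clamping onto the window reproduces the border/corner projection rule
    gi = max(-radius, min(radius, goals[idx][0] - r0)) + radius
    gj = max(-radius, min(radius, goals[idx][1] - c0)) + radius
    obs[3, gi, gj] = 1.0
    return obs
```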

Figure 3: An example of an observation matrix for a purple agent. In all observation cells, 1 means that there is an object (obstacle, agent, or goal) in this cell and 0 otherwise. a) Environment state. The agent for which the observation is shown is highlighted; b) Obstacle map. The central cell corresponds to the position of the agent, in this map the objects are obstacles; c) Agents map. In this map, the objects are agents; d) Other agents’ goals map. In this map, the objects are the goals of other agents; e) Goal map. In this map, the object is the self-goal of the agent.

Each agent has 5 available actions: stay in place, or move vertically (up or down) or horizontally (left or right). An agent can move to any free cell that is not occupied by an obstacle or another agent. If an agent moves to the cell containing its own goal, it is removed from the map and the episode is over for it.

An agent receives a positive reward if it follows one of the optimal routes to its goal, a negative reward if it increases its distance to the goal, and a separate reward value if it stays in place.
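The exact reward magnitudes are not reproduced here; the sketch below only illustrates this three-case structure, with placeholder values chosen by us rather than taken from the paper:

```python
def reward(old_dist: int, new_dist: int, moved: bool) -> float:
    """Illustrative reward shaping; the numeric values are placeholders."""
    if not moved:
        return -0.1     # stayed in place (placeholder value)
    if new_dist < old_dist:
        return 0.5      # progress along an optimal route to the goal (placeholder value)
    return -0.5         # moved away from the goal (placeholder value)
```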

5 Experiments

This section compares QMIX with Proximal Policy Optimization (PPO), a single-agent reinforcement learning algorithm [schulman2017proximal]. We chose PPO because it showed better results in the multi-agent setting compared to other modern reinforcement learning algorithms. This algorithm also significantly outperformed other algorithms, including QMIX, in the single-agent case.

The algorithms were trained on environments with fixed settings of the field size, obstacle density, number of agents, observation radius, and episode length limit. As the main network architecture of each of the algorithms, we used an architecture with two hidden fully connected layers, with the ReLU activation function for QMIX and Tanh for PPO. We trained each of the algorithms for 1.5M steps in the environment.

Figure 4: The graphs show separate curves for different environment sizes. For environment sizes 16, 32, and 64, we used 2, 6, and 16 agents, respectively. The left graph shows the success rate of the QMIX algorithm in random environments. The right graph shows the success rate of the QMIX algorithm in complex environments.

The results of training the QMIX algorithm are shown in Figure 4. We periodically evaluated the algorithm on a set of unseen environments; the evaluation set was fixed throughout training. The figure also shows evaluation curves for complex environments. We generated a set of complex environments in which agents need to choose actions cooperatively and avoid deadlocks. Examples of such complex environments are shown in Figure 5. This series of experiments aimed to test the ability of QMIX agents to take actions that are sub-optimal for a greedy policy but optimal for a cooperative policy.

Figure 5: Examples of complex environments where agents need to use a cooperative policy to reach their goals. In all examples, the optimal paths of agents to their goals intersect, and one of them must give way to the other. Vertices that are not visible to agents are shown as transparent.

The results of the evaluation are shown in Table 1 for random environments and in Table 2 for complex environments. As a result of training, QMIX significantly outperforms the PPO algorithm in both series of experiments, which shows the importance of using the mixing network for training.

Agents | Radius | Field size | Goal distance | Density | PPO   | QMIX
2      | 5      | 16         | 5             | 0.3     | 0.539 | 0.738
6      | 5      | 32         | 6             | 0.3     | 0.614 | 0.762
16     | 5      | 64         | 8             | 0.5     | 0.562 | 0.659
Table 1: Comparison of the algorithms on a set of 200 environments (for each parameter set) with randomly generated obstacles. The last two columns show the success rates of PPO and QMIX, respectively. The results are averaged over three runs of each algorithm in each environment. QMIX outperforms PPO due to the use of the mixing Q-function $Q_{tot}$.
Agents | Radius | Field size | Goal distance | Density | PPO   | QMIX
2      | 5      | 16         | 5             | 0.3     | 0.454 | 0.614
6      | 5      | 32         | 6             | 0.3     | 0.541 | 0.66
16     | 5      | 64         | 8             | 0.5     | 0.459 | 0.529
Table 2: Comparison on a set of 70 environments (for each parameter set) with complex obstacles. The last two columns show the success rates of PPO and QMIX, respectively. The results are averaged over ten runs of each algorithm in each environment. QMIX, as in the previous experiment, outperforms PPO.

6 Conclusion

In this work, we considered the problem of multi-agent pathfinding in a partially observable environment. Applying state-of-the-art centralized methods that construct a joint plan for all agents is not possible in this setting. Instead, we rely on reinforcement learning to learn a mapping from an agent's observations to its actions. To learn cooperative behavior, we adopted a mixing Q-network (a neural network approximator) that selects the parameters of a unifying Q-function combining the Q-functions of the individual agents. An experimental environment was developed for running experiments with learning algorithms. In this environment, the efficiency of the proposed method was demonstrated, along with its ability to outperform a state-of-the-art on-policy reinforcement learning algorithm (PPO). It should be noted that the comparison was carried out under a constraint on the number of episodes of interaction between the agent and the environment. If such a sample-efficiency constraint is removed, the on-policy method may outperform the proposed off-policy mixing Q-network algorithm. In future work, we plan to combine the advantages of the better-targeted behavior generated by the on-policy method with the ability of QMIX to take the actions of other agents into account when resolving local conflicts. The model-based reinforcement learning approach seems promising, as it makes it possible to plan and predict the behavior of other agents and objects in the environment [Schrittwieser2021, Gorodetskiy2020]. We also assume that using adaptive task composition for agent training (curriculum learning) will give a significant performance boost for tasks with a large number of agents.

References