Modeling multi-agent interactions is essential for understanding the world. The physical world is governed by (relatively) well-understood multi-agent interactions, including fundamental forces (e.g. gravitational attraction, electrostatic interactions) as well as more macroscopic phenomena (electrical conductors and insulators, astrophysics). The social world is also governed by multi-agent interactions (e.g. psychology and economics), which are often imperfectly understood. Games such as Chess or Go have simple and well-defined rules, but move dynamics are governed by very complex policies. Modeling and inference of multi-agent interaction from observational data is therefore an important step towards machine intelligence.
Neural networks have been remarkably successful at machine perception tasks. These problems usually have temporal and/or spatial structure, which makes them amenable to particular neural architectures: Convolutional and Recurrent Neural Networks (CNNs lecun1989backpropagation and RNNs hochreiter1997long). Multi-agent interactions differ from machine perception in several ways:
The data is no longer sampled on a spatial or temporal grid.
The number of agents changes frequently.
Systems are quite heterogeneous; there is no canonical large network that can be used for fine-tuning.
Multi-agent systems have an obvious factorization (into point agents), whereas signals such as images and speech do not.
To model simple interactions in a physics simulation context, Interaction Networks (INs) were proposed by Battaglia et al. battaglia2016interaction. Interaction Networks model each interaction in the physical interaction graph (e.g. the force between every two gravitating bodies) with a neural network. A global interaction vector is obtained as the additive sum of the vector outputs of all the interactions. The global interaction vector, alongside the object's features, is then used to predict the object's future velocity. It was shown that Interaction Networks can be trained for different numbers of physical agents and generate accurate results for simple physical scenarios in which the nature of the interaction is additive and binary (i.e. pairwise interaction between two agents) and the number of agents is small.
Although Interaction Networks are suitable for the physical domain for which they were introduced, they have significant drawbacks that prevent them from being efficiently extended to general multi-agent interaction scenarios. The network complexity is $O(N^d)$, where $N$ is the number of objects and $d$ is the typical interaction clique size. The fundamental physics interactions simulated by the method have $d = 2$, resulting in a quadratic dependence, and higher-order interactions become completely unmanageable. In Social LSTM alahi2016social, this was remedied by pooling a local neighborhood of interactions; this solution, however, cannot work for scenarios with long-range interactions. Another solution, offered by Battaglia et al. battaglia2016interaction, is to add several fully connected layers modeling the high-order interactions. This approach struggles when the objective is to select one of the agents (e.g. which agent will move next), as it results in a distributed representation and loses the structure of the problem.
In this work we present VAIN (Vertex Attention Interaction Network), a novel multi-agent attentional neural network for predictive modeling. VAIN's attention mechanism helps with modeling the locality of interactions and improves performance by determining which agents will share information. VAIN can be seen as a CommNet sukhbaatar2016learning with a novel attention mechanism, or as a factorized Interaction Network battaglia2016interaction. This will be made more concrete in Sec. 2. We show that VAIN can model high-order interactions with linear complexity in the number of vertices while preserving the structure of the problem; this has lower complexity than IN in cases where there are many fewer vertices than edges (in many cases linear vs. quadratic in the number of agents).
For evaluation we introduce two non-physical tasks which more closely resemble real-world and game-playing multi-agent predictive modeling, as well as a physical Bouncing Balls task. Our non-physical tasks are taken from Chess and Soccer and contain different types of interactions and different data regimes. The interaction graph in these tasks is not known a priori, as is typical in nature.
An informal analysis of our architecture is presented in Sec. 2. Our method is presented in Sec. 3. Descriptions of our experimental evaluation scenarios are presented in Sec. 4. The results are provided in Sec. 5. Conclusions and future work are presented in Sec. 6.
This work is primarily concerned with learning multi-agent interactions with graph structures. The seminal works in graph neural networks were presented by Scarselli et al. scarselli2009graph; gori2005new and Li et al. li2015gated. Another notable iterative graph-like neural algorithm is the Neural GPU kaiser2015neural. Notable works in graph NNs include Spectral Networks bruna2013spectral and the work by Duvenaud et al. duvenaud2015convolutional on fingerprinting chemical molecules.
Two related approaches that learn multi-agent interactions on a graph structure are: Interaction Networks battaglia2016interaction which learn a physical simulation of objects that exhibit binary relations and Communication Networks (CommNets) sukhbaatar2016learning, presented for learning optimal communications between agents. The differences between our approach VAIN and previous approaches INs and CommNets are analyzed in detail in Sec. 2.
Another recent approach is PointNet qi2016pointnet, where every point in a point cloud is embedded by a deep neural net and all embeddings are pooled globally. The resulting descriptor is used for classification and segmentation. Although a related approach, PointNet is focused on 3D point clouds rather than multi-agent systems. A different approach is presented by Social LSTM alahi2016social, which learns social interaction by jointly training multiple interacting LSTMs. The complexity of that approach is quadratic in the number of agents, requiring local pooling that only handles short-range interactions to limit the number of interacting bodies.
The attentional mechanism in VAIN has some connection to Memory Networks weston2014memory; sukhbaatar2015end and Neural Turing Machines graves2014neural. Other works dealing with multi-agent reinforcement learning include usunier2016episodic and peng2017multiagent.
There has been much work on board-game bots (although the approach of modeling board games as interactions in a multi-agent system is new). Approaches include campbell2002deep; lai2015giraffe; david2016deepchess for Chess, tesauro1990neurogammon for Backgammon, and silver2016mastering; tian2015better for Go.
Concurrent work: We found on arXiv two concurrent submissions relevant to this work. Santoro et al. santoro2017simple discovered that an architecture nearly identical to Interaction Networks achieves excellent performance on the CLEVR dataset johnson2016clevr. We leave a comparison on CLEVR for future work. Vaswani et al. vaswani2017attention use an architecture that bears similarity to VAIN to achieve state-of-the-art performance in machine translation. The differences between our work and Vaswani et al.'s are substantial in both application and precise details.
2 Factorizing Multi-Agent Interactions
In this section we give an informal analysis of the multi-agent interaction architectures presented by Interaction Networks battaglia2016interaction, CommNets sukhbaatar2016learning and VAIN.
Interaction Networks model each interaction by a neural network. For simplicity of analysis, let us restrict the interactions to be of 2nd order. Let $\psi_{int}(x_i, x_j)$ be the interaction between agents $i$ and $j$, and $\psi_{self}(x_i)$ be the non-interacting features of agent $i$. The output $o_i$ is given by a function $\theta()$ of the sum of all of the interactions of agent $i$ and of its non-interacting features:

$$o_i = \theta\Big(\sum_{j \neq i} \psi_{int}(x_i, x_j),\; \psi_{self}(x_i)\Big)$$

A single-step evaluation of the output for the entire system requires $O(N^2)$ evaluations of $\psi_{int}$.
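To make the $O(N^2)$ cost concrete, here is a minimal pure-Python sketch of one IN step; `psi_int` and `theta` are hypothetical stand-ins for the trained networks, not the authors' implementation:

```python
import math

CALLS = {"psi": 0}

def psi_int(x_i, x_j):
    # Hypothetical pairwise interaction "network": a fixed nonlinear map
    # standing in for a trained MLP.
    CALLS["psi"] += 1
    return [math.tanh(a - b) for a, b in zip(x_i, x_j)]

def theta(pooled, x_i):
    # Hypothetical output "network": combines the pooled interactions
    # with the agent's own features.
    return [p + s for p, s in zip(pooled, x_i)]

def interaction_net_step(xs):
    """One IN step: requires N * (N - 1) evaluations of psi_int."""
    outputs = []
    for i, x_i in enumerate(xs):
        pooled = [0.0] * len(x_i)
        for j, x_j in enumerate(xs):
            if i == j:
                continue  # skip self-interaction
            e = psi_int(x_i, x_j)
            pooled = [p + v for p, v in zip(pooled, e)]
        outputs.append(theta(pooled, x_i))
    return outputs

agents = [[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]]
outs = interaction_net_step(agents)  # 3 * 2 = 6 pairwise evaluations
```

The pairwise call counter makes the quadratic cost explicit: for 3 agents, `psi_int` runs 6 times.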
An alternative architecture is presented by CommNets, where interactions are not modeled explicitly. Instead, a communication vector $\psi_{com}(x_j)$ is computed for each agent $j$. The output is computed by:

$$o_i = \theta\Big(\sum_{j \neq i} \psi_{com}(x_j),\; \psi_{self}(x_i)\Big)$$

A single-step evaluation of the CommNet architecture requires $O(N)$ evaluations of $\psi_{com}$. A significant drawback of this representation is that it does not model the interactions explicitly, putting the whole burden of modeling on $\theta$. This can often result in weaker performance (as shown in our experiments).
VAIN's architecture preserves the complexity advantages of CommNet while addressing its limitations relative to IN. Instead of requiring a full network evaluation for every interaction pair, it learns a communication vector $e^c_i$ for each agent and, additionally, an attention vector $a_i$. The strength of interaction between agents is modulated by the kernel function $K(a_i, a_j) = e^{-\|a_i - a_j\|^2}$. The interaction is approximated by:

$$\psi_{int}(x_i, x_j) \approx K(a_i, a_j)\, e^c_j$$

The output is given by:

$$o_i = \theta\Big(\sum_{j} K(a_i, a_j)\, e^c_j,\; \psi_{self}(x_i)\Big)$$

In cases where the kernel function is a good approximation for the relative strength of interaction (in some high-dimensional linear space), VAIN presents an efficient linear approximation of IN which preserves CommNet's $O(N)$ complexity.
Although physical interactions are often additive, many other interesting cases (games, social settings, team play) are not. In such cases, the average rather than the sum of the interactions should be used (in battaglia2016interaction only physical scenarios were presented and therefore the sum was always used, whereas in sukhbaatar2016learning only non-physical cases were considered and therefore only averaging was used). In non-additive cases VAIN normalizes the kernel weights with a softmax:

$$w_{i,j} = \frac{e^{-\|a_i - a_j\|^2}}{\sum_{k} e^{-\|a_i - a_k\|^2}}$$
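The softmax-normalized kernel pooling can be sketched in pure Python as follows; the attention vectors `attn` and communication vectors `comm` are assumed to come from trained encoders:

```python
import math

def vain_pool(attn, comm):
    """Kernel-softmax pooling: w_ij proportional to exp(-||a_i - a_j||^2)."""
    n = len(attn)
    pooled = []
    for i in range(n):
        # Squared-distance kernel in attention space, as softmax logits.
        logits = [-sum((u - v) ** 2 for u, v in zip(attn[i], attn[j]))
                  for j in range(n)]
        m = max(logits)  # subtract max for numerical stability
        w = [math.exp(l - m) for l in logits]
        z = sum(w)
        w = [x / z for x in w]  # softmax-normalized attention weights
        dim = len(comm[0])
        pooled.append([sum(w[j] * comm[j][d] for j in range(n))
                       for d in range(dim)])
    return pooled

# Two agents with identical attention vectors share information equally,
# so each pooled vector is the average of the communication vectors.
pooled = vain_pool([[0.0], [0.0]], [[2.0], [4.0]])
```

With identical attention vectors the weights degenerate to uniform averaging, recovering CommNet-style mean pooling as a special case.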
3 Model Architecture
In this section we model the interaction between $N$ agents, whose features we denote by $F_1 \ldots F_N$. The output can either be a prediction for every agent or a system-level prediction (e.g. predicting which agent will act next). Although it is possible to use multiple hops, our presentation here uses only a single hop (multiple hops did not help in our experiments).
Features $F_i$ are extracted for every agent. The features are guided by basic domain knowledge (such as agent type or position).
We use two agent encoding functions: i) a singleton encoder $E_s()$ for single-agent features; ii) a communication encoder $E_c()$ for interaction with other agents. The singleton encoding function is applied to all agent features to yield singleton encodings $e^s_i = E_s(F_i)$.
The communication encoding function is applied to all agent features to yield both a communication encoding and an attention vector, $(e^c_i, a_i) = E_c(F_i)$. The attention vector $a_i$ is used for addressing the agents with whom information exchange is sought. $E_s()$ and $E_c()$ are implemented by fully connected neural networks (from now on, FCNs).
For each agent $i$ we compute the pooled feature $P_i$, the interaction vectors from other agents weighted by attention. We exclude self-interactions by setting the self-interaction weight to $0$:

$$P_i = \sum_{j \neq i} w_{i,j}\, e^c_j, \qquad w_{i,j} = \frac{e^{-\|a_i - a_j\|^2}}{\sum_{k \neq i} e^{-\|a_i - a_k\|^2}}$$
This is in contrast to the average-pooling mechanism used in CommNets, and we show that it yields better results. The motivation is to average only information from relevant agents (e.g. nearby or particularly influential agents). The weights $w_{i,j}$ give a measure of the interaction between agents. Although naively this operation scales quadratically in the number of agents, each pair requires only a kernel evaluation multiplied by the feature dimension rather than a full network evaluation, which is significantly cheaper than the (linear number of) encoder evaluations carried out by the algorithm. In case the number of agents is very large (>1000), the cost can still be mitigated: the softmax operation often yields a sparse matrix, and in such cases the interaction can be modeled by the K nearest neighbors (measured by attention). This calculation is far cheaper than evaluating a neural network $O(N^2)$ times as in IN. In cases where even this cheap operation is too expensive, we recommend defaulting to CommNets, which truly have $O(N)$ complexity.
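The K-nearest-neighbors-by-attention shortcut mentioned above might look like this (a hypothetical helper for illustration, not part of the reference implementation):

```python
def topk_weights(weights, k):
    """Keep the k largest attention weights per agent, zero the rest, renormalize."""
    # Indices of the k strongest interactions for this agent.
    idx = sorted(range(len(weights)), key=lambda j: weights[j], reverse=True)[:k]
    kept = {j: weights[j] for j in idx}
    z = sum(kept.values())
    # Zero out the dropped weights and renormalize the survivors to sum to 1.
    return [kept.get(j, 0.0) / z for j in range(len(weights))]

w = topk_weights([0.1, 0.5, 0.15, 0.25], k=2)
```

Only the two strongest interactions survive, and the pooled feature then needs just `k` weighted additions per agent instead of `N`.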
The pooled feature $P_i$ is concatenated with the agent's singleton encoding to form intermediate features $C_i$:

$$C_i = (e^s_i, P_i)$$

The features $C_i$ are passed through a decoding function $D()$, which is also implemented by an FCN. The result is denoted by $o_i$:

$$o_i = D(C_i)$$

For regression problems, $o_i$ is the per-agent output of VAIN. For classification problems, $o_i$ is a per-agent scalar score, and a softmax over agents yields the selection probabilities.
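Putting the pieces together, a single VAIN hop can be sketched end-to-end. The random `linear` layers below are stand-ins for the trained FCN encoders and decoder, and all layer sizes are illustrative assumptions:

```python
import math
import random

random.seed(0)

def linear(in_dim, out_dim):
    # Hypothetical stand-in for a trained FCN: a random linear map.
    W = [[random.uniform(-0.5, 0.5) for _ in range(in_dim)]
         for _ in range(out_dim)]
    return lambda x: [sum(w * v for w, v in zip(row, x)) for row in W]

def vain_forward(feats, d_enc=4, d_attn=2):
    d_in = len(feats[0])
    E_s = linear(d_in, d_enc)           # singleton encoder
    E_c = linear(d_in, d_enc + d_attn)  # communication encoder -> (e^c_i, a_i)
    D = linear(2 * d_enc, 1)            # decoder -> per-agent scalar
    enc = [E_c(f) for f in feats]
    comm = [e[:d_enc] for e in enc]     # communication vectors e^c_i
    attn = [e[d_enc:] for e in enc]     # attention vectors a_i
    n = len(feats)
    out = []
    for i in range(n):
        # Kernel logits; self-interaction is excluded via -inf (softmax -> 0).
        logits = [-sum((u - v) ** 2 for u, v in zip(attn[i], attn[j]))
                  if j != i else float("-inf") for j in range(n)]
        m = max(logits)
        w = [math.exp(l - m) for l in logits]
        z = sum(w)
        pooled = [sum(w[j] / z * comm[j][d] for j in range(n))
                  for d in range(d_enc)]
        # Concatenate singleton encoding with the pooled feature, then decode.
        out.append(D(E_s(feats[i]) + pooled)[0])
    return out

out = vain_forward([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Note that only $N$ encoder and decoder evaluations are performed; the pairwise work is limited to the cheap kernel in attention space.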
Several advantages of VAIN over Interaction Networks battaglia2016interaction are apparent:
Representational Power: VAIN does not assume that the interaction graph is pre-specified (in fact, the attention weights learn the graph). Pre-specifying the graph structure is advantageous when it is clearly known, e.g. spring systems where locality makes a significant difference. In many multi-agent scenarios the graph structure is not known a priori. Multiple hops can give VAIN the potential to model higher-order interactions than IN, although this was not found to be advantageous in our experiments.
Complexity: As explained in Sec. 2, VAIN features better complexity than INs. The complexity advantage increases with the order of interaction.
4 Experimental Evaluation

We presented VAIN, an efficient attentional model for predictive modeling of multi-agent interactions. In this section we show that our model achieves better results than competing methods while having lower computational complexity.
We perform experiments on tasks from two non-physical multi-agent domains, chess move prediction and soccer player prediction, as well as a physical Bouncing Balls task, to highlight the utility and generality of VAIN.
Chess Piece Prediction
Chess is a board game involving complex multi-agent interactions. There are several properties of chess that make it particularly difficult from a multi-agent perspective:
There are 12 different types of agents with distinct behaviors.
It has a well-defined goal, and professional games exhibit near-optimal policies.
Many of the interactions are non-local and very long ranged.
At any given time there are multiple pieces interacting in a high-order clique (e.g. blockers, multiple defenders and attackers).
In this experiment we do not attempt to create an optimal chess player. Rather, we are given a board position from a professional game, and our task is to identify the piece that will move next (MPP). Although we envisage that deep CNNs will achieve the best performance on this task, our objective here is to use chess as a test-bed for multi-agent interactive system predictors using only simple features for every agent. For recent attempts at building an optimal neural chess player please refer to lai2015giraffe; david2016deepchess. Such positions illustrate the challenges of chess: non-local interactions, a large variety of agents, blockers, hidden and implied threats, and very high-order interactions (e.g. a clique between pawn, rook, queen, bishop, etc.).
There are 12 categories of piece types in chess, where the category is formed by the combination of piece type and color. There are 6 types: Pawn, Rook, Knight, Bishop, Queen and King, and two colors: Black and White. A chess board consists of 64 squares (organized in 8 rows and 8 columns). Every piece is of one category and is situated at a particular board square $(x, y)$. All methods evaluated on this task use the features $(category, x, y)$ for all pieces on the board. The output is the index of the next-moving piece in the input order (so if the input is (12,7,7), (11,6,5), …, an output label of 2 would mean that the piece with features (11,6,5) will move next). There are 32 possible input pieces; in the case that fewer than 32 pieces are present, the missing pieces are given feature values (0, 0, 0).
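The feature and padding scheme described above can be illustrated as follows; the exact category numbering (white 1-6, black 7-12) is an assumption for illustration:

```python
PIECE_TYPES = ["pawn", "rook", "knight", "bishop", "queen", "king"]

def category(piece, color):
    # Hypothetical numbering convention: white pieces 1-6, black pieces 7-12.
    return PIECE_TYPES.index(piece) + 1 + (6 if color == "black" else 0)

def board_features(pieces, max_pieces=32):
    """Each agent is (category, row, column); missing pieces are padded with (0, 0, 0)."""
    feats = [(category(p, c), r, f) for (p, c, r, f) in pieces]
    feats += [(0, 0, 0)] * (max_pieces - len(feats))
    return feats

feats = board_features([("king", "white", 1, 5), ("queen", "black", 7, 4)])
# The MPP label is the 1-based index of the piece that moves next in this list.
```

Padding to a fixed 32 agents keeps the input shape constant while the (0, 0, 0) entries carry no information.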
For training and evaluation of this task we downloaded 10k games from the FICS Games Dataset, an on-line repository of chess games. All the games used are standard games between professionally ranked players. 9k randomly sampled games were used for training, and the remaining 1k games for evaluation. Moves later in the game than move 100 (i.e. 50 Black and 50 White moves) were dropped from the dataset so as not to bias it towards particularly long games. The total number of examples is around 600k.
We use the following methods for evaluation:
Random: random piece selection.
FC: a standard FCN with three hidden layers (node numbers: input 32 * (13 + 16), then 64, 64, 32). The input is the one-hot encoding of the features of each of the 32 pieces; the output is the index of the next-moving piece. This method requires indexing to be learned.
Non-interactive: a per-piece embedding neural network with scalar output. The outputs from all input pieces are fed to a softmax classifier predicting the output label. Note that this method preserves the structure of the problem but does not model high-order interactions.
One-hop: a one-hop network followed by a deep (3-layer) classifier. The classifier predicts the label of the next-moving piece (1-32). Note that the deep classifier removes the structure of the problem; the classifier therefore has to learn to index.
CommNet: a standard CommNet (no attention) sukhbaatar2016learning. The protocol for CommNet is the same as for VAIN.
IN: an Interaction Network followed by a softmax (as for VAIN). Inference for this IN required around 8 times more computation than VAIN and CommNet.
Soccer Player Prediction

Team-play interaction is a promising application area for end-to-end multi-agent modeling, as the rules of sports interaction are quite complex and not easily formulated by hand-coded rules. An additional advantage is that predictive modeling can be self-supervised, so no labeled data is necessary. In team-play situations many agents may be present and interacting at the same time, making the complexity of the method critical for its applicability.
In order to evaluate the performance of VAIN on team-play interactions, we use the Soccer Video and Player Position Dataset (SVPP) pettersen2014soccer. The SVPP dataset contains the parameters of soccer players tracked during two home matches played by Tromsø IL, a Norwegian soccer team. The sensors were positioned on each home team player, and recorded the player’s location, heading direction and movement velocity (as well as other parameters that we did not use in this work). The data was re-sampled by pettersen2014soccer to occur at regular 20 Hz intervals. We further subsampled the data to 2 Hz. We only use sensor data rather than raw-pixels. End-to-end inference from raw-pixel data is left to future work.
The task that we use for evaluation is predicting, from the current state of all players, the position of each player at each time-step during the next 4 seconds (i.e. at $t + 0.5, t + 1.0, \ldots, t + 4.0$ seconds). Note that for this task we use just a single frame rather than several previous frames, and therefore do not use RNN encoders.
We evaluated several methods on this task:
Static: trivial prediction of zero motion.
Constant velocity: linearly extrapolating the agent displacement using the current velocity.
Linear: a linear regressor predicting the agent's velocity using all features: the velocity, but also the agent's heading direction and, most significantly, the agent's current field position.
Non-interactive FCN: a predictive model using all of the above features, with three fully connected layers (with 256, 256 and 16 nodes).
CommNet: a standard CommNet (no attention) sukhbaatar2016learning. The protocol for CommNet is the same as for VAIN.
IN: an Interaction Network battaglia2016interaction. This results in $O(N^2)$ network evaluations.
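For concreteness, the constant-velocity baseline can be sketched as follows (the 0.5 s step follows from the 2 Hz subsampling described above; treat the helper as illustrative):

```python
def extrapolate(position, velocity, horizon=4.0, dt=0.5):
    """Constant-velocity prediction of a player's position over the next `horizon` seconds."""
    steps = int(round(horizon / dt))
    # Positions at t + dt, t + 2*dt, ..., t + horizon.
    return [(position[0] + velocity[0] * dt * k,
             position[1] + velocity[1] * dt * k)
            for k in range(1, steps + 1)]

# A player at (10, 5) m moving at (1, -2) m/s, predicted 4 s ahead at 2 Hz.
preds = extrapolate((10.0, 5.0), (1.0, -2.0))
```

This baseline ignores all other players; the gap between it and the interactive models measures how much the team configuration matters.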
We excluded the second half of the Anzhi match due to large sensor errors for some of the players (occasional 60m position changes in 1-2 seconds).
Bouncing Balls

Following Battaglia et al. battaglia2016interaction, we present a simple physics-based experiment. In this scenario, balls are bouncing inside a 2D square container. There are $N$ identical balls, of constant size and perfectly elastic. The balls are initialized at random positions and with random initial velocities. The balls collide with other balls and with the walls, where the collisions are governed by the laws of elastic collision. The task we evaluate is the prediction of the displacement and change in velocity of each ball in the next time step. We evaluate the prediction accuracy of our method as well as of Interaction Networks battaglia2016interaction and CommNets sukhbaatar2016learning. We found it useful to replace VAIN's attention mechanism with an unnormalized attention function, due to the additive nature of physical forces:

$$w_{i,j} = e^{-\|a_i - a_j\|^2}$$
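A minimal sketch of the simulated dynamics, handling only wall reflections (ball-ball elastic collisions are omitted for brevity, and the unit container size is an assumption):

```python
def step(pos, vel, dt=0.1, size=1.0):
    """Advance one ball one time step per coordinate, reflecting at the container walls."""
    new_pos, new_vel = [], []
    for p, v in zip(pos, vel):
        p = p + v * dt
        if p < 0.0:          # reflect off the lower/left wall
            p, v = -p, -v
        elif p > size:       # reflect off the upper/right wall
            p, v = 2.0 * size - p, -v
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel

# A ball near the right wall bounces back with its x-velocity reversed.
p, v = step([0.95, 0.5], [1.0, 0.0])
```

The prediction targets described above are exactly the per-ball deltas `new_pos - pos` and `new_vel - vel` produced by such a step.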
Soccer: The encoding and decoding functions $E_c()$, $E_s()$ and $D()$ were implemented by fully connected neural networks with two layers, each of 256 hidden units and with ReLU activations. The encoder outputs had 128 units. For IN, each layer was followed by a BatchNorm layer (otherwise the system converged slowly to a worse minimum); for VAIN, no BatchNorm layers were used.
Chess: The encoding and decoding functions were implemented by fully connected neural networks with three layers, each of width 64 and with ReLU activations. They were followed by BatchNorm layers for both IN and VAIN.
Bouncing Balls: The encoding and decoding functions were implemented by FCNs with three layers of 256 hidden units. The encoder outputs had 128 units. No BatchNorm layers were used.
For Soccer, the encoder and decoder architectures for VAIN and IN were the same. For Chess we evaluate INs with encoders 4 times smaller than for VAIN; this still takes 8 times as much computation as used by VAIN. For Bouncing Balls, the computation budget was balanced between VAIN and IN by decreasing the number of hidden units in the encoder for IN by a constant factor.
In all scenarios the attention vector $a_i$ is of dimension 10 and shares features with the encoding vector $e^c_i$. Regression problems were trained with an $L_2$ loss, and classification problems with a cross-entropy loss. All methods were implemented in PyTorch pytorch in a Linux environment. End-to-end optimization was carried out using ADAM kingma2014adam; no weight regularization was used. The learning rate was halved every 10 epochs. Training the chess MPP predictor took several hours on a K80 GPU; other tasks had shorter training times due to smaller datasets.
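The step-decay schedule described above (halving every 10 epochs) can be written as:

```python
def learning_rate(base_lr, epoch, halve_every=10):
    """Step decay: halve the learning rate every `halve_every` epochs."""
    return base_lr * 0.5 ** (epoch // halve_every)
```

For example, a base rate of 1e-3 drops to 5e-4 at epoch 10 and to 2.5e-4 at epoch 20; the base learning rate itself is a hyperparameter not specified here.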
5 Results

Let us first look at the attention maps generated by VAIN for our experimental scenarios. This visualization serves as a tool for understanding the nature of interactions between the agents. Note that VAIN only receives feedback on its future prediction, but never receives explicit supervision on the nature of interaction between the agents.
Bouncing Balls: In Fig. 3 we can observe the attention maps for two different balls in the Bouncing Balls scenario. The position of each ball is represented by a circle. The velocity of each ball is indicated by a line extending from the center of the circle, whose length is proportional to the speed of the ball. For each figure we choose a target ball and paint it blue. The attention strength of each agent with respect to the target is indicated by the shade of its circle: the brighter the circle, the stronger the attention. In the first scenario we observe that the two balls near the target receive attention whereas other balls are suppressed. This shows that the system exploits the sparsity due to locality inherent in this multi-agent system. In the second scenario we observe that the ball on a collision course with the target receives much stronger attention relative to a ball that is much closer to the target but is not likely to collide with it. This indicates that VAIN learns important attention features beyond the simple positional hand-crafted features typically used.
Soccer: A few visualizations of the Soccer scenario can be seen in Fig. 4. The positions of the players are indicated by green circles, apart from a target player (chosen by us), who is indicated by a blue circle. The brightness of each circle is proportional to the strength of attention between that player and the target player. Arrows are proportional to player velocity. We can see in this scenario that the attention to the nearest players (attackers to attackers, midfielders to midfielders) is strongest, but attention is given to all field players. The goalkeeper normally receives no attention (being far away and, in normal situations, not affecting play). This is an example of mean-field rather than sparse attention.
Chess: For the Chess scenario, the attention maps were not easily interpretable. We think this is due to the interactions in Chess being complex and high-order. The main visible trend was stronger attention to important and nearby pieces.
Chess MPP: The results for next-moving-piece prediction can be seen in Table 1. Our method clearly outperforms the competing baselines, illustrating that VAIN is effective at selection-type problems, i.e. selecting one of $N$ agents according to some criterion (in this case, likelihood to move). The non-interactive per-piece baseline performs much better than random selection (+9%) due to its use of the statistics of moves. The interactive methods (the one-hop network, CommNet, IN and VAIN) naturally perform better, as the interactions between pieces are important for deciding the next mover. It is interesting that the simple fully connected baseline performs better than the one-hop network with deep classifier (+3%); we think this is because the deep classifier finds it hard to recover the piece indexes after the average-pooling layer. This shows that one-hop networks followed by fully connected classifiers (such as the original formulation of Interaction Networks) struggle at selection-type problems. Our method performs much better than the one-hop network (+11.5%) due to its per-vertex outputs $o_i$ and the coupling between agents, and significantly better than the fully connected baseline (+8.5%), as it does not have to learn indexing. It outperforms vanilla CommNet by 2.9%, showing the advantage of our attention mechanism, and outperforms INs followed by a per-agent softmax (as in the VAIN formulation) by 1.8%, even though the IN performs around 8 times more computation than VAIN.
Soccer: We evaluated our methods on the SVPP dataset. The prediction errors in Table 2 are broken down by time-step and by train/test dataset split. It can be seen that the non-interactive baselines generally fare poorly on this task, as the general configuration of agents is informative for the motion of agents beyond a simple extrapolation of motion. Examples of patterns that can be picked up include running back to the goal to help the defenders, and running up to the other team's goal area to join an attack. A linear model including all the features performs better than a velocity-only model (as position is very informative). A non-linear per-player model with all features improves on the linear models. The Interaction Network, CommNet and VAIN all significantly outperform the non-interactive methods. VAIN outperformed both CommNet and IN, achieving this with only 4% of the number of encoder evaluations performed by IN. This validates our premise that VAIN's architecture can model object interactions without modeling each interaction explicitly.
Bouncing Balls: The results of our bouncing balls experiments can be seen in Table 3. In this physical scenario VAIN significantly outperformed CommNets and achieved better performance than Interaction Networks for similar computation budgets. In Fig. 5 we see that the difference increases for small computation budgets. The attention mechanism is shown to be critical to the success of the method.
Analysis and Limitations
Our experiments showed that VAIN achieves better performance than other architectures of similar complexity, and equivalent performance to higher-complexity architectures, mainly due to its attention mechanism. There are two ways in which the attention mechanism implicitly encodes the interactions of the system: i) Sparse: if only a few agents significantly interact with agent $i$, the attention mechanism will highlight these agents (finding spatial nearest neighbors is a special case of such attention); in this case CommNets will fail. ii) Mean-field: if a space can be found where the important interactions act in an additive way (e.g. the soccer team-dynamics scenario), attention will find the correct weights for the mean field; in this case CommNets would work, but VAIN can still improve on them.
VAIN is less well suited for cases where both of the following hold: the interactions are not sparse, so that the K most important interactions will not give a good representation; and the interactions are strong and highly non-linear, so that a mean-field approximation is non-trivial. One such scenario is the $N$-body gravitation problem. Interaction Networks are particularly well suited for this scenario, and VAIN's factorization will not yield an advantage.
6 Conclusion and Future Work
We have shown that VAIN, a novel architecture for factorizing interaction graphs, is effective for predictive modeling of multi-agent systems with a linear number of neural network encoder evaluations. We analyzed how our architecture relates to Interaction Networks and CommNets. Examples were shown where our approach learned some of the rules of the multi-agent system. An interesting future direction to pursue is interpreting the rules of the game in symbolic form from VAIN's attention maps $w_{i,j}$. Initial experiments that we performed have shown that some chess rules can be learned (movement of pieces, relative values of pieces), but further research is required.
We thank Rob Fergus for significant contributions to this work. We also thank Gabriel Synnaeve and Arthur Szlam for fruitful comments on the manuscript.