GCNNMatch: Graph Convolutional Neural Networks for Multi-Object Tracking via Sinkhorn Normalization

This paper proposes a novel method for online Multi-Object Tracking (MOT) that uses Graph Convolutional Neural Network (GCNN) based feature extraction and end-to-end feature matching for object association. The graph-based approach incorporates both the appearance and the geometry of objects in past frames, as well as in the current frame, into the feature learning task. This paradigm enables the network to leverage the "context" of object geometry and to model the interactions among the features of multiple objects. Another central innovation of our framework is the use of the Sinkhorn algorithm for end-to-end learning of object associations during model training. The network is trained to predict object associations while taking into account constraints specific to the MOT task. Experimental results demonstrate the efficacy of the proposed approach, which achieves top performance on the MOT16 and MOT17 Challenge problems among state-of-the-art online and supervised approaches.





1 Introduction

Multi-object tracking (MOT) is a widely studied computer vision problem of tracking multiple objects across video frames [5], with several applications including autonomous vehicles, robot navigation, medical imaging, and visual surveillance. One of the major paradigms in MOT is the tracking-by-detection paradigm, where an object detector is first used to extract object locations at each frame separately, followed by a tracker that associates detected objects across frames. The goal of the tracker is to solve the bipartite graph matching problem, where every object instance in a past frame is associated with at most one object instance in the current frame, using pair-wise object affinities. There are two variants of the matching problem considered in MOT: online matching, where objects are associated using only past frames, and offline matching, where information from both past and future frames is used to track a given object. In this work, we focus only on the MOT problem involving online matching.
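As a concrete illustration of the association step, the bipartite matching over pairwise affinities can be solved with the Hungarian algorithm; the sketch below uses SciPy's `linear_sum_assignment`, with a hypothetical affinity threshold for discarding weak matches so unmatched detections can start new tracks (the helper name and threshold are illustrative, not from the paper).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(affinity, min_affinity=0.3):
    """Match each tracklet (row) to at most one detection (column).

    `affinity` holds pairwise similarity scores; pairs whose score
    falls below `min_affinity` are discarded. Both the helper name
    and the threshold are illustrative placeholders.
    """
    # The Hungarian solver minimizes cost, so negate the similarities.
    rows, cols = linear_sum_assignment(-affinity)
    return [(r, c) for r, c in zip(rows, cols) if affinity[r, c] >= min_affinity]

affinity = np.array([[0.9, 0.1, 0.0],
                     [0.2, 0.8, 0.1]])
matches = associate(affinity)
# tracklet 0 -> detection 0, tracklet 1 -> detection 1;
# detection 2 is left unmatched and could spawn a new track
```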

One of the conventional approaches in online matching is to learn appearance similarity functions among pairs of objects across consecutive frames through the use of Siamese Convolutional Neural Network (CNN) architectures during training, e.g., using pairwise loss [18, 17] and triplet loss [35]. However, these approaches treat feature extraction and object association as two isolated tasks, and handle the optimization aspect of object association only during testing, using traditional algorithms such as the Hungarian method [15]; this leads to inferior accuracy. Another limitation is that these methods do not take into account the relative locations of objects during feature learning.

Recently there have been attempts to merge the feature extraction and object association tasks using Graph Neural Networks (GNN) [38, 2, 36, 19], which have achieved state-of-the-art performance on benchmark MOT problems [2]. These approaches take advantage of the graph nature of the problem by using a CNN to learn features and a GNN to associate objects. By embedding appearance and geometric information into the graph structure, these approaches allow object features to be learned while taking into account object interactions in the network. Previous MOT approaches based on GNN [23, 2, 11, 36] have attempted to satisfy bipartite one-to-one matching constraints using loss functions such as cross-entropy loss in an end-to-end architecture. However, as demonstrated in [2], they do not always enforce these constraints accurately, leaving room for improvement in performance. This is especially true for the online matching problem, where GNN based methods are known to show poor performance [11, 19, 22].

Figure 1: Illustration of the main components of our proposed approach. Historic object instances are matched with current frame detections, allowing objects to enter and exit the scene. Appearance and interaction features are used to produce similarity scores and to derive the final association using the Sinkhorn algorithm.

In this paper, we propose a CNN and Graph Convolutional Neural Network (GCNN) based approach for MOT, depicted in Figure 1, to accurately solve the online matching problem subject to constraints specific to the MOT task. In our proposed approach, we model each object as a tracklet (a node), and feasible connections between tracklets from previous frames and new detections at the current frame form the edges of the graph. A CNN extracts appearance features of the tracklets, and a GCNN updates these features through the interaction of the nodes (tracklets) based on their connectivity. Finally, a Sinkhorn-based normalization is applied to enforce the MOT constraints of the bipartite matching problem. Here is a summary of our contributions:

  • We propose an online tracking method based on graph convolutional neural networks that achieves top performance in comparison to existing online and supervised approaches on the MOT16 & 17 benchmarks.

  • In contrast to traditional MOT approaches that learn appearance features of every object separately using Siamese architectures, our proposed approach operates on an arbitrarily large neighborhood of objects, incorporating context information such as location and object sizes using GCNN.

  • While previous GNN based approaches use loss functions to satisfy bipartite matching constraints, we introduce a novel approach of using the Sinkhorn normalization to enforce those constraints, reducing the number of Identity Switches and False Negatives as demonstrated in our empirical results.

  • In contrast to other GNN based approaches for MOT, we use the geometric information not only during graph edge construction but also during affinity computation, thus significantly improving accuracy.

The remainder of the paper is organized as follows. Section 2 describes related work. Section 3 describes our proposed approach. Section 4 describes our evaluation setup and experimental results, while Section 5 provides concluding remarks.

2 Related Work

2.1 Multi-Object Tracking

A majority of previous work in MOT is based on the paradigm of tracking-by-detection [5], which comprises three basic stages. In the first stage of detection, objects are identified at every frame using bounding boxes. In the next stage of feature extraction, feature extraction methods are applied to the detected objects to extract appearance, motion, and other interaction features, which are then used to compute similarity or affinity scores among object pairs. In the final stage of association, an assignment problem is solved to match objects at previous frames with objects at the current frame.

For feature extraction, a number of methods have been introduced for appearance feature extraction, including deep learning methods such as Siamese networks [16, 14, 18], auto-encoders [7, 9], correlation filters [41, 13], feature pyramids [18], and spatial attention [44]. Motion extraction has also been an integral part of tracking, and a number of methods have been developed utilizing Kalman filters [37], optical flow [34], and LSTMs [25], among others.

A number of methods have also been developed for computing pair-wise affinities. Common techniques include the use of metrics such as Intersection over Union and cosine similarity, LSTM variants (e.g., bi-directional [42], bilinear [12], and Siamese [21]), and multi-layer perceptrons. The final task of association is commonly handled using approaches such as the Hungarian algorithm [37], multiple hypothesis tracking [3], dynamic programming [33], and lifted multi-cut [32]. Recent examples of deep learning methods for assignment include reinforcement learning [29] and the use of graphs [22].

Despite significant developments in the field of MOT, there is still a large margin for improving performance, especially in terms of the number of identity switches, a critical aspect of tracking performance. One limitation of the aforementioned approaches is that they perform feature learning without incorporating the geometric context of the features. As demonstrated in some recent approaches [22, 2], incorporating the relative appearance and geometry of objects and allowing them to interact has the potential to create stronger matches and provide more robust associations, thereby reducing identity switches.

2.2 Graph Neural Network based Tracking

In an effort to incorporate object interactions during tracking as well as to combine the steps of feature learning and matching, Graph Neural Networks have recently been introduced for tracking. For example, a GCNN was used to update node features in [23], where the nodes are individual detections at every frame. After the GCNN updates, an adjacency matrix was computed using the cosine similarity of node features in the embedding space, which was then used to assign detections to existing tracklets or create new tracklets. Another approach proposed in [2] uses Message Passing Networks to perform edge-based binary label propagations over the graph of detections. In another work by Jiang et al. [11], a method was proposed to learn both an appearance model using two frames (similar to a Siamese network) and a geometry model using LSTM. The assignment task was solved using a GNN trained using three loss functions, one for binary classification, one for multi-class classification, and another one for birth or death of tracks. In Li et al. [19], the authors propose using two GNNs, one for learning appearance features and another for learning motion features.

While GNN based methods hold a lot of promise for MOT, existing approaches have yet to match the accuracy and robustness of other baselines, especially in the task of online tracking. We posit that one reason for the limited accuracy of GNN based methods is that they satisfy the constraints of bipartite matching only through training losses or dedicated neural networks, while there may be superior approaches for satisfying MOT constraints exactly during tracking and association. For example, as demonstrated in the related problem of point matching [31], the Sinkhorn algorithm is effective in ensuring constraint satisfaction and can be employed both during training and testing, unlike conventional association algorithms such as the Hungarian method, which can only be invoked during testing. In contrast to existing GNN based methods, the use of the Sinkhorn algorithm during association is one of the key innovations of our proposed approach.

Figure 2: Overview of our proposed approach. Given an input set of bounding box images of tracklets and detections (nodes), we first extract appearance features using a CNN, which are used as node features in the GCNN. The appearance features, along with the geometric features g, are then concatenated at node pairs and fed to a fully connected network to compute edge features e. These node and edge features are used by the GCNN to produce interaction features at every node. Using cosine similarity and IoU, a second fully connected network computes the similarity scores S among pairs of tracklets and detections. The Sinkhorn algorithm then normalizes S to match the MOT constraints and produces the association matrix output A. During testing, the Hungarian algorithm is used to convert the values in A to binary using a threshold.

3 Proposed Approach

3.1 Problem Statement

We are given a set of N detections at the current frame, D = {d_1, …, d_N}, and a set of M historic objects (or tracklets), T = {t_1, …, t_M}. We are also given bounding box images to represent the detections and tracklets. Note that while the bounding box images for detections correspond to the current frame, the images for tracklets correspond to the time-points when they were last observed in past frames. Furthermore, apart from the bounding box images, we also have the geometric features of every detection and tracklet, represented as a 4-length vector, g = [x, y, w, h], comprising the bounding box center's horizontal position (x), vertical position (y), and the box's width (w) and height (h).

On the training set, we are given a ground-truth label for the association between detection d_j and tracklet t_i, represented as Y_ij, which can either be 1 (match) or 0 (no match). Note that a tracklet can be associated with at most one detection, and it is possible for new detections to appear as well as for existing tracklets to disappear at any frame. The goal of MOT then is to learn a model that can predict the association between detection d_j and tracklet t_i as A_ij across all time-points. In other words, we want to learn the optimal association matrix A* such that:

A* = argmin_A Σ_i Σ_j ℓ(A_ij, Y_ij), subject to the MOT matching constraints,

where ℓ is a binary classification loss (we use a weighted cross-entropy loss, defined in Section 3.4).

The problem of learning A can also be viewed as a bipartite graph matching problem, where the graph comprises M + N nodes and bipartite edges connecting a tracklet node t_i to a detection node d_j if there exists a match between t_i and d_j in Y. The goal of MOT then is to learn a model to recover the adjacency matrix of the graph, given the image appearance features and geometric features of the nodes as well as the MOT matching constraints.

3.2 Proposed Approach Overview

Figure 2 provides an overview of our proposed approach, which comprises two basic components. First, we extract features for tracklets and detections using a combination of a CNN and a GCNN. In particular, we use the CNN to extract appearance features h given the bounding box images of tracklets and detections. We also leverage the geometric features g at every node, which are concatenated with h at pairs of nodes to extract edge features e using a fully connected neural network (FC-NN). The extracted edge features e, along with the node features h, are then fed to a GCNN to extract interaction features at every node.

In the second component, we use the extracted features at the nodes to compute affinities between every pair of tracklet t_i and detection d_j as follows. We first compute the cosine similarity between the interaction features at t_i and d_j. We then compute the intersection over union (IoU) of the bounding box areas represented by the geometric features at t_i and d_j. The cosine similarity and IoU are then fed to a FC-NN to produce a real-valued score S_ij representing the affinity between t_i and d_j. These affinity scores are then normalized across rows and columns using the Sinkhorn algorithm to satisfy the MOT constraints and produce the final association matrix A. During testing, the Hungarian algorithm is applied to binarize A using a threshold to produce hard assignments between tracklets and detections. In the following, we provide brief descriptions of the two components of our proposed approach.

3.3 Feature Extraction Component

We use the bounding boxes for tracklets and detections, available through publicly available detectors, as the set of inputs for feature extraction. We first obtain the cropped image for every bounding box; these crops are fed into a CNN architecture to extract appearance features of tracklets and detections, available as flat high-dimensional vectors. The conventional approach in MOT is to map such high-dimensional vectors to lower-dimensional embeddings using fully connected neural networks (FC-NN), which are then used for classification, re-identification, and many other tasks. However, by using only a CNN and FC-NNs, this approach does not incorporate the interaction effects between different objects (e.g., detections and tracklets) that are prevalent in MOT. To address this, we instead extract interaction features at tracklets and detections using a GCNN architecture.

The inputs to our GCNN architecture consist of node and edge features, where the nodes comprise tracklets and detections while the edges denote feasible bipartite matches between tracklets and detections. The node features at any node v are simply the appearance features h_v extracted by the CNN. To compute the edge features for a pair of tracklet and detection nodes, we first concatenate the appearance features and geometric features at the pair of nodes and then pass them through a FC-NN to produce e_ij.
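To make the edge-feature construction concrete, the following NumPy sketch concatenates the appearance and geometric features of a node pair and passes them through a single linear layer with ReLU, standing in for the FC-NN described above (the dimensions and random weights are illustrative placeholders; the paper's implementation uses PyTorch).

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_features(h_i, g_i, h_j, g_j, W, b):
    """Concatenate appearance (h) and geometric (g) features of a
    tracklet/detection pair and map them through one linear layer
    with ReLU. W and b are placeholders for learned FC-NN weights."""
    z = np.concatenate([h_i, g_i, h_j, g_j])
    return np.maximum(W @ z + b, 0.0)

app_dim, geo_dim, edge_dim = 8, 4, 6          # toy dimensions
W = rng.normal(size=(edge_dim, 2 * (app_dim + geo_dim)))
b = np.zeros(edge_dim)
e_ij = edge_features(rng.normal(size=app_dim), rng.normal(size=geo_dim),
                     rng.normal(size=app_dim), rng.normal(size=geo_dim), W, b)
# e_ij has shape (edge_dim,) and is non-negative after the ReLU
```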

Our GCNN architecture comprises a number of hidden layers, where at every layer l + 1, the node and edge features produced at layer l are non-linearly transformed using the neighborhood structure of the graph to produce the node and edge features at layer l + 1, namely H^(l+1) and e^(l+1), respectively. Note that at layer 0, H^(0) and e^(0) are the input node and edge features, respectively. To understand the update operations at layer l + 1, let us denote the adjacency matrix of the graph including self-edges at layer l as Â^(l) = A^(l) + I, where I is an identity matrix. Further, let the degree matrix of the adjacency matrix Â^(l) be denoted by D̂^(l). The node features are then updated at layer l + 1 as:

H^(l+1) = ReLU( (D̂^(l))^(-1/2) Â^(l) (D̂^(l))^(-1/2) H^(l) W^(l) ),

where W^(l) are the learnable weights of the GCNN. Once H^(l+1) has been updated, the edge features e_ij^(l+1) for an edge between nodes i and j are updated as:

e_ij^(l+1) = φ^(l)( [h_i^(l+1), h_j^(l+1), e_ij^(l)] ),

where φ^(l) is a FC-NN and [·] denotes concatenation. The node features produced at the final layer are termed the interaction features, H^(L).
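The node update above is the standard symmetric-normalized graph-convolution rule, which can be sketched in NumPy as follows (the toy graph, features, and identity weights are placeholders for illustration):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One symmetric-normalized graph-convolution update,
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-edges
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Two nodes connected by one edge, identity features and weights:
A = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.eye(2)
H1 = gcn_layer(H, A, np.eye(2))
# every entry of H1 equals 0.5 for this toy graph
```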

3.4 Association Component

We use the interaction features extracted by the GCNN along with the geometric features to compute the affinity of a tracklet t_i to be associated with a detection d_j using two simple metrics. First, we compute the cosine similarity between the interaction features at t_i and d_j to capture any interaction effects between the two objects discovered by the GCNN. Second, given the importance of the geometric features of t_i and d_j in determining their association affinity, we further compute the intersection over union (IoU) of the bounding boxes of the two objects. This is different from existing GNN based approaches for MOT, which use the geometric information of objects only during graph edge construction and not during affinity computation, thus making incomplete use of the information available in the geometric features. Note that a higher value of IoU indicates a higher affinity score. We feed the cosine similarity score and the IoU score to another FC-NN that produces the affinity score S_ij.
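For reference, the IoU term used in the affinity computation can be computed directly from the [x, y, w, h] geometric vectors; the sketch below is a straightforward implementation of that metric.

```python
def iou(box_a, box_b):
    """IoU of two boxes given as [x_center, y_center, width, height],
    the geometric-feature format described above."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

iou([0, 0, 2, 2], [0, 0, 2, 2])  # identical boxes -> 1.0
iou([0, 0, 2, 2], [3, 0, 2, 2])  # disjoint boxes -> 0.0
```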

Note that the affinity matrix S is constructed in such a way that each element S_ij represents the assignment score of tracklet t_i to detection d_j. Since detections might not be associated with any tracklet and vice versa (denoting births and deaths of objects), we augment S by adding a slack row and a slack column at the end of the matrix to produce a new S of size (M + 1) × (N + 1). Further, note that the optimal A is subject to the following MOT constraints:

Σ_{i=1}^{M+1} A_ij = 1, for every detection j = 1, …, N, (4)
Σ_{j=1}^{N+1} A_ij = 1, for every tracklet i = 1, …, M, (5)

so that every detection is matched either to exactly one tracklet or to the slack row (a birth), and every tracklet is matched either to exactly one detection or to the slack column (a death). Further, at the last row and column of S, we can further regularize ([28, 31]) using the following MOT constraints:

Σ_{j=1}^{N} A_{M+1,j} ≤ N, (6)
Σ_{i=1}^{M} A_{i,N+1} ≤ M. (7)

We initialize the slack row and column of S using a fixed default value. The conventional approach for satisfying the MOT constraints (Equations 4, 5, 6, and 7) is to make use of specialized loss functions that can only be applied during training. In contrast, we leverage the Sinkhorn algorithm to automatically satisfy our MOT constraints both during training and testing, by iteratively normalizing the rows and columns of S without the need for specialized loss functions. Each element is first transformed using:

S_ij ← exp(λ S_ij),

where λ is a hyper-parameter representing the entropic regularization effect (a larger value of λ generates greater separation in S), after which the rows and columns are alternately normalized. After a fixed number of iterations, the Sinkhorn algorithm produces the final association matrix A, where we drop the last row and column. Apart from satisfying the MOT constraints, an additional advantage of the Sinkhorn algorithm is that it is fully differentiable at every iteration. We can thus feed A directly to the objective function of the end-to-end learning framework of our proposed approach, which involves minimizing the following weighted binary cross-entropy loss:

L = − Σ_{i,j} [ w · Y_ij · log A_ij + (1 − Y_ij) · log(1 − A_ij) ],

where w is a weight hyper-parameter that balances the imbalance between 1's and 0's in the ground-truth labels Y. During testing, A is first binarized using a cut-off threshold and then the Hungarian method is applied over A to perform hard assignments of 0 or 1.
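The Sinkhorn normalization and the weighted cross-entropy loss can be sketched as follows. This simplified version treats the slack row and column like ordinary rows and columns (i.e., it drives the matrix toward doubly stochastic), whereas the constraints on the slack entries above are looser; `lam` and `w` mirror the λ and w hyper-parameters described in the text.

```python
import numpy as np

def sinkhorn(S, lam=5.0, n_iters=8):
    """Exponentiate the augmented affinity matrix and alternately
    normalize its rows and columns (simplified: slack row/column
    are normalized like ordinary ones)."""
    K = np.exp(lam * S)
    for _ in range(n_iters):
        K = K / K.sum(axis=1, keepdims=True)  # rows sum to 1
        K = K / K.sum(axis=0, keepdims=True)  # columns sum to 1
    return K

def weighted_bce(A, Y, w=10.0, eps=1e-9):
    """Weighted binary cross-entropy over the association matrix,
    with w up-weighting the scarce positive (match) labels."""
    return -np.sum(w * Y * np.log(A + eps) + (1 - Y) * np.log(1 - A + eps))

S = np.array([[0.9, 0.1, 0.2],   # 2 tracklets x 2 detections,
              [0.1, 0.8, 0.2],   # plus one slack row and column
              [0.2, 0.2, 0.2]])
A = sinkhorn(S, n_iters=50)
# after normalization, rows and columns are (approximately) valid
# assignment distributions, with the largest entries on the diagonal
```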

4 Experimental Analysis

4.1 Datasets

We evaluate our proposed approach on the publicly available MOT challenge datasets [24], which serve as a benchmark for comparing the MOT performance of state-of-the-art methods using a standardized leader-board. We specifically focus on the MOT16 and MOT17 challenge datasets, which include annotations of detected objects, including pedestrians, in urban environments and have been widely used in the MOT community. MOT16 and MOT17 contain 7 train and 7 test sequences, each containing 525 to 1,050 frames and spanning diverse real-world environments. While both datasets cover the same videos, they differ in their provided detections: MOT16 detections are obtained from DPM [6], while MOT17 provides additional detections from Faster R-CNN [30] and SDP [40]. MOT17 also has more accurate ground-truth annotations for tracking than MOT16. From each of the provided datasets, the train set is split into training and validation sequences by holding out the last 150 frames of each video for validation. After training the model, we apply it on the test set using the online evaluation server [24].

4.2 Evaluation Metrics

We consider standard metrics used in the MOT literature and reported on the MOT challenge leader-boards, including the Multi-Object Tracking Accuracy (MOTA), Identity F1 score (IDF1), Mostly Tracked objects (MT, the ratio of ground-truth trajectories that are correctly tracked for at least 80% of their span), Mostly Lost objects (ML, the ratio of ground-truth trajectories that are correctly tracked for at most 20% of their span), False Positives (FP), False Negatives (FN), ID Switches (ID Sw.), and the runtime in frames per second (Hz) [24].
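As a quick reference, MOTA combines the error counts listed above into a single score; the one-liner below follows the standard CLEAR-MOT definition (the example numbers are a toy illustration, not results from the paper).

```python
def mota(fn, fp, id_sw, num_gt):
    """MOTA = 1 - (FN + FP + IDSW) / total ground-truth objects,
    the standard CLEAR-MOT definition used on the leader-boards."""
    return 1.0 - (fn + fp + id_sw) / num_gt

# toy example: 1000 ground-truth boxes, 50 FN, 30 FP, 5 switches
mota(50, 30, 5, 1000)  # -> 0.915
```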

4.3 Implementation of Proposed Approach


We used DenseNet-121 [10] as our choice of CNN architecture, with all fully connected layers at the end of the network replaced by the GCNN. All activation functions used in the network are ReLU. Also, all FC-NNs used as metric-learner functions in our proposed approach consist of a simple architecture with no hidden layers. The GCNN consists of two hidden layers, and no activation function is used in the output space of the GCNN, as the hidden layers contain sufficient non-linearity. The dimensionality of the appearance features and the interaction features is 1,024 and 128, respectively, and the cropped bounding box images are resized to a fixed size. The slack variable was set at 0.2, the regularization parameter λ was set at 5, and the number of Sinkhorn iterations was set at 8. The weight hyper-parameter w is set to 10, while the binarization threshold is set to 0.2. All code was developed in PyTorch [27] and PyTorch Geometric [8].

Training Setup.

We trained and tested our model on an Intel 2.6 GHz CPU cluster with NVIDIA TITAN RTX GPUs, using the Adam optimizer with a batch size of 12. Also, at each frame during training, we sample a random previous frame going back up to 45 frames, in order to provide more challenging matches. This introduces more cases of occlusion and significant appearance changes, making our algorithm more robust during testing.


Public detections in the MOT16 and MOT17 datasets are noisy and miss many objects. Tracktor [1] is a baseline approach that can partially alleviate the problem of missing and false detections. Similar to [2], we adopt the same processed detections to which Tracktor has been applied. Specifically, all detections with a confidence score below 0.5 are ignored. For the remaining ones, Tracktor propagates a bounding box from the previous frame to the next by placing it in the same position and performing regression using the FRCNN regression head. The remaining detections are pruned using an NMS threshold of 0.8 and used for matching in our algorithm. To further reduce false positives, a pruning approach is followed in which an object needs to appear more than 2 times in the last 15 frames since it was first observed in order to remain an active tracklet. Finally, feasible matches are only created if the candidate objects fall within a pixel distance of 200.
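One plausible reading of the tracklet-pruning rule above can be sketched as follows (the function and parameter names are illustrative, not from the paper's code):

```python
def is_active(appearance_frames, current_frame, min_hits=3, window=15):
    """A tracklet stays active only if it appeared more than 2 times
    (i.e. at least `min_hits`) within the last `window` frames.
    Names and the exact windowing are an assumed interpretation."""
    recent = [f for f in appearance_frames if current_frame - f < window]
    return len(recent) >= min_hits

is_active([1, 2, 3], current_frame=10)  # -> True  (3 recent hits)
is_active([1, 2], current_frame=10)     # -> False (only 2 hits)
is_active([1, 2, 3], current_frame=30)  # -> False (hits outside window)
```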

(a) Video: 03, Fr: 542, Method: Ours
(b) Video: 03, Fr: 570, Method: Ours
(c) Video: 12, Fr: 556, Method: Ours
(d) Video: 12, Fr: 583, Method: Ours
(e) Video: 03, Fr: 542, Method: Tracktor-v2
(f) Video: 03, Fr: 570, Method: Tracktor-v2
(g) Video: 12, Fr: 556, Method: GSM-Tracktor
(h) Video: 12, Fr: 583, Method: GSM-Tracktor
Figure 3: Qualitative analysis on MOT17-test set showcasing the accuracy in predicting object identities after occlusion. Each approach is shown for two frames. Each object has a colored box and an augmented number indicating its identity. In the first row, our method performance is shown for two frames (Fr) of two different videos (03 and 12). In the second row, a comparison is made using the same frames but with Tracktor-v2 [1] and GSM-Tracktor [22]. Images obtained from [26].

4.4 Results

(a) Video: 05, Fr.: 317, Ablation: FC-NN
(b) Video: 05, Fr.: 320, Ablation: FC-NN
(c) Video: 10, Fr.: 25, Ablation: Appear. only
(d) Video: 10, Fr.: 30, Ablation: Appear. only
(e) Video: 05, Fr.: 317, Ablation: GCNN
(f) Video: 05, Fr.: 320, Ablation: GCNN
(g) Video: 10, Fr.: 25, Ablation: Appear. & Geom.
(h) Video: 10, Fr.: 30, Ablation: Appear. & Geom.
Figure 4: Qualitative analysis of performance on the MOT17-train set using different architectures during ablation study. Each object identity is illustrated using the drawn numbers inside each bounding box. In the first two columns, a comparison is performed using the GCNN and FC-NN based architectures. In the second two columns, the effect of the appearance and geometric features is examined.


MOT 2016

Method | MOTA↑ | IDF1↑ | MT↑ | ML↓ | FP↓ | FN↓ | ID Sw.↓ | Hz↑
Ours | 56.9 | 55.9 | 169 | 268 | 3235 | 74784 | 564 | 1.3
GSM-Tracktor [22] | 57.0 | 58.2 | 167 | 262 | 4332 | 73573 | 475 | 7.6
Tracktor-v2 [1] | 56.2 | 54.9 | 157 | 272 | 2394 | 76844 | 617 | 1.6
TrctrD16 [39] | 54.8 | 53.4 | 145 | 281 | 2955 | 78765 | 645 | 1.6
MLT [43] | 52.8 | 62.6 | 160 | 322 | 5362 | 80444 | 299 | 5.9
PV [20] | 50.4 | 50.8 | 113 | 295 | 2600 | 86780 | 1061 | 7.3
GNMOT [19] | 47.7 | 43.2 | 120 | 260 | 9518 | 83875 | 1907 | 2.0

MOT 2017

Method | MOTA↑ | IDF1↑ | MT↑ | ML↓ | FP↓ | FN↓ | ID Sw.↓ | Hz↑
Ours | 57.0 | 56.1 | 548 | 815 | 12283 | 228242 | 1957 | 1.3
GSM-Tracktor [22] | 56.4 | 57.8 | 523 | 813 | 14379 | 230174 | 1485 | 8.7
Tracktor-v2 [1] | 56.3 | 55.1 | 498 | 831 | 8866 | 235449 | 1987 | 1.5
TrctrD17 [39] | 53.7 | 53.8 | 458 | 861 | 11731 | 247447 | 1947 | 4.9
FAMNet [4] | 52.0 | 48.7 | 450 | 787 | 14138 | 253616 | 3072 | 0
GNMOT [19] | 50.2 | 47.0 | 454 | 760 | 29316 | 246200 | 5273 | -
EDA-GNN [11] | 45.5 | 40.5 | 368 | 955 | 25685 | 277663 | 4091 | 39.3

Table 1: Comparison of our proposed approach with state-of-the-art supervised online trackers that use public detections.

4.4.1 Benchmark Evaluation

Table 1 shows a comparison of the performance of our proposed approach with that of top-performing online supervised approaches on the MOT16 and MOT17 leader-boards that use public detections. On the MOT16 dataset, our method produces the highest number of MT objects and surpasses the state-of-the-art baseline Tracktor-v2 by 0.7% MOTA and 1% IDF1. It also achieves better results than other trackers such as TrctrD16 [39], MLT [43], PV [20], and the GNN-based GNMOT [19], whose MOTAs range from 54.8% down to 47.7%, while ours is at 56.9%. This is comparable with the best MOTA (57.0%), achieved by the GSM-Tracktor [22] method.

On the MOT17 dataset, however, which contains more object detections and more accurate ground truth, our method achieves the highest MOTA (57.0%), the highest number of MT objects (548), and the smallest number of FN (228,242). The second-highest MOTA, 56.4% from GSM-Tracktor, is 0.6% lower than ours. In comparison to our baseline Tracktor-v2, our method achieves an increase of 0.7% in MOTA and 1% in IDF1. The remaining trackers, such as TrctrD17 [39] and FAMNet [4], range from 53.7% to 52.0% MOTA, while other GNN-based methods such as GNMOT [19] and EDA-GNN [11] achieve 50.2% and 45.5% MOTA, respectively. Overall, our proposed method scores the highest MOTA with the lowest number of FNs, maintains a low number of ID switches, and achieves the highest number of MT objects.

Figure 3 provides a visual comparison of the results of our proposed approach with two other top-performing trackers, GSM-Tracktor and Tracktor-v2. Two specific cases are shown of an object being tracked before and after occlusion, to demonstrate identity switches. The colored boxes, along with the numbers on the boxes, indicate the identities (IDs) of the objects; boxes with different colors and different numbers have different IDs. In the first two columns of images, a man with a white shirt and grey trousers is occluded for a few frames and then re-appears. In this case, the proposed method is able to recover the identity of the person (Figures 3(a) and 3(b)), while Tracktor-v2 gives the person a new ID (Figures 3(e) and 3(f)). In the second two columns of images, a man with dark clothes is occluded for a few frames. The proposed technique identifies the same person, as shown in Figures 3(c) and 3(d), while GSM-Tracktor identifies him as a new person (Figures 3(g) and 3(h)).

4.4.2 Ablation Studies

To understand the importance of the individual components of our proposed approach, we perform a series of studies using ablations of our complete model. We report the performance of these ablations on all videos in the MOT17 train set instead of the test set. This is standard practice in the MOT literature [19] since performance on the test videos can only be assessed using the online evaluation server that has a limit of 4 attempts.

In the first line of ablation studies, we evaluate the importance of using a GCNN instead of an FC-NN, as is typical in traditional MOT methods. We also evaluate the importance of the Sinkhorn algorithm for satisfying the constraints of bipartite graph matching. It can be seen in Table 2 that applying the GCNN instead of the FC-NN produces an increase of 1.7% in MOTA and 4.1% in IDF1, while reducing ID switches by 749. On the other hand, removing the Sinkhorn normalization reduces MOTA by 0.2% and IDF1 by 1.4%, while increasing ID switches by 40. This demonstrates the value of using the GCNN together with Sinkhorn in our proposed approach.

In a second line of studies, we evaluated the importance of using the IoU metric to capture geometric features during affinity computation, in contrast to using the geometric features only for edge construction in the GCNN, as is the convention in previous GNN based methods for MOT. Table 3 shows the results of this study, where the "appearance only" ablation corresponds to using only the cosine similarity, while "appearance + geometry" corresponds to using both cosine similarity and IoU. It can be seen that using IoU leads to an increase of 4.4% in MOTA and a significant decrease of 5,139 instances in FN. This indicates that ignoring IoU leads to a weaker affinity score. Finally, Table 4 illustrates the effect of the number of layers used in the GCNN. GNNs typically do not require as many layers as CNNs. It is shown that using 2 layers produces higher MOTA and IDF1 than using 1 or 3 layers.




Method               MOTA↑  IDF1↑  MT↑  ML↓  FP↓   FN↓    IDs↓
FC-NN & Sinkhorn     60.5   60.4   193  112  1859  41531  1009
GCNN & No Sinkhorn   62.0   63.1   202  114  1318  41003  300
GCNN & Sinkhorn      62.2   64.5   204  112  1295  40879  260

Table 2: Ablation study on the effect of using GCNN instead of FC-NN and the Sinkhorn algorithm.




Method           MOTA↑  IDF1↑  MT↑  ML↓  FP↓   FN↓    IDs↓
Appear. only     57.8   64.2   177  139  1185  46018  241
Appear. + Geom.  62.2   64.5   204  112  1295  40879  260

Table 3: Ablation study on the effect of using IoU (geometric features) during affinity computation.




Method         MOTA↑  IDF1↑  MT↑  ML↓  FP↓   FN↓    IDs↓
1-layer GCNN   61.5   61.2   190  114  1248  41612  386
2-layer GCNN   62.2   64.5   204  112  1295  40879  260
3-layer GCNN   62.0   63.0   201  112  1285  41035  404

Table 4: Ablation study on the effect of the number of layers used in the GCNN.
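The layered message passing behind Table 4 can be illustrated with a minimal graph-convolution sketch. This is a generic GCN step under simplifying assumptions (binary adjacency from object overlap, row normalization, ReLU), not the paper's exact layer; it shows how stacking two layers lets each object's feature absorb context from its neighbors' neighbors.

```python
import numpy as np

def gcn_layer(H, A, W):
    # One graph-convolution step: add self-loops, row-normalize the
    # adjacency, average neighbor features, then apply weights + ReLU.
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(0.0, D_inv @ A_hat @ H @ W)

# 4 objects with 8-dim appearance features; edges connect objects
# whose geometry overlaps (a chain graph here, for illustration).
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(8, 8))
H2 = gcn_layer(gcn_layer(H, A, W1), A, W2)  # the 2-layer setting
```

With one layer each node sees only direct neighbors; with two, information propagates over 2-hop paths, which matches the observation that a shallow 2-layer GCNN already captures the needed interaction context.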

Figure 4 provides a visual analysis of some ablation studies. The first two columns of images in Figure 4 compare the effect of using GCNN instead of FC-NN in our proposed approach. As the two people on the right become more occluded, an identity switch occurs between ID:28 and ID:21 in Figures 4(a) and 4(b) when using the FC-NN. In contrast, Figures 4(e) and 4(f) show that by using the GCNN to capture interaction features, we obtain correct IDs despite the overlap of the two boxes. In the next two columns, only appearance is used in Figures 4(c) and 4(d), while both appearance and geometry are used in Figures 4(g) and 4(h). It is clear that under blurry and low-brightness conditions, a tracker using only appearance features for affinity computation is susceptible to ID switches.

5 Conclusions

In this paper, we have developed a novel method for online data association in Multi-Object Tracking. We have shown that applying Graph Convolutional Neural Networks on top of convolutional features can achieve state-of-the-art tracking accuracy. A key innovation of our approach is the use of a differentiable method, the Sinkhorn algorithm, to guide the association in an end-to-end learning fashion. Experimental results demonstrate top performance of our approach on the MOT16 & 17 benchmarks. The proposed framework opens avenues for further research on the use of Graph Neural Networks for feature extraction as well as on incorporating association into the learning pipeline. Future work could involve summarizing the historic appearance of each tracklet for more accurate long-term association.


  • [1] P. Bergmann, T. Meinhardt, and L. Leal-Taixe (2019) Tracking without bells and whistles. In Proceedings of the IEEE international conference on computer vision, pp. 941–951. Cited by: Figure 3, §4.3, Table 1.
  • [2] G. Brasó and L. Leal-Taixé (2019) Learning a neural solver for multiple object tracking. arXiv preprint arXiv:1912.07515. Cited by: §1, §2.1, §2.2, §4.3.
  • [3] J. Chen, H. Sheng, Y. Zhang, and Z. Xiong (2017) Enhancing detection model for multiple hypothesis tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 18–27. Cited by: §2.1.
  • [4] P. Chu and H. Ling (2019) Famnet: joint learning of feature, affinity and multi-dimensional assignment for online multiple object tracking. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6172–6181. Cited by: §4.4.1, Table 1.
  • [5] G. Ciaparrone, F. L. Sánchez, S. Tabik, L. Troiano, R. Tagliaferri, and F. Herrera (2020) Deep learning in video multi-object tracking: a survey. Neurocomputing 381, pp. 61–88. Cited by: §1, §2.1.
  • [6] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan (2009) Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence 32 (9), pp. 1627–1645. Cited by: §4.1.
  • [7] H. Feng, X. Li, P. Liu, and N. Zhou (2017) Using stacked auto-encoder to get feature with continuity and distinguishability in multi-object tracking. In International Conference on Image and Graphics, pp. 351–361. Cited by: §2.1.
  • [8] M. Fey and J. E. Lenssen (2019) Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, Cited by: §4.3.
  • [9] K. Ho, J. Keuper, and M. Keuper (2020) Unsupervised multiple person tracking using autoencoder-based lifted multicuts. arXiv preprint arXiv:2002.01192. Cited by: §2.1.
  • [10] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §4.3.
  • [11] X. Jiang, P. Li, Y. Li, and X. Zhen (2019) Graph neural based end-to-end data association framework for online multiple-object tracking. arXiv preprint arXiv:1907.05315. Cited by: §1, §2.2, §4.4.1, Table 1.
  • [12] C. Kim, F. Li, and J. M. Rehg (2018) Multi-object tracking with neural gating using bilinear lstm. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 200–215. Cited by: §2.1.
  • [13] K. Kim, J. Kwon, and K. Cho (2017) Multi-object tracker using kernelized correlation filter based on appearance and motion model. In 2017 19th International Conference on Advanced Communication Technology (ICACT), pp. 761–764. Cited by: §2.1.
  • [14] M. Kim, S. Alletto, and L. Rigazio (2016) Similarity mapping with enhanced siamese network for multi-object tracking. arXiv preprint arXiv:1609.09156. Cited by: §2.1.
  • [15] H. W. Kuhn (1955) The hungarian method for the assignment problem. Naval research logistics quarterly 2 (1-2), pp. 83–97. Cited by: §1.
  • [16] B. Lavi, M. F. Serj, and I. Ullah (2018) Survey on deep learning techniques for person re-identification task. arXiv preprint arXiv:1807.05284. Cited by: §2.1.
  • [17] L. Leal-Taixé, C. Canton-Ferrer, and K. Schindler (2016) Learning by tracking: siamese cnn for robust target association. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 33–40. Cited by: §1.
  • [18] S. Lee and E. Kim (2018) Multiple object tracking via feature pyramid siamese networks. IEEE Access 7, pp. 8181–8194. Cited by: §1, §2.1.
  • [19] J. Li, X. Gao, and T. Jiang (2020) Graph networks for multiple object tracking. In The IEEE Winter Conference on Applications of Computer Vision, pp. 719–728. Cited by: §1, §2.2, §4.4.1, §4.4.1, §4.4.2, Table 1.
  • [20] X. Li, Y. Liu, K. Wang, Y. Yan, and F. Wang (2019) Multi-target tracking with trajectory prediction and re-identification. In 2019 Chinese Automation Congress (CAC), pp. 5028–5033. Cited by: §4.4.1, Table 1.
  • [21] Y. Liang and Y. Zhou (2018) Lstm multiple object tracker combining multiple cues. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 2351–2355. Cited by: §2.1.
  • [22] Q. Liu, Q. Chu, B. Liu, and N. Yu GSM: graph similarity model for multi-object tracking. Cited by: §1, §2.1, §2.1, Figure 3, §4.4.1, Table 1.
  • [23] C. Ma, Y. Li, F. Yang, Z. Zhang, Y. Zhuang, H. Jia, and X. Xie (2019-06) Deep association: end-to-end graph-based learning for multiple object tracking with conv-graph neural network. pp. 253–261. External Links: Document Cited by: §1, §2.2.
  • [24] A. Milan, L. Leal-Taixé, I. Reid, S. Roth, and K. Schindler (2016) MOT16: a benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831. Cited by: §4.1, §4.2.
  • [25] A. Milan, S. H. Rezatofighi, A. Dick, I. Reid, and K. Schindler (2017) Online multi-target tracking using recurrent neural networks. In Thirty-First AAAI Conference on Artificial Intelligence. Cited by: §2.1.
  • [26] Multiple object tracking benchmark website. Note: https://motchallenge.net Cited by: Figure 3.
  • [27] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8024–8035. External Links: Link Cited by: §4.3.
  • [28] G. Peyré, M. Cuturi, et al. (2019) Computational optimal transport: with applications to data science. Foundations and Trends in Machine Learning 11 (5-6), pp. 355–607. Cited by: §3.4.
  • [29] L. Ren, J. Lu, Z. Wang, Q. Tian, and J. Zhou (2018) Collaborative deep reinforcement learning for multi-object tracking. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 586–602. Cited by: §2.1.
  • [30] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §4.1.
  • [31] P. Sarlin, D. DeTone, T. Malisiewicz, and A. Rabinovich (2020) Superglue: learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4938–4947. Cited by: §2.2, §3.4.
  • [32] S. Tang, M. Andriluka, B. Andres, and B. Schiele (2017) Multiple people tracking by lifted multicut and person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3539–3548. Cited by: §2.1.
  • [33] M. Ullah and F. Alaya Cheikh (2018) A directed sparse graphical model for multi-target tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1816–1823. Cited by: §2.1.
  • [34] L. Wang, L. Xu, M. Y. Kim, L. Rigazico, and M. Yang (2017) Online multiple object tracking via flow and convolutional features. In 2017 IEEE International Conference on Image Processing (ICIP), pp. 3630–3634. Cited by: §2.1.
  • [35] Z. Wang, L. Zheng, Y. Liu, and S. Wang (2019) Towards real-time multi-object tracking. arXiv preprint arXiv:1909.12605. Cited by: §1.
  • [36] X. Weng, Y. Wang, Y. Man, and K. M. Kitani (2020) GNN3DMOT: graph neural network for 3d multi-object tracking with 2d-3d multi-feature learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6499–6508. Cited by: §1.
  • [37] N. Wojke, A. Bewley, and D. Paulus (2017) Simple online and realtime tracking with a deep association metric. In 2017 IEEE international conference on image processing (ICIP), pp. 3645–3649. Cited by: §2.1.
  • [38] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip (2020) A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems. Cited by: §1.
  • [39] Y. Xu, A. Osep, Y. Ban, R. Horaud, L. Leal-Taixé, and X. Alameda-Pineda (2020) How to train your deep multi-object tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6787–6796. Cited by: §4.4.1, §4.4.1, Table 1.
  • [40] F. Yang, W. Choi, and Y. Lin (2016) Exploit all the layers: fast and accurate cnn object detector with scale dependent pooling and cascaded rejection classifiers. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2129–2137. Cited by: §4.1.
  • [41] T. Yang, C. Cappelle, Y. Ruichek, and M. El Bagdouri (2019) Multi-object tracking with discriminant correlation filter based deep learning tracker. Integrated Computer-Aided Engineering 26 (3), pp. 273–284. Cited by: §2.1.
  • [42] K. Yoon, D. Y. Kim, Y. Yoon, and M. Jeon (2019) Data association for multi-object tracking via deep neural networks. Sensors 19 (3), pp. 559. Cited by: §2.1.
  • [43] Y. Zhang, H. Sheng, Y. Wu, S. Wang, W. Ke, and Z. Xiong (2020) Multiplex labeling graph for near online tracking in crowded scenes. IEEE Internet of Things Journal. Cited by: §4.4.1, Table 1.
  • [44] J. Zhu, H. Yang, N. Liu, M. Kim, W. Zhang, and M. Yang (2018) Online multi-object tracking with dual matching attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 366–382. Cited by: §2.1.