Multi-object tracking (MOT) is a widely studied computer vision problem of tracking multiple objects across video frames, with several applications including autonomous vehicles, robot navigation, medical imaging, and visual surveillance. One of the major paradigms in MOT is tracking-by-detection, where an object detector first extracts object locations at each frame separately, followed by a tracker that associates the detected objects across frames. The goal of the tracker is to solve a bipartite graph matching problem, where every object instance in a past frame is associated with at most one object instance in the current frame, using pair-wise object affinities. There are two variants of the matching problem considered in MOT: online matching, where objects are associated using only past frames, and offline matching, where information from both past and future frames is used to track a given object. In this work, we focus only on the MOT problem involving online matching.
One of the conventional approaches in online matching is to learn appearance similarity functions among pairs of objects across consecutive frames through the use of Siamese Convolutional Neural Network (CNN) architectures during training, e.g., using pairwise loss [18, 17] and triplet loss. However, these approaches treat feature extraction and object association as two isolated tasks: they only deal with the optimization aspect of object association during testing, using traditional algorithms such as the Hungarian algorithm, which leads to inferior accuracy. Another limitation is that these methods do not take into account the relative locations of objects during feature learning.
Recently there have been attempts to merge the feature extraction and object association tasks using Graph Neural Networks (GNN) [38, 2, 36, 19], which have achieved state-of-the-art performance on benchmark MOT problems. These approaches take advantage of the graph nature of the problem by using a CNN to learn features and a GNN to associate objects. By embedding appearance and geometric information into the graph structure, these approaches allow object features to be learned while taking into account object interactions in the network. Previous MOT approaches based on GNN [23, 2, 11, 36] have attempted to satisfy bipartite one-to-one matching constraints using loss functions such as the cross-entropy loss in an end-to-end architecture. However, these loss functions do not always enforce the constraints accurately, leaving room for improvement in performance. This is especially true for the online matching problem, where GNN based methods are known to show poor performance [11, 19, 22].
In this paper, we propose a CNN and Graph Convolutional Neural Network (GCNN) based approach for MOT, depicted in Figure 1, to accurately solve the online matching problem subject to constraints specific to the MOT task. In our proposed approach, we model each object as a tracklet, and feasible connections between tracklets from previous frames and new detections at the current frame form the edges of the graph. A CNN extracts appearance features of the tracklets, and a GCNN updates these features through the interaction of the nodes (tracklets) based on their connectivity. Finally, a Sinkhorn-based normalization is applied to enforce the MOT constraints of the bipartite matching problem. Here is a summary of our contributions:
We propose an online tracking method based on graph convolutional neural networks that achieves top performance in comparison to existing online and supervised approaches on the MOT16 & 17 benchmarks.
In contrast to traditional MOT approaches that learn appearance features of every object separately using Siamese architectures, our proposed approach operates on an arbitrarily large neighborhood of objects, incorporating context information such as location and object sizes using GCNN.
While previous GNN based approaches use loss functions to satisfy bipartite matching constraints, we introduce a novel approach of using the Sinkhorn normalization to enforce those constraints, reducing the number of Identity Switches and False Negatives as demonstrated in our empirical results.
In contrast to other GNN based approaches for MOT, we use the geometric information not only during graph edge construction but also during affinity computation, thus significantly improving accuracy.
2 Related Work
2.1 Multi-Object Tracking
A majority of previous work in MOT is based on the paradigm of tracking-by-detection, which comprises three basic stages. In the first stage of detection, objects are identified at every frame using bounding boxes. In the next stage of feature extraction, feature extraction methods are applied to the detected objects to extract appearance, motion, and other interaction features, which are then used to compute similarity or affinity scores among object pairs. In the final stage of association, an assignment problem is solved to match objects at previous frames with objects at the current frame.
For feature extraction, a number of methods have been introduced for appearance feature extraction, including deep learning methods such as Siamese networks [16, 14, 18], auto-encoders [7, 9], correlation filters [41, 13], feature pyramids, and spatial attention. Motion extraction has also been an integral part of tracking, and a number of methods have been developed utilizing Kalman filters, optical flow, and LSTMs, among others. A number of methods have also been developed for computing pair-wise affinities. Common techniques include the use of metrics such as Intersection over Union and cosine similarity, LSTM variants (e.g., bi-directional, bilinear, and Siamese), and multi-layer perceptrons. The final task of association is commonly handled using approaches such as the Hungarian algorithm, multiple hypothesis tracking, dynamic programming, and lifted multi-cut. Recent examples of methods for assignment using deep learning include reinforcement learning and the use of graphs.
Despite significant developments in the field of MOT, there is still a large margin for improving performance, especially in terms of the number of identity switches, a critical aspect of tracking performance. One of the limitations of the aforementioned approaches is that they perform feature learning without incorporating the geometric context of the features. As demonstrated in some recent approaches [22, 2], incorporating the relative appearance and geometry of objects and allowing them to interact has the potential to create stronger matches and provide more robust associations, thereby reducing identity switches.
2.2 Graph Neural Network based Tracking
In an effort to incorporate object interactions during tracking, as well as to combine the steps of feature learning and matching, Graph Neural Networks have recently been introduced for tracking. For example, a GCNN was used to update node features where the nodes are individual detections at every frame. After the GCNN updates, an adjacency matrix was computed using the cosine similarity of node features in the embedding space, which was then used to assign detections to existing tracklets or create new tracklets. Another approach uses Message Passing Networks to perform edge-based binary label propagation over the graph of detections. In another work by Jiang et al., a method was proposed to learn both an appearance model using two frames (similar to a Siamese network) and a geometry model using LSTM. The assignment task was solved using a GNN trained with three loss functions: one for binary classification, one for multi-class classification, and one for the birth or death of tracks. In Li et al., the authors propose using two GNNs, one for learning appearance features and another for learning motion features.
While GNN based methods hold a lot of promise for MOT, existing approaches have yet to become as accurate and robust as other baselines, especially in the task of online tracking. We posit that one of the reasons for the limited accuracy of GNN based methods is that they satisfy the constraints of bipartite matching only through training losses or dedicated neural networks, while there may be superior approaches for satisfying MOT constraints exactly during tracking and association. For example, as demonstrated in the related problem of point matching, the Sinkhorn algorithm is effective in ensuring constraint satisfaction and can be employed both during training and testing, in contrast to conventional association algorithms such as the Hungarian method that can only be invoked during testing. In contrast to existing GNN based methods, the use of the Sinkhorn algorithm during association is one of the key innovations of our proposed approach.
3 Proposed Approach
3.1 Problem Statement
We are given a set of detections at the current frame and a set of historic objects (or tracklets). We are also given bounding box images representing the detections and tracklets. Note that while the bounding box images for detections correspond to the current frame, the images for tracklets correspond to the time-points at which they were last observed in past frames. Furthermore, apart from the bounding box images, we also have information about the geometric features of every detection and tracklet, represented as a 4-length vector comprising the bounding box center's horizontal position (x), vertical position (y), and the box's width (w) and height (h).
On the training set, we are given ground-truth labels for the association between every detection and tracklet, each of which can either be 1 (match) or 0 (no match). Note that a tracklet can be associated with at most one detection, and it is possible for new detections to appear as well as for existing tracklets to disappear at any frame. The goal of MOT then is to learn a model that can predict the association between detections and tracklets across all time-points. In other words, we want to learn the optimal association matrix such that every entry is binary and every row and every column sums to at most one.
The problem of learning the association matrix can also be viewed as a bipartite graph matching problem, where the graph comprises tracklet and detection nodes, with a bipartite edge connecting a tracklet node to a detection node whenever there exists a match between the two. The goal of MOT then is to learn a model to recover the adjacency matrix of this graph, given the image appearance features and geometric features of the nodes as well as the MOT matching constraints.
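As a concrete illustration, the one-to-one matching constraints described above can be checked mechanically. The following is a minimal sketch (the function and variable names are illustrative, not the paper's notation):

```python
def is_valid_association(A):
    """Check MOT bipartite matching constraints on a binary matrix:
    every tracklet (row) matches at most one detection (column),
    and every detection (column) matches at most one tracklet (row)."""
    binary = all(v in (0, 1) for row in A for v in row)
    rows_ok = all(sum(row) <= 1 for row in A)
    cols_ok = all(sum(col) <= 1 for col in zip(*A))
    return binary and rows_ok and cols_ok

# A tracklet may also go unmatched (all-zero row), e.g. when it leaves the scene,
# and a detection with an all-zero column corresponds to a newly appearing object.
valid = [[1, 0, 0],
         [0, 0, 1]]    # two tracklets, three detections; one detection is new
invalid = [[1, 1, 0],
           [0, 0, 1]]  # first tracklet matched to two detections
```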
3.2 Proposed Approach Overview
Figure 2 provides an overview of our proposed approach, which comprises two basic components. First, we extract features for tracklets and detections using a combination of a CNN and a GCNN. In particular, we use the CNN to extract appearance features given the bounding box images of tracklets and detections. We also leverage the geometric features at every node, which are concatenated with the appearance features at pairs of nodes to extract edge features using a fully connected neural network (FC-NN). The extracted edge features, along with the node features, are then fed to a GCNN to extract interaction features at every node.
In the second component, we use the extracted features at the nodes to compute affinities between every pair of tracklet and detection nodes as follows. We first compute the cosine similarity between the interaction features of the pair. We then compute the intersection over union (IoU) of the bounding box areas represented by their geometric features. The cosine similarity and IoU are then fed to an FC-NN to produce a real-valued score representing the affinity between the tracklet and the detection. These affinity scores are then normalized across rows and columns using the Sinkhorn algorithm to satisfy the MOT constraints and produce the final association matrix. During testing, the Hungarian algorithm is applied to binarize this matrix using a threshold to produce hard assignments between tracklets and detections. In the following, we provide brief descriptions of the two components of our proposed approach.
3.3 Feature Extraction Component
We use the bounding boxes for tracklets and detections available through publicly available detectors as the set of inputs for feature extraction. We first obtain the cropped image for every bounding box, which is fed into a CNN architecture to extract appearance features of tracklets and detections, available as flat high-dimensional vectors. The conventional approach in MOT is to map such high-dimensional vectors to lower-dimensional embeddings using fully connected neural networks (FC-NN), which are then used for classification, re-identification, and many other tasks. However, by only using a CNN and FC-NNs, this approach does not incorporate the interaction effects between different objects (e.g., detections and tracklets) that are prevalent in MOT. To address this, we instead extract interaction features at tracklets and detections using a GCNN architecture in place of FC-NNs.
The inputs to our GCNN architecture consist of node and edge features, where the nodes comprise tracklets and detections while the edges denote feasible bipartite matches between tracklets and detections. The node features at any node are simply the appearance features of that node. To compute the edge features for a pair of tracklet and detection nodes, we first concatenate the appearance features and geometric features of the pair of nodes and then feed them to an FC-NN to produce the edge features.
Our GCNN architecture comprises a number of hidden layers, where at every layer the node and edge features produced at the previous layer are non-linearly transformed using the neighborhood structure of the graph to produce updated node and edge features. At the input layer, the appearance-based node features and the FC-NN-based edge features described above are used. To understand the update operations at a given layer, consider the adjacency matrix of the graph augmented with self-edges, obtained by adding an identity matrix to the graph's adjacency matrix, along with the corresponding degree matrix. The node features are then updated at every layer by aggregating the features of neighboring nodes, normalized by the degree matrix and transformed through the learnable weights of the GCNN followed by a non-linear activation. Once the node features have been updated, the edge features for an edge between two nodes are updated by feeding the updated features of the two endpoint nodes through an FC-NN. The node features produced at the final layer are termed the interaction features.
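The layer-wise node update described above can be sketched as follows. This is a simplified sketch using mean-neighborhood aggregation with self-loops and a ReLU activation; the exact normalization and weight shapes in the implementation may differ:

```python
def gcn_layer(H, A, W):
    """One graph-convolution layer sketch.
    H: node features (n x d), A: adjacency matrix (n x n, 0/1 entries),
    W: learnable weights (d x d_out). Returns updated node features."""
    n = len(A)
    # Add self-edges: A_hat = A + I
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # Node degrees of the self-augmented adjacency matrix
    deg = [sum(row) for row in A_hat]
    # Aggregate neighbor features, normalized by degree: (D^-1 A_hat) H
    agg = [[sum(A_hat[i][k] / deg[i] * H[k][j] for k in range(n))
            for j in range(len(H[0]))] for i in range(n)]
    # Linear transform with W, followed by ReLU
    out = [[max(0.0, sum(agg[i][k] * W[k][j] for k in range(len(W))))
            for j in range(len(W[0]))] for i in range(n)]
    return out
```

Stacking two such layers (as in our implementation) lets each tracklet's features depend on detections two hops away in the bipartite graph.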
3.4 Association Component
We use the interaction features extracted by the GCNN along with the geometric features to compute the affinity of a tracklet to be associated with a detection using two simple metrics. First, we compute the cosine similarity between the interaction features of the tracklet and the detection to capture any interaction effects between the two objects discovered by the GCNN. Second, given the importance of the geometric features in determining the association affinity, we further compute the intersection over union (IoU) of the bounding boxes of the two objects. This is different from existing GNN based approaches for MOT that only use the geometric information of objects during graph edge construction but not during affinity computation, thus making incomplete use of the information available in the geometric features. Note that a higher value of IoU indicates a higher affinity score. We feed the cosine similarity score and the IoU score to another FC-NN that produces the affinity score.
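The two inputs to the affinity FC-NN can be computed as follows (a minimal sketch; boxes use the center-based (x, y, w, h) geometric features, and the FC-NN that combines the two scores is omitted):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (cx, cy, w, h)."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```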
Note that the affinity matrix S is constructed such that each element represents the assignment score of a tracklet to a detection. Since detections might not be associated with any tracklet and vice versa (denoting births and deaths of objects), we augment S by adding a slack row and a slack column at the end of the matrix. Further, note that the optimal association matrix is subject to MOT constraints requiring that each tracklet be matched to at most one detection and each detection to at most one tracklet, with unmatched objects absorbed by the slack row and column.
We initialize the slack entries using a default value. The conventional approach for satisfying MOT constraints (Equations 4, 5, 6, and 7) is to make use of specialized loss functions that can only be applied during training. In contrast, we leverage the Sinkhorn algorithm to automatically satisfy our MOT constraints both during training and testing, by iteratively normalizing the rows and columns of the augmented affinity matrix without the need for specialized loss functions. Each element is first exponentiated after scaling by a hyper-parameter representing the entropic regularization effect (a larger value generates greater separation in the resulting matrix), and the rows and columns are then alternately normalized. After a fixed number of iterations, the Sinkhorn algorithm produces the final association matrix, from which we drop the slack row and column. Apart from satisfying the MOT constraints, an additional advantage of the Sinkhorn algorithm is that it is fully differentiable at every iteration. We can thus feed the association matrix directly to the objective function of our end-to-end learning framework, which involves minimizing a weighted binary cross-entropy loss over the predicted and ground-truth associations.
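The iterative normalization just described can be sketched as follows. This is a simplified sketch on a square score matrix; in the full pipeline it is applied to the slack-augmented affinity matrix, and the default values of the regularization and iteration count here are illustrative:

```python
import math

def sinkhorn(S, lam=5.0, n_iters=8):
    """Sinkhorn normalization sketch: exponentiate the scaled scores, then
    alternately normalize rows and columns toward a doubly stochastic matrix.
    lam controls the entropic regularization (larger lam -> sharper matrix)."""
    K = [[math.exp(lam * s) for s in row] for row in S]
    for _ in range(n_iters):
        # Row normalization: each row sums to 1
        K = [[v / sum(row) for v in row] for row in K]
        # Column normalization: each column sums to 1
        col_sums = [sum(col) for col in zip(*K)]
        K = [[v / col_sums[j] for j, v in enumerate(row)] for row in K]
    return K
```

Because every step is a differentiable elementwise operation, gradients flow through the normalization during training, which is the property exploited by the end-to-end loss.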
In this loss, a weight hyper-parameter balances the imbalance between 1's and 0's in the ground-truth labels. During testing, the association matrix is first binarized using a cut-off threshold, and then the Hungarian method is applied to perform hard assignments of 0 or 1.
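The test-time binarization can be sketched as follows. Note that this sketch substitutes a greedy pass for the Hungarian method actually used; the two agree on unambiguous matrices, and the default threshold value is illustrative:

```python
def hard_assign(P, threshold=0.2):
    """Binarize a normalized association matrix: discard entries below the
    threshold, then greedily pick the best remaining (tracklet, detection)
    pairs so each row and column is used at most once. Returns a dict
    mapping tracklet index -> detection index."""
    n_rows, n_cols = len(P), len(P[0])
    scores = sorted(
        ((P[i][j], i, j) for i in range(n_rows) for j in range(n_cols)
         if P[i][j] >= threshold),
        reverse=True)
    used_rows, used_cols, matches = set(), set(), {}
    for _, i, j in scores:
        if i not in used_rows and j not in used_cols:
            matches[i] = j
            used_rows.add(i)
            used_cols.add(j)
    return matches
```

Tracklets left unmatched by the threshold are candidates for termination, and unmatched detections spawn new tracklets.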
4 Experimental Analysis
4.1 Datasets
We evaluate our proposed approach on the publicly available MOT challenge datasets that serve as a benchmark for comparing the MOT performance of state-of-the-art methods using a standardized leader-board. We specifically focus on the MOT16 and MOT17 challenge datasets, which include annotations of detected objects, including pedestrians in urban environments, and have been widely used in the MOT community. MOT16 and MOT17 contain 7 train and 7 test sequences, each containing 525 to 1,050 frames spanning diverse real-world environments. While both datasets cover the same videos, they differ in their provided detections: MOT16 detections are obtained from DPM, while MOT17 provides additional detections from Faster R-CNN and SDP. MOT17 also has more accurate ground-truth annotations for tracking than MOT16. From each of the provided datasets, the train set is split into training and validation sequences by holding out the last 150 frames of each video for validation. After training the model, we apply our proposed model on the test set using the online evaluation server.
4.2 Evaluation Metrics
We consider standard metrics used in the MOT literature and reported on the MOT challenge leader-boards, including Multi-Object Tracking Accuracy (MOTA), Identity F1 score (IDF1), Mostly Tracked objects (MT, the ratio of ground-truth trajectories that are correctly predicted for at least 80% of their span), Mostly Lost objects (ML, the ratio of ground-truth trajectories that are correctly predicted for at most 20% of their span), False Positives (FP), False Negatives (FN), ID Switches (ID Sw.), and the runtime in frames per second (Hz).
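For reference, the headline MOTA metric aggregates three of these error counts into a single score (a sketch of the standard CLEAR MOT definition, not specific to our method):

```python
def mota(num_fn, num_fp, num_idsw, num_gt):
    """Multi-Object Tracking Accuracy (CLEAR MOT definition):
    1 minus the ratio of all errors (missed objects, false positives,
    identity switches) to the number of ground-truth object instances."""
    return 1.0 - (num_fn + num_fp + num_idsw) / num_gt
```

MOTA can be negative when the error count exceeds the number of ground-truth instances, which is why it is read alongside IDF1 and the raw FP/FN/ID Sw. counts.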
4.3 Implementation of Proposed Approach
We used DenseNet-121 as our choice of CNN architecture, with all fully connected layers at the end of the network replaced by the GCNN. All activation functions used in the network are ReLU. Also, all FC-NNs used as metric-learner functions in our proposed approach consist of a simple architecture with no hidden layers. The GCNN consists of two hidden layers, and no activation function is used in its output space, as the hidden layers contain sufficient non-linearity. The dimensionality of the CNN appearance features and the GCNN interaction features is 1,024 and 128, respectively, and the cropped bounding box images are resized to a fixed resolution. The slack variable was set at 0.2, the entropic regularization parameter at 5, and the number of Sinkhorn iterations at 8. The weight hyper-parameter of the loss is set to 10, while the binarization cut-off threshold is set to 0.2. All code was developed in PyTorch and PyTorch-Geometric.
We trained and tested our model on an Intel 2.6 GHz CPU cluster with NVIDIA TITAN RTX GPUs, using the Adam optimizer with a fixed learning rate and regularization parameter, and a batch size of 12. Also, at each frame during training, we sample a random earlier frame, going back up to 45 frames, as the previous frame in order to provide more challenging matches. This introduces more cases of occlusion and significant appearance changes, making our algorithm more robust during testing.
Public detections in the MOT16 and MOT17 datasets are noisy and have many missing objects. Tracktor is considered a baseline approach that can partially alleviate the problem of missing and false detections. Following prior work, we adopt the same processed detections to which Tracktor has been applied. Specifically, all detections with a confidence score less than 0.5 are ignored. For the remaining ones, Tracktor propagates a bounding box from the previous frame to the next by placing it in the same position and performing regression using the FRCNN regression head. Remaining detections are pruned using an NMS threshold of 0.8 and used for matching in our algorithm. To further reduce false positives, a pruning approach is followed in which an object needs to appear more than 2 times in the last 15 frames since it was first observed to remain an active tracklet. Finally, feasible matches are only created if the candidate objects fall within a pixel distance of 200.
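The two filtering rules above can be sketched as follows. This is an illustrative sketch: the `observations` dict layout and the exact windowing semantics are assumptions for illustration, not the paper's implementation:

```python
def active_tracklets(observations, current_frame, window=15, min_hits=3):
    """Keep a tracklet active only if it appeared more than 2 times
    (i.e., at least 3) within the last `window` frames.
    observations: dict mapping tracklet id -> list of frame indices
    at which that tracklet was observed."""
    active = []
    for tid, frames in observations.items():
        recent = [f for f in frames if current_frame - window < f <= current_frame]
        if len(recent) >= min_hits:
            active.append(tid)
    return active

def within_gate(box_a, box_b, max_dist=200.0):
    """Only create a candidate edge if the box centers are within
    max_dist pixels of each other (boxes given as (cx, cy, w, h))."""
    dx, dy = box_a[0] - box_b[0], box_a[1] - box_b[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_dist
```

The distance gate also keeps the bipartite graph sparse, which reduces the number of edges the GCNN and Sinkhorn steps must process.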
4.4.1 Benchmark Evaluation
Table 1 shows a comparison of the performance of our proposed approach with that of top-performing online supervised approaches on the MOT16 and MOT17 leader-boards that use public detections. On the MOT16 dataset, our method produces the highest number of MT objects and surpasses the state-of-the-art baseline Tracktor-v2 by 0.7% MOTA and 1% IDF1. It also achieves better results than other tracking algorithms (trackers) such as TrctrD16, MLT, PV, and the GNN-based GNMOT, whose MOTAs range from 54.8% to 47.7%, while ours is at 56.9%. This is very much comparable with the best MOTA (57.0%) from the GSM-Tracktor method.
On the MOT17 dataset, which contains more object detections and more accurate ground truth, our method achieves the highest MOTA at 57%, the highest number of MT objects at 548, and the smallest number of FN at 228242. The second highest MOTA, at 56.4%, is from GSM-Tracktor, 0.6% lower than ours. In comparison to our baseline Tracktor-v2, our method achieves an increase of 0.7% MOTA and 1% IDF1. The remaining trackers, such as TrctrD17 and FAMNet, range from 53.7% to 52.0% MOTA, while other GNN-based methods such as GNMOT and EDA-GNN achieve 50.2% and 45.5% MOTA, respectively. Overall, our proposed method scores the highest MOTA with the lowest number of FNs and maintains a low number of ID switches. Additionally, it achieves the highest number of MT objects.
Figure 3 provides a visual comparison of the results of our proposed approach with two other top-performing trackers, GSM-Tracktor and Tracktor-v2. Two specific cases are shown of an object being tracked before and after occlusion to demonstrate identity switches. The colored boxes, along with the numbers on the boxes, indicate the identities (IDs) of the objects. Boxes with different colors and different numbers have different IDs. In the first two columns of images, a man with a white shirt and grey trousers is occluded for a few frames and then re-appears. In this case, the proposed method is able to recover the identity of the person (Figures 3(a) and 3(b)), while Tracktor-v2 gives a new ID to the person (Figures 3(e) and 3(f)). In the second two columns of images, a man with dark clothes is occluded for a few frames. The proposed technique identifies the same person, as shown in Figures 3(c) and 3(d), while GSM-Tracktor identifies him as a new person (Figures 3(g) and 3(h)).
4.4.2 Ablation Studies
To understand the importance of the individual components of our proposed approach, we perform a series of studies using ablations of our complete model. We report the performance of these ablations on all videos in the MOT17 train set instead of the test set. This is standard practice in the MOT literature, since performance on the test videos can only be assessed using the online evaluation server, which has a limit of 4 attempts.
In the first line of ablation studies, we evaluate the importance of using the GCNN instead of an FC-NN, as is typical in traditional MOT methods. We also evaluate the importance of the Sinkhorn algorithm for satisfying the constraints of bipartite graph matching. It can be seen in Table 2 that applying the GCNN instead of the FC-NN produces an increase of 1.7% MOTA and 4.1% IDF1, while reducing ID switches by 749. On the other hand, removing Sinkhorn reduces MOTA by 0.2% and IDF1 by 1.4%, while increasing ID switches by 40. This demonstrates the value of using the GCNN along with Sinkhorn in our proposed approach.
In a second line of studies, we evaluated the importance of using the IoU metric to capture geometric features during affinity computation, in contrast to only using the geometric features for edge construction in the GCNN, as is the convention in previous GNN based methods for MOT. Table 3 shows the results of this study, where the “appearance only” ablation corresponds to only using the cosine similarity, while “appearance + geometry” corresponds to using both the cosine similarity and IoU. It can be seen that using IoU leads to an increase of 4.4% in MOTA and a significant decrease of 5139 instances in FN. This indicates that ignoring IoU leads to a weaker affinity score. Finally, Table 4 illustrates the effect of the number of layers used in the GCNN. Unlike CNNs, GNNs traditionally do not require a large number of layers. It is shown that using 2 layers produces higher MOTA and IDF1 than using 1 or 3 layers.
| Ablation | MOTA | IDF1 | MT | ML | FP | FN | ID Sw. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FC-NN & Sinkhorn | 60.5 | 60.4 | 193 | 112 | 1859 | 41531 | 1009 |
| GCNN & No Sinkhorn | 62.0 | 63.1 | 202 | 114 | 1318 | 41003 | 300 |
| GCNN & Sinkhorn | 62.2 | 64.5 | 204 | 112 | 1295 | 40879 | 260 |
Figure 4 provides a visual analysis of some of the ablation studies. The first two columns of images in Figure 4 compare the effect of using the GCNN instead of an FC-NN in our proposed approach. As the two people on the right become more occluded, an identity switch occurs between ID:28 and ID:21 in Figures 4(a) and 4(b) when using the FC-NN. On the other hand, Figures 4(e) and 4(f) show that by using the GCNN to capture interaction features, we obtain correct IDs despite the overlap of the two boxes. In the next two columns, in Figures 4(c) and 4(d), only appearance is used, while in Figures 4(g) and 4(h), both appearance and geometry are used. It is clear that under blurry and low-brightness conditions, a tracker using only appearance features for affinity computation is susceptible to ID switches.
In this paper, we have developed a novel method to handle online data association for multi-object tracking. We have shown that using Graph Convolutional Neural Networks on top of CNN-based features can achieve state-of-the-art tracking accuracy. A key innovation of our approach is the use of a differentiable method, the Sinkhorn algorithm, to guide the association in an end-to-end learning fashion. Experimental results demonstrate the top performance of our approach on the MOT16 and MOT17 benchmarks. The proposed framework opens avenues for further research on the use of Graph Neural Networks for feature extraction as well as on incorporating association into the learning pipeline. Future work on this method could involve summarizing the historic appearance of each tracklet for more accurate long-term association.
-  (2019) Tracking without bells and whistles. In Proceedings of the IEEE international conference on computer vision, pp. 941–951. Cited by: Figure 3, §4.3, Table 1.
-  (2019) Learning a neural solver for multiple object tracking. arXiv preprint arXiv:1912.07515. Cited by: §1, §2.1, §2.2, §4.3.
-  Enhancing detection model for multiple hypothesis tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 18–27. Cited by: §2.1.
-  (2019) Famnet: joint learning of feature, affinity and multi-dimensional assignment for online multiple object tracking. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6172–6181. Cited by: §4.4.1, Table 1.
-  (2020) Deep learning in video multi-object tracking: a survey. Neurocomputing 381, pp. 61–88. Cited by: §1, §2.1.
-  (2009) Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence 32 (9), pp. 1627–1645. Cited by: §4.1.
-  (2017) Using stacked auto-encoder to get feature with continuity and distinguishability in multi-object tracking. In International Conference on Image and Graphics, pp. 351–361. Cited by: §2.1.
-  (2019) Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, Cited by: §4.3.
-  Unsupervised multiple person tracking using autoencoder-based lifted multicuts. arXiv preprint arXiv:2002.01192. Cited by: §2.1.
-  (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §4.3.
-  (2019) Graph neural based end-to-end data association framework for online multiple-object tracking. arXiv preprint arXiv:1907.05315. Cited by: §1, §2.2, §4.4.1, Table 1.
-  (2018) Multi-object tracking with neural gating using bilinear lstm. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 200–215. Cited by: §2.1.
-  (2017) Multi-object tracker using kernelized correlation filter based on appearance and motion model. In 2017 19th International Conference on Advanced Communication Technology (ICACT), pp. 761–764. Cited by: §2.1.
-  (2016) Similarity mapping with enhanced siamese network for multi-object tracking. arXiv preprint arXiv:1609.09156. Cited by: §2.1.
-  (1955) The hungarian method for the assignment problem. Naval research logistics quarterly 2 (1-2), pp. 83–97. Cited by: §1.
-  (2018) Survey on deep learning techniques for person re-identification task. arXiv preprint arXiv:1807.05284. Cited by: §2.1.
-  (2016) Learning by tracking: siamese cnn for robust target association. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 33–40. Cited by: §1.
-  (2018) Multiple object tracking via feature pyramid siamese networks. IEEE Access 7, pp. 8181–8194. Cited by: §1, §2.1.
-  (2020) Graph networks for multiple object tracking. In The IEEE Winter Conference on Applications of Computer Vision, pp. 719–728. Cited by: §1, §2.2, §4.4.1, §4.4.1, §4.4.2, Table 1.
-  (2019) Multi-target tracking with trajectory prediction and re-identification. In 2019 Chinese Automation Congress (CAC), pp. 5028–5033. Cited by: §4.4.1, Table 1.
-  (2018) Lstm multiple object tracker combining multiple cues. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 2351–2355. Cited by: §2.1.
-  GSM: graph similarity model for multi-object tracking. Cited by: §1, §2.1, §2.1, Figure 3, §4.4.1, Table 1.
-  (2019-06) Deep association: end-to-end graph-based learning for multiple object tracking with conv-graph neural network. pp. 253–261. External Links: Cited by: §1, §2.2.
-  (2016) MOT16: a benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831. Cited by: §4.1, §4.2.
-  Online multi-target tracking using recurrent neural networks. In Thirty-First AAAI Conference on Artificial Intelligence. Cited by: §2.1.
-  Multiple object tracking benchmark website. Note: https://motchallenge.net Cited by: Figure 3.
-  (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8024–8035. Cited by: §4.3.
-  Computational optimal transport: with applications to data science. Foundations and Trends in Machine Learning 11 (5-6), pp. 355–607. Cited by: §3.4.
-  (2018) Collaborative deep reinforcement learning for multi-object tracking. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 586–602. Cited by: §2.1.
-  (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §4.1.
-  (2020) Superglue: learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4938–4947. Cited by: §2.2, §3.4.
-  (2017) Multiple people tracking by lifted multicut and person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3539–3548. Cited by: §2.1.
-  (2018) A directed sparse graphical model for multi-target tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1816–1823. Cited by: §2.1.
-  (2017) Online multiple object tracking via flow and convolutional features. In 2017 IEEE International Conference on Image Processing (ICIP), pp. 3630–3634. Cited by: §2.1.
-  (2019) Towards real-time multi-object tracking. arXiv preprint arXiv:1909.12605. Cited by: §1.
-  (2020) GNN3DMOT: graph neural network for 3d multi-object tracking with 2d-3d multi-feature learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6499–6508. Cited by: §1.
-  (2017) Simple online and realtime tracking with a deep association metric. In 2017 IEEE international conference on image processing (ICIP), pp. 3645–3649. Cited by: §2.1.
-  (2020) A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems. Cited by: §1.
-  (2020) How to train your deep multi-object tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6787–6796. Cited by: §4.4.1, §4.4.1, Table 1.
-  Exploit all the layers: fast and accurate cnn object detector with scale dependent pooling and cascaded rejection classifiers. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2129–2137. Cited by: §4.1.
-  (2019) Multi-object tracking with discriminant correlation filter based deep learning tracker. Integrated Computer-Aided Engineering 26 (3), pp. 273–284. Cited by: §2.1.
-  (2019) Data association for multi-object tracking via deep neural networks. Sensors 19 (3), pp. 559. Cited by: §2.1.
-  (2020) Multiplex labeling graph for near online tracking in crowded scenes. IEEE Internet of Things Journal. Cited by: §4.4.1, Table 1.
-  (2018) Online multi-object tracking with dual matching attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 366–382. Cited by: §2.1.