1 Introduction
Multi-object tracking (MOT), which aims to estimate the locations and identities of multiple targets in a video sequence, is a fundamentally challenging task in computer vision [1]. Recently, the Intersection over Union (IoU) distance and the Hungarian method have been commonly used in the tracking phase of many tracking-by-detection paradigms [2, 3, 4, 5, 6, 7, 8, 9, 10]. However, when a target is occluded or lost for a period of time, it is difficult to retrieve the correct identity using the IoU distance alone. As a result, identity switches occur from time to time. To alleviate this problem, many methods introduce re-identification (Re-ID) features of targets. Among them, JDE-based methods [11, 12, 13, 14, 15, 16, 17] have become popular due to their simplicity and efficiency. In the data association step, the accuracy of the similarity measurement determines the tracking performance. Most detection-based methods use the IoU distance as the similarity matrix in the cascade matching strategy, while JDE-based methods fuse motion and appearance information as the similarity matrix for the linear assignment in the first matching and use the IoU distance in the next matching. However, according to our experiments, none of these existing choices is the best expression of the similarity matrix.

When objects occlude each other as their paths cross, confusing sets arise that are difficult to assign correctly, e.g. the set {det4, det7, track4, track10} in Figure 1 (a) and (b), and the set {det4, det7, track4, track9} in Figure 1 (c). When assigning these confusing sets, an inaccurate similarity distance leads to tracking failure. Based on the Hungarian method, the IoU distance matrix tends to match det4 with track4 and det7 with track10, while the EM (Embedding and Motion) distance matrix tends to match det4 with track4 and det7 with track9. Both lead to identity switches, as shown in MOT17-Seq-11 of Figure 2. The principal reason for these matching failures is that the Kalman filter prediction becomes increasingly inaccurate as the time of target loss grows. This results in inaccurate IoU and motion-information distances, which in turn causes linear assignment errors.
To solve this problem, we propose the EG matrix, which utilizes the embedding cosine distance for long-range tracking of targets and the GIoU distance for limiting the matching range of the embedding. To illustrate the robustness of the EG matrix, we apply it to five different JDE-based methods. As shown in Section 4.3, our implementations obtain improvements in MOT metrics, including tracking speed, HOTA, IDF1, and IDsw.
To further exploit the good properties of the EG matrix, we propose a simple tracking framework named SimpleTrack. In this framework, we design a bottom-up branch to represent Re-ID features. Different from the fusion method used for detection features, it pays more attention to the high-level semantic layers. For the tracking part of SimpleTrack, we propose a novel tracking retrieval mechanism and design a new tracking strategy based on our EG matrix. The experimental results show that our tracking strategy surpasses JDE-based methods in most metrics, including tracking speed. Compared with the current SOTA method BYTE, our tracking strategy also improves the HOTA, IDF1, and IDsw metrics.

2 Related Work
2.1 Joint Detection and Embedding
JDE-based methods typically employ a single network with a shared backbone to predict object bounding boxes and appearance features simultaneously [11, 12, 13, 14, 15, 16, 17]. However, the competition between detection and identification hurts the optimization procedure in this multi-task learning of object detection and appearance feature extraction.
Recently, to tackle this problem, CSTrack [11] designs a decoupling module to enhance the learned representations for both object detection and appearance identification. RelationTrack [18] uses a channel attention mechanism to decouple detection and re-identification. Different from CSTrack and RelationTrack, the decoupling strategy adopted in our SimpleTrack focuses on the essence of the appearance feature: we start decoupling from the feature-layer fusion of the network. In contrast to the detection feature fusion, we adopt a bottom-up fusion method.
2.2 Similarity Matrices
Location, motion and appearance are the most common cues in multi-object tracking, and they are often combined for the linear assignment. Detection-based methods [10] utilize the IoU distance as the similarity matrix, so the tracking accuracy mainly depends on the detector. SORT [2] fuses position and motion cues as the similarity matrix, which achieves good results in short-range matching. DeepSORT [7] improves the long-range tracking ability of trackers by merging appearance and motion cues, a combination commonly used in JDE-based methods.
All these methods use the location cue alone or fuse appearance and motion information as the similarity matrix. In contrast, we design a similarity matrix that combines appearance and location information, and we use the GIoU distance matrix as the location cue instead of the common IoU matrix.
2.3 Tracking Strategy
The assignment problem between tracked and detected targets can be solved by the Hungarian algorithm based on different similarity matrices. SORT associates detections with tracks in a single matching step. DeepSORT adopts a cascade matching method that reduces the number of unmatched tracking targets. MOTDT [19] uses the appearance similarity matrix and the IoU distance matrix in turn as the similarity matrix for cascade matching.
Recently, BYTETrack [10] proposes to use low-confidence detection results for a secondary matching, which reduces detection failures caused by occlusion. Thereby, the occurrence of long-range tracking is reduced, making the linear assignment based on the IoU distance matrix more effective. MAA [20] adopts different strategies for ambiguous detection targets and tracking targets in the similarity matrix, which alleviates the inaccuracy of the similarity distance caused by ambiguous targets. Both of them aim to compensate for the shortcomings of the similarity matrix. Based on the idea of BYTE [10], we redesign the similarity matrix for the JDE-based method and construct a new matching strategy.
3 SimpleTrack
In this section, we present the technical details of SimpleTrack, as illustrated in Figure 3. It is composed of three parts: the feature decoupling, the similarity matrix, and the tracking strategy.

3.1 Feature Decoupling
We adopt DLA-34 as the backbone in order to strike a good balance between accuracy and speed. For feature decoupling, we employ different feature fusion methods for the detection and Re-ID representations. As illustrated in Figure 3, the detection branch still adopts the IDA-up structure of FairMOT [17]. We call this the up-to-bottom fusion method: it starts from low-level feature maps and continuously fuses higher-level feature maps.
However, Re-ID features tend to rely on higher-level semantic information to distinguish among homogeneous objects. Therefore, we take a simple bottom-up approach to fusing feature maps. Denote the input feature maps by $\{F_1, F_2, \dots, F_k\}$, where $k$ is the number of feature layers of different resolutions extracted by the backbone network. Then, the bottom-up fusion process can be expressed as

$$\tilde{F}_k = F_k, \qquad \tilde{F}_i = \varphi(F_i) \odot \sigma\big(U(\tilde{F}_{i+1})\big), \quad i = k-1, \dots, 1, \tag{1}$$

where $U(\cdot)$ represents an upsampling operation composed of the deformable convolution and the deconvolution, $\varphi(\cdot)$ denotes a convolution layer for changing the channels of features, and $\sigma(\cdot)$ represents the Sigmoid activation layer.
As can be observed from Equation (1), the fusion proceeds from bottom to top: the previously fused feature map guides the lower-level feature map until the final fusion result is obtained. As the experimental results in Section 4 will show, the computational cost required by this fusion method is minimal.
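To make the fusion concrete, here is a minimal PyTorch sketch of the bottom-up branch, assuming feature maps ordered from high resolution to low resolution with consecutive 2× strides. The class name, the channel widths, and the use of a plain deconvolution in place of the paper's deformable-convolution-plus-deconvolution upsampling block are our illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BottomUpFusion(nn.Module):
    """Sketch of the bottom-up Re-ID feature fusion (Eq. 1).

    Assumptions for illustration: a plain ConvTranspose2d stands in for the
    paper's deformable-conv + deconv upsampling block, and the channel widths
    are arbitrary. Fusion starts at the highest-level (most semantic) map,
    and the previously fused map guides each lower-level map.
    """

    def __init__(self, channels):  # channels[i] = width of backbone level i
        super().__init__()
        out_c = channels[0]
        # phi: 1x1 convs that project every level to a common channel width
        self.proj = nn.ModuleList(nn.Conv2d(c, out_c, 1) for c in channels)
        # U: upsampling blocks (deconv here; DCN + deconv in the paper)
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(out_c, out_c, 4, stride=2, padding=1)
            for _ in channels[:-1]
        )

    def forward(self, feats):  # feats: high-res -> low-res, halving each level
        fused = self.proj[-1](feats[-1])           # start from the top level
        for i in range(len(feats) - 2, -1, -1):    # walk toward low-level maps
            guide = torch.sigmoid(self.up[i](fused))   # sigma(U(fused))
            fused = self.proj[i](feats[i]) * guide     # guide the lower level
        return fused
```

For a DLA-34-like backbone one might instantiate it as `BottomUpFusion([64, 128, 256, 512])` over the four resolution levels.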
3.2 Embedding and GIoU Matrix
The similarity matrix is usually constructed from location, motion and appearance information. Let $D_{loc}$, $D_{mot}$ and $D_{app}$ denote the location distance matrix, the motion distance matrix and the appearance distance matrix, respectively. We fuse $D_{loc}$ and $D_{app}$ into a similarity matrix called the EG matrix. The location distance $D_{loc}$ can be represented as

$$D_{loc}(B_t, B_d) = 1 - \left( \frac{|B_t \cap B_d|}{|B_t \cup B_d|} - \frac{|C \setminus (B_t \cup B_d)|}{|C|} \right), \tag{2}$$

where $B_t$ and $B_d$ represent the bounding boxes of the tracking objects and the bounding boxes of the detection objects respectively, and $C$ is the minimum enclosing rectangle of the above bounding boxes. The appearance distance $D_{app}$ is

$$D_{app}(E_t, E_d) = 1 - \frac{E_t \cdot E_d}{\|E_t\|\,\|E_d\|}, \tag{3}$$

where $E_t$ and $E_d$ denote the appearance embeddings of the tracking and detection objects. Note that the matrix in Equation (2) is actually the GIoU distance matrix and that the matrix in Equation (3) defines the cosine distance matrix. Then, the Embedding and GIoU matrix, denoted as $D_{EG}$, can be represented as

$$D_{EG} = \alpha\, D_{loc} + \beta\, D_{app}, \tag{4}$$

where $\alpha$ and $\beta$ represent two hyperparameters, $D_{loc}$ denotes the GIoU distance matrix, and $D_{app}$ denotes the cosine distance matrix.
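As a minimal sketch of Equations (2)-(4), the following NumPy code builds the EG cost matrix from track and detection boxes plus L2-normalized embeddings. The function names, the (x1, y1, x2, y2) box format, and the equal default weights are illustrative assumptions.

```python
import numpy as np

def giou_distance(tracks, dets):
    """Pairwise GIoU distance (Eq. 2): 1 - GIoU, for (x1, y1, x2, y2) boxes."""
    t = tracks[:, None, :]                       # (T, 1, 4)
    d = dets[None, :, :]                         # (1, D, 4)
    lt = np.maximum(t[..., :2], d[..., :2])      # intersection top-left
    rb = np.minimum(t[..., 2:], d[..., 2:])      # intersection bottom-right
    inter = np.prod(np.clip(rb - lt, 0, None), axis=-1)
    area_t = np.prod(t[..., 2:] - t[..., :2], axis=-1)
    area_d = np.prod(d[..., 2:] - d[..., :2], axis=-1)
    union = area_t + area_d - inter
    # C: minimum enclosing rectangle of each track/detection pair
    c_lt = np.minimum(t[..., :2], d[..., :2])
    c_rb = np.maximum(t[..., 2:], d[..., 2:])
    c_area = np.prod(c_rb - c_lt, axis=-1)
    giou = inter / union - (c_area - union) / c_area
    return 1.0 - giou                            # in [0, 2]

def eg_matrix(tracks, dets, trk_emb, det_emb, alpha=1.0, beta=1.0):
    """EG similarity matrix (Eq. 4): alpha * D_loc + beta * D_app."""
    d_app = 1.0 - trk_emb @ det_emb.T            # cosine distance (Eq. 3),
    return alpha * giou_distance(tracks, dets) + beta * d_app  # unit-norm embeddings
```

The resulting cost matrix can then be fed to `scipy.optimize.linear_sum_assignment` (the Hungarian method), with pairs whose cost exceeds the assignment threshold discarded.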
3.3 Tracking Strategy in SimpleTrack
Inspired by BYTE [10], we develop a tracking strategy based on our EG matrix. As shown in Algorithm 1, we follow BYTE's idea of a secondary matching with low-confidence detections and use the EG matrix to replace the similarity matrix in the cascade matching. In addition, after the secondary matching, we utilize the cosine distance to retrieve the unmatched tracklets.
In the retrieval process presented in red in Algorithm 1, we use the Kalman filter to predict the center point of each unmatched tracking target. To compensate for the drift of the Kalman filter, we select the appearance embedding vectors in the 3×3 range around the predicted center point. Afterward, we compute the minimum cosine distance between these vectors and the embedding vector memorized by the unmatched tracking object. If the distance is less than a threshold, the tracklet is retrieved. With this tracking retrieval mechanism, we can recover occluded (failed) detection boxes using the predictions of the Kalman filter.
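A minimal sketch of this retrieval step follows, assuming a unit-normalized embedding map of shape (H, W, C) in feature-map coordinates, a memorized per-tracklet embedding, and a hypothetical distance threshold; the Kalman prediction and the stride bookkeeping between image and feature coordinates are left outside the snippet.

```python
import numpy as np

def retrieve_tracklet(emb_map, track_emb, center, thresh=0.3):
    """Tracking-retrieval sketch: probe the 3x3 neighborhood around the
    Kalman-predicted center on the embedding map (H, W, C) and retrieve the
    tracklet if the minimum cosine distance to its memorized embedding falls
    below `thresh` (the threshold value here is an illustrative assumption).
    """
    h, w, _ = emb_map.shape
    cx, cy = int(round(center[0])), int(round(center[1]))
    best = np.inf
    for dy in (-1, 0, 1):                    # 3x3 window compensates for
        for dx in (-1, 0, 1):                # Kalman-filter drift
            x, y = cx + dx, cy + dy
            if 0 <= x < w and 0 <= y < h:
                cos_dist = 1.0 - float(emb_map[y, x] @ track_emb)
                best = min(best, cos_dist)
    return best < thresh                     # True: the tracklet is retrieved
```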
4 Experiments
4.1 Datasets and Metrics
4.1.1 Datasets.
We evaluate SimpleTrack on the private detection tracks of the MOT17 [21] and MOT20 [22] datasets. The former contains 14 different video sequences for multi-target tracking recorded by fixed or moving cameras. The latter consists of 8 video sequences with fixed cameras focusing on tracking in very crowded scenes, 4 for training and 4 for testing. For ablation studies, we follow [23, 24, 25, 26, 27] and split the training set into two halves, as the annotations of the test split are not publicly available. We fuse CrowdHuman [28] and the MOT17 half training set as the training data for ablation experiments, following [29, 26, 30, 10, 27]. When testing on the MOT17 test set, we add the ETH [31], CityPersons [32], Caltech [33], CUHK-SYSU [34] and PRW [35] datasets for training, following [11, 15, 17].
4.1.2 Evaluation Metrics.
To evaluate tracking performance, we use TrackEval [36] to compute all metrics, including MOTA [37], IDF1 [38], false positives (FP), false negatives (FN), identity switches (IDSW), and the recently proposed HOTA [39]. HOTA comprehensively evaluates the performance of detection and data association. IDF1 focuses more on the association performance, while MOTA reflects the detector's ability and focuses more on detection performance.
4.2 Implementation Details
4.2.1 Tracker.
In the tracking phase, the default high detection score threshold is 0.3, the low threshold is 0.2, the trajectory initialization score is 0.6, and the trajectory retrieval score is 0.1, unless otherwise specified. In the linear assignment step, the assignment threshold is 0.8 for high-confidence detections and 0.4 for low-confidence detections.
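For reference, these defaults could be gathered into a small configuration object; the field names below are our own and merely mirror the values listed above.

```python
from dataclasses import dataclass

@dataclass
class TrackerConfig:
    # Default thresholds from Section 4.2.1 (field names are illustrative).
    high_det_thresh: float = 0.3    # detections above this enter the first match
    low_det_thresh: float = 0.2     # 0.2-0.3: candidates for the secondary match
    init_thresh: float = 0.6        # minimum score to start a new trajectory
    retrieval_thresh: float = 0.1   # score threshold for trajectory retrieval
    match_thresh_high: float = 0.8  # assignment gate, high-confidence matching
    match_thresh_low: float = 0.4   # assignment gate, low-confidence matching
```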
4.2.2 Detector and Embedding.
We use SimpleTrack to extract the location features and appearance features of objects. The backbone is DLA-34, initialized with COCO-pretrained weights. The training schedule is 30 epochs on the combination of MOT17, CrowdHuman, and the other datasets mentioned above. The input image size is 1088×608. Rotation, scaling and color jittering are adopted as data augmentation techniques during the training phase. The model is trained on 4 NVIDIA TITAN RTX GPUs with a batch size of 32. The optimizer is Adam, and the initial learning rate is decayed once at the 20th epoch. The total training time is about 25 hours. FPS is measured on a single NVIDIA RTX 2080Ti with a batch size of 1.

4.3 Ablation Studies
4.3.1 Ablation on SimpleTrack.
The innovations of SimpleTrack mainly consist of the bottom-up decoupling (BU-D), the EG similarity matrix (EG) and the tracking retrieval (TR). We conduct ablation experiments for these three modules on the MOT17 validation set; the results are shown in Table 1. It can be observed that adding bottom-up decoupling to FairMOT increases IDF1 and MOTA. In addition, after replacing the similarity matrix of JDE-based methods with the EG matrix, the strategy improves IDF1 from 76.1 to 78.1, MOTA from 71.4 to 72.5, HOTA from 60.2 to 61.5, and decreases IDs from 451 to 186. After further adding the tracking retrieval mechanism, IDF1 increases from 78.1 to 78.5, HOTA from 61.5 to 61.7, and IDs decreases from 186 to 182. These results show that the modules proposed in SimpleTrack are necessary and effective.
| BU-D | EG | TR | IDF1 | MOTA | HOTA | IDs | FP | FN | FPS |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|  |  |  | 75.6 | 71.1 | - | 327 | - | - | - |
| ✓ |  |  | 76.1 | 71.4 | 60.2 | 451 | 3319 | 11655 | 19.7 |
| ✓ | ✓ |  | 78.1 | 72.5 | 61.5 | 186 | 3260 | 11430 | 24 |
| ✓ | ✓ | ✓ | 78.5 | 72.5 | 61.7 | 182 | 3212 | 11456 | 23.8 |
4.3.2 Analysis on the Similarity Matrix.
We employ different distance matrices as the similarity measure and evaluate their data association ability on the MOT17 half validation set. As shown in Table 2, using only the GIoU or embedding matrix for data association does not perform well. Besides, the table shows that combining the embedding matrix with the IoU matrix improves the association but reduces MOTA. Compared with the IoU matrix used in detection-based methods, our EG matrix improves IDF1 from 75.7 to 78.5, HOTA from 60.4 to 61.7, and decreases IDs from 285 to 182. Compared with the embedding and motion (EM) matrix used in JDE-based methods, our EG matrix improves both the MOT metrics and the tracking speed.
| Similarity Matrix | IDF1 | MOTA | HOTA | IDs | FP | FN | FPS |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| IoU | 75.7 | 72.5 | 60.4 | 285 | 3510 | 11048 | 25 |
| GIoU | 66.4 | 70.4 | 54.8 | 378 | 4631 | 10956 | 23.6 |
| Embedding | 64.1 | 65 | 53.4 | 749 | 6120 | 12012 | 24.2 |
| Embedding and Motion | 76.1 | 71.4 | 60.2 | 451 | 3319 | 11655 | 19.7 |
| Embedding and IoU | 77.2 | 72.3 | 61.4 | 263 | 2560 | 12144 | 24 |
| Embedding and GIoU | 78.5 | 72.5 | 61.7 | 182 | 3212 | 11456 | 23.8 |
4.3.3 Applications on other JDE-based Trackers.
We apply our EG matrix to 5 different JDE-based trackers: JDE [15], FairMOT [17], CSTrack [11], TraDes [26] and QuasiDense [14]. Among them, JDE, FairMOT, CSTrack and TraDes merge the motion and Re-ID similarities, with the first three following the same fusion strategy; QuasiDense uses the Re-ID similarity alone. As can be observed from Table 3, using the EG matrix instead of the EM matrix enhances both the tracking performance and the tracking speed. Taking JDE [15] as an example, simply replacing the EM matrix with the EG matrix improves HOTA from 50.1 to 50.9, IDF1 from 63 to 64.4, MOTA from 59.3 to 59.5, FPS from 16.64 to 21.29, and decreases IDs from 621 to 558. Combined with the BYTE strategy, our EG matrix still improves HOTA from 50.4 to 50.9, IDF1 from 64.1 to 64.4, FPS from 18.52 to 25.48, and decreases IDs from 437 to 388.
| Method | BYTE | EG | HOTA | IDF1 | MOTA | IDs | FPS |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| JDE[15] |  |  | 50.1 | 63 | 59.3 | 621 | 16.64 |
|  |  | ✓ | 50.9 | 64.4 | 59.5 | 558 | 21.29 |
|  | ✓ |  | 50.4 | 64.1 | 60.2 | 437 | 18.52 |
|  | ✓ | ✓ | 50.9 | 64.8 | 60.1 | 388 | 25.48 |
| FairMOT[17] |  |  | 57 | 72.4 | 69.1 | 372 | 21.01 |
|  |  | ✓ | 57.5 | 73.3 | 69.5 | 236 | 25.18 |
|  | ✓ |  | - | 74.2 | 70.4 | 232 | - |
|  | ✓ | ✓ | 58.5 | 74.5 | 70.6 | 188 | 24.7 |
| CSTrack[11] |  |  | 58.7 | 72.0 | 67.9 | 423 | 20.39 |
|  |  | ✓ | 59.3 | 73.0 | 68.2 | 322 | 24.3 |
|  | ✓ |  | 59.8 | 73.9 | 69.2 | 298 | 20.72 |
|  | ✓ | ✓ | 60.0 | 73.8 | 69.6 | 249 | 24.25 |
| TraDes[26] |  |  | 58.6 | 71.7 | 68.3 | 293 | 15.8 |
|  |  | ✓ | 58.4 | 71.2 | 68.9 | 263 | 16.22 |
|  | ✓ | ✓ | 59.0 | 71.5 | 68.5 | 483 | 16.5 |
| QuasiDense[14] |  |  | 56.2 | 67.7 | 67.1 | 386 | 4.1 |
|  |  | ✓ | 58.5 | 71.9 | 67.4 | 295 | 4.8 |
|  | ✓ | ✓ | 57.9 | 70.9 | 67.5 | 252 | 4.8 |
4.3.4 The Accuracy Compared with other Association Methods.
We compare SimpleTrack with other association methods, including the recent SOTA algorithm BYTE and the tracking strategy used in JDE-based methods [17, 15, 11, 12]. As shown in Table 4, SimpleTrack improves the IDF1 of the JDE strategy from 76.1 to 78.5, MOTA from 71.4 to 72.5, HOTA from 60.2 to 61.7, and decreases IDs from 451 to 182. Compared with BYTE, SimpleTrack improves IDF1 from 75.7 to 78.5, HOTA from 60.4 to 61.7, and decreases IDs from 285 to 182. These results demonstrate that our tracking method is more effective than the JDE strategy and improves the accuracy of data association compared with the BYTE strategy.
| Track Method | IDF1 | MOTA | HOTA | IDs | FP | FN | FPS |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| JDE | 76.1 | 71.4 | 60.2 | 451 | 3319 | 11655 | 19.7 |
| BYTE | 75.7 | 72.5 | 60.4 | 285 | 3510 | 11048 | 25 |
| SimpleTrack(Ours) | 78.5 | 72.5 | 61.7 | 182 | 3212 | 11456 | 23.8 |
4.3.5 The Speed Compared with other Association Methods.
From Table 4 and Table 2, we can observe that our SimpleTrack algorithm utilizes the embedding information yet is still nearly 20% faster than JDE's tracking strategy. A more detailed comparison over different video sequences is given in Figure 4 (a): our tracking algorithm is only slightly slower than BYTE, which does not utilize the embedding information. Figure 4 (b) shows the time consumption of the main modules in the tracking phase. The JDE-based tracking strategy spends a lot of time fusing the embedding and motion information, represented by the orange dotted square in Figure 4 (b). For the EG matrix, we only need to calculate the GIoU distance and add it to the embedding distance; this time consumption is represented by the orange dotted star in Figure 4 (b).

4.3.6 Comparison with Preceding SOTAs.
In this part, we compare the performance of SimpleTrack with preceding SOTA methods on MOT17 and MOT20. The results are reported in Table 5 and Table 6, respectively. As shown in these two tables, SimpleTrack ranks among the top in various metrics and surpasses the compared counterparts by large margins, especially on the HOTA, IDF1 and IDs metrics. Besides, compared with other MOT tracking methods, SimpleTrack has an obvious speed advantage.
4.3.7 Visualization Results.
We show some scenarios that are prone to identity switching in Figure 2, which contains 3 sequences from the MOT17 half validation set. We use different tracking strategies to generate the visualization results. It can be observed that SimpleTrack effectively handles the identity switching caused by occlusion of the tracking targets. In addition, some tracking examples on the MOT17 test set are shown in Figure 5.
| Method | HOTA | IDF1 | MOTA | IDs | FP | FN | FPS |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| TraDes[26] | 52.7 | 63.9 | 69.1 | 3555 | 20892 | 150060 | 17.5 |
| MAT | 53.8 | 63.1 | 69.5 | 2844 | 30660 | 138741 | 9.0 |
| QuasiDense[14] | 53.9 | 66.3 | 68.7 | 3378 | 26589 | 146643 | 20.3 |
| SOTMOT[40] | - | 71.9 | 71.0 | 5184 | 39537 | 118983 | 16.0 |
| TransCenter[41] | 54.5 | 62.2 | 73.2 | 4614 | 23112 | 123738 | 1.0 |
| PermaTrackPr[42] | 55.5 | 68.9 | 73.8 | 3699 | 28998 | 115104 | 11.9 |
| TransTrack[29] | 54.1 | 63.5 | 75.2 | 3603 | 50157 | 86442 | 10.0 |
| FUFET[24] | 57.9 | 68.0 | 76.2 | 3237 | 32796 | 98475 | 6.8 |
| FairMOT[17] | 59.3 | 72.3 | 73.7 | 3303 | 27507 | 117477 | 18.9 |
| CSTrack[11] | 59.3 | 72.6 | 74.9 | 3567 | 23847 | 114303 | 15.8 |
| Semi-TCL[43] | 59.8 | 73.2 | 73.3 | 2790 | 22944 | 124980 | - |
| ReMOT[44] | 59.7 | 72.0 | 77.0 | 2853 | 33204 | 93612 | 1.8 |
| CrowdTrack[45] | 60.3 | 73.6 | 75.6 | 2544 | 25950 | 109101 | - |
| CorrTracker[25] | 60.7 | 73.6 | 76.5 | 3369 | 29808 | 99510 | 15.6 |
| RelationTrack[18] | 61.0 | 74.7 | 73.8 | 1374 | 27999 | 118623 | 8.5 |
| SimpleTrack(Ours) | 61.0 | 75.7 | 74.1 | 1500 | 17379 | 127053 | 22.53 |
| SimpleTrack(Ours)* | 61.6 | 76.3 | 75.3 | 1260 | 22317 | 116010 | - |

* indicates adding linear interpolation
| Method | HOTA | IDF1 | MOTA | IDs | FP | FN | FPS |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| MLT[46] | 43.2 | 54.6 | 48.9 | 2187 | 45660 | 216803 | 3.7 |
| FairMOT[17] | 54.6 | 67.3 | 61.8 | 5243 | 103440 | 88901 | 13.2 |
| TransCenter[41] | - | 50.4 | 61.9 | 4653 | 45895 | 146347 | 1.0 |
| TransTrack[29] | 48.5 | 59.4 | 65.0 | 3608 | 27197 | 150197 | 7.2 |
| Semi-TCL[43] | 55.3 | 70.1 | 65.2 | 4139 | 61209 | 114709 | - |
| CorrTracker[25] | - | 69.1 | 65.2 | 5183 | 79429 | 95855 | 8.5 |
| CSTrack[11] | 54.0 | 68.6 | 66.6 | 3196 | 25404 | 144358 | 4.5 |
| GSDT[43] | 53.6 | 67.5 | 67.1 | 3131 | 31913 | 135409 | 0.9 |
| SiamMOT[12] | - | 67.8 | 70.7 | - | 22689 | 125039 | 6.7 |
| RelationTrack[18] | 56.5 | 70.5 | 67.2 | 4243 | 61134 | 104597 | 2.7 |
| SOTMOT[40] | - | 71.4 | 68.6 | 4209 | 57064 | 101154 | 8.5 |
| SimpleTrack(Ours) | 56.6 | 69.6 | 70.6 | 2434 | 18400 | 131209 | 7.0 |
| SimpleTrack(Ours)* | 57.6 | 70.2 | 72.6 | 1785 | 25515 | 114463 | - |

* indicates adding linear interpolation

5 Conclusions
We proposed a simple tracking framework called SimpleTrack for data association and multi-object tracking. SimpleTrack is built on a simple and effective similarity matrix (the EG matrix), which combines the embedding and GIoU distances. The proposed EG matrix improves not only the tracking accuracy but also the speed of JDE-based tracking methods. Moreover, we proposed a bottom-up feature fusion module for decoupling the Re-ID and detection tasks, and designed a tracking strategy for the JDE architecture by combining the BYTE strategy and the EG matrix. The results show that SimpleTrack is very competitive, and we hope that the EG matrix will facilitate the development of JDE-based methods.
References
- [1] Vandenhende, S., Georgoulis, S., Van Gansbeke, W., Proesmans, M., Dai, D., Van Gool, L.: Multi-task learning for dense prediction tasks: A survey. IEEE transactions on pattern analysis and machine intelligence (2021)
- [2] Bewley, A., Ge, Z., Ott, L., Ramos, F., Upcroft, B.: Simple online and realtime tracking. In: 2016 IEEE international conference on image processing (ICIP), IEEE (2016) 3464–3468
- [3] Bochinski, E., Eiselein, V., Sikora, T.: High-speed tracking-by-detection without using image information. In: 2017 14th IEEE international conference on advanced video and signal based surveillance (AVSS), IEEE (2017) 1–6
- [4] Liu, Q., Chu, Q., Liu, B., Yu, N.: Gsm: Graph similarity model for multi-object tracking. In: IJCAI. (2020) 530–536
- [5] Specker, A., Stadler, D., Florin, L., Beyerer, J.: An occlusion-aware multi-target multi-camera tracking system. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (2021) 4173–4182
- [6] Tang, S., Andriluka, M., Andres, B., Schiele, B.: Multiple people tracking by lifted multicut and person re-identification. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2017) 3539–3548
- [7] Wojke, N., Bewley, A., Paulus, D.: Simple online and realtime tracking with a deep association metric. In: 2017 IEEE international conference on image processing (ICIP), IEEE (2017) 3645–3649
- [8] Xiao, B., Wu, H., Wei, Y.: Simple baselines for human pose estimation and tracking. In: Proceedings of the European conference on computer vision (ECCV). (2018) 466–481
- [9] Xu, J., Cao, Y., Zhang, Z., Hu, H.: Spatial-temporal relation networks for multi-object tracking. In: Proceedings of the IEEE/CVF international conference on computer vision. (2019) 3988–3998
- [10] Zhang, Y., Sun, P., Jiang, Y., Yu, D., Yuan, Z., Luo, P., Liu, W., Wang, X.: Bytetrack: Multi-object tracking by associating every detection box. arXiv preprint arXiv:2110.06864 (2021)
- [11] Liang, C., Zhang, Z., Lu, Y., Zhou, X., Li, B., Ye, X., Zou, J.: Rethinking the competition between detection and reid in multi-object tracking. arXiv preprint arXiv:2010.12138 (2020)
- [12] Liang, C., Zhang, Z., Zhou, X., Li, B., Lu, Y., Hu, W.: One more check: Making "fake background" be tracked again. arXiv preprint arXiv:2104.09441 (2021)
- [13] Lu, Z., Rathod, V., Votel, R., Huang, J.: Retinatrack: Online single stage joint detection and tracking. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. (2020) 14668–14678
- [14] Pang, J., Qiu, L., Li, X., Chen, H., Li, Q., Darrell, T., Yu, F.: Quasi-dense similarity learning for multiple object tracking. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. (2021) 164–173
- [15] Wang, Z., Zheng, L., Liu, Y., Li, Y., Wang, S.: Towards real-time multi-object tracking. In: European Conference on Computer Vision, Springer (2020) 107–122
- [16] Zhang, Y., Wang, C., Wang, X., Liu, W., Zeng, W.: Voxeltrack: Multi-person 3d human pose estimation and tracking in the wild. arXiv preprint arXiv:2108.02452 (2021)
- [17] Zhang, Y., Wang, C., Wang, X., Zeng, W., Liu, W.: Fairmot: On the fairness of detection and re-identification in multiple object tracking. International Journal of Computer Vision 129 (2021) 3069–3087
- [18] Yu, E., Li, Z., Han, S., Wang, H.: Relationtrack: Relation-aware multiple object tracking with decoupled representation. IEEE Transactions on Multimedia (2022)
- [19] Chen, L., Ai, H., Zhuang, Z., Shang, C.: Real-time multiple people tracking with deeply learned candidate selection and person re-identification. In: 2018 IEEE international conference on multimedia and expo (ICME), IEEE (2018) 1–6
- [20] Stadler, D., Beyerer, J.: Modelling ambiguous assignments for multi-person tracking in crowds. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. (2022) 133–142
- [21] Milan, A., Leal-Taixé, L., Reid, I., Roth, S., Schindler, K.: Mot16: A benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831 (2016)
- [22] Dendorfer, P., Rezatofighi, H., Milan, A., Shi, J., Cremers, D., Reid, I., Roth, S., Schindler, K., Leal-Taixé, L.: Mot20: A benchmark for multi object tracking in crowded scenes. arXiv preprint arXiv:2003.09003 (2020)
- [23] Saleh, F., Aliakbarian, S., Rezatofighi, H., Salzmann, M., Gould, S.: Probabilistic tracklet scoring and inpainting for multiple object tracking. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (2021) 14329–14339
- [24] Shan, C., Wei, C., Deng, B., Huang, J., Hua, X.S., Cheng, X., Liang, K.: Tracklets predicting based adaptive graph tracking. arXiv preprint arXiv:2010.09015 (2020)
- [25] Wang, Q., Zheng, Y., Pan, P., Xu, Y.: Multiple object tracking with correlation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (2021) 3876–3886
- [26] Wu, J., Cao, J., Song, L., Wang, Y., Yang, M., Yuan, J.: Track to detect and segment: An online multi-object tracker. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. (2021) 12352–12361
- [27] Zhou, X., Koltun, V., Krähenbühl, P.: Tracking objects as points. In: European Conference on Computer Vision, Springer (2020) 474–490
- [28] Shao, S., Zhao, Z., Li, B., Xiao, T., Yu, G., Zhang, X., Sun, J.: Crowdhuman: A benchmark for detecting human in a crowd. arXiv preprint arXiv:1805.00123 (2018)
- [29] Sun, P., Cao, J., Jiang, Y., Zhang, R., Xie, E., Yuan, Z., Wang, C., Luo, P.: Transtrack: Multiple object tracking with transformer. arXiv preprint arXiv:2012.15460 (2020)
- [30] Zeng, F., Dong, B., Wang, T., Zhang, X., Wei, Y.: Motr: End-to-end multiple-object tracking with transformer. arXiv preprint arXiv:2105.03247 (2021)
- [31] Ess, A., Leibe, B., Schindler, K., Van Gool, L.: A mobile vision system for robust multi-person tracking. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, IEEE (2008) 1–8
- [32] Zhang, S., Benenson, R., Schiele, B.: Citypersons: A diverse dataset for pedestrian detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2017) 3213–3221
- [33] Dollár, P., Wojek, C., Schiele, B., Perona, P.: Pedestrian detection: A benchmark. In: 2009 IEEE conference on computer vision and pattern recognition, IEEE (2009) 304–311
- [34] Xiao, T., Li, S., Wang, B., Lin, L., Wang, X.: Joint detection and identification feature learning for person search. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2017) 3415–3424
- [35] Zheng, L., Zhang, H., Sun, S., Chandraker, M., Yang, Y., Tian, Q.: Person re-identification in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 1367–1376
- [36] Luiten, J., Hoffhues, A.: TrackEval. https://github.com/JonathonLuiten/TrackEval (2020)
- [37] Bernardin, K., Elbs, A., Stiefelhagen, R.: Multiple object tracking performance metrics and evaluation in a smart room environment. In: Sixth IEEE International Workshop on Visual Surveillance, in conjunction with ECCV. Volume 90., Citeseer (2006)
- [38] Ristani, E., Solera, F., Zou, R., Cucchiara, R., Tomasi, C.: Performance measures and a data set for multi-target, multi-camera tracking. In: European conference on computer vision, Springer (2016) 17–35
- [39] Luiten, J., Osep, A., Dendorfer, P., Torr, P., Geiger, A., Leal-Taixé, L., Leibe, B.: Hota: A higher order metric for evaluating multi-object tracking. International journal of computer vision 129 (2021) 548–578
- [40] Han, S., Huang, P., Wang, H., Yu, E., Liu, D., Pan, X.: Mat: Motion-aware multi-object tracking. Neurocomputing (2022)
- [41] Xu, Y., Ban, Y., Delorme, G., Gan, C., Rus, D., Alameda-Pineda, X.: Transcenter: Transformers with dense queries for multiple-object tracking. arXiv preprint arXiv:2103.15145 (2021)
- [42] Tokmakov, P., Li, J., Burgard, W., Gaidon, A.: Learning to track with object permanence. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. (2021) 10860–10869
- [43] Wang, Y., Kitani, K., Weng, X.: Joint object detection and multi-object tracking with graph neural networks. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE (2021) 13708–13715
- [44] Yang, F., Chang, X., Sakti, S., Wu, Y., Nakamura, S.: Remot: A model-agnostic refinement for multiple object tracking. Image and Vision Computing 106 (2021) 104091
- [45] Stadler, D., Beyerer, J.: On the performance of crowd-specific detectors in multi-pedestrian tracking. In: 2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE (2021) 1–12
- [46] Zhang, Y., Sheng, H., Wu, Y., Wang, S., Ke, W., Xiong, Z.: Multiplex labeling graph for near-online tracking in crowded scenes. IEEE Internet of Things Journal 7 (2020) 7892–7902