
SimpleTrack: Rethinking and Improving the JDE Approach for Multi-Object Tracking

by   Jiaxin Li, et al.

Joint detection and embedding (JDE) based methods usually estimate the bounding boxes and embedding features of objects with a single network in Multi-Object Tracking (MOT). In the tracking stage, JDE-based methods fuse target motion information and appearance information with the same rule, which can fail when a target is briefly lost or occluded. To overcome this problem, we propose a new association matrix, the Embedding and Giou (EG) matrix, which combines the embedding cosine distance and the Giou distance of objects. To further improve the performance of data association, we develop a simple, effective tracker named SimpleTrack, which introduces a bottom-up fusion method for re-identification features and a new tracking strategy based on our EG matrix. The experimental results indicate that SimpleTrack has strong data association capability, e.g., 61.6 HOTA and 76.3 IDF1 on MOT17. In addition, we apply the EG matrix to 5 different state-of-the-art JDE-based methods and achieve significant improvements in the IDF1, HOTA and IDsw metrics, while increasing the tracking speed of these methods by about 20%.





1 Introduction

Multi-object tracking (MOT), which aims to estimate the locations and identities of multiple targets in a video sequence, is a fundamental and challenging task in computer vision[1]. Recently, the Intersection over Union (IoU) metric and the Hungarian algorithm have become commonly used in the tracking phase of many tracking-by-detection paradigms[2, 3, 4, 5, 6, 7, 8, 9, 10]. However, when a target is occluded or lost for a period of time, it is difficult to retrieve the correct identity using the IoU distance alone. As a result, target identity switches occur from time to time. To alleviate this problem, many methods introduce re-identification (Re-ID) features of targets. Among them, JDE-based methods[11, 12, 13, 14, 15, 16, 17] have become popular due to their simplicity and efficiency.

In the data association stage, the accuracy of the similarity measurement determines the tracking performance. Most detection-based methods use the IoU distance as the similarity matrix in a cascade matching strategy, while JDE-based methods fuse motion information and appearance information into the similarity matrix for the linear assignment in the first matching round and use the IoU distance in the next round. However, according to our experiments, none of these existing formulations yields the best similarity matrix.

Figure 1: Example heatmaps of different association matrices in frame 560 of MOT17 sequence 11. (a) shows our EG matrix, which combines the embedding cosine distance and the Giou distance. (b) shows the IoU distance matrix used by detection-based methods. (c) shows the EM matrix, which combines the motion distance and the embedding cosine distance, as used by JDE-based methods. In these heatmaps, red cells indicate that the similarity distance between detection targets and tracking targets is larger, and blue cells indicate that it is smaller.

When objects are occluded due to interlacing, confusing sets arise that are difficult to allocate correctly, e.g., the set {det4, det7, track4, track10} in Figure 1 (a) and (b), and the set {det4, det7, track4, track9} in Figure 1 (c). When assigning these confusing sets, an inaccurate similarity distance leads to tracking failure. Based on the Hungarian method, the IoU distance matrix tends to match det4 with track4 and det7 with track10, while the EM distance matrix tends to match det4 with track4 and det7 with track9. Both lead to target identity switching, as shown in MOT17-Seq-11 of Figure 2. The principal reason for these matching failures is that the Kalman filter prediction becomes inaccurate as the time since the target was lost grows. This results in inaccurate IoU and motion distances, which in turn causes linear assignment errors.

To solve this problem, we propose the EG matrix, which uses the embedding cosine distance for long-range tracking of targets and the Giou distance for limiting the matching range of the embedding. To illustrate the robustness of the EG matrix, we apply it to 5 different JDE-based methods. As shown in Section 4.3, our implementations obtain improvements in MOT metrics, including tracking speed, HOTA, IDF1, and IDsw.

To further exploit the good properties of the EG matrix, we propose a simple tracking framework named SimpleTrack. In this framework, we design a bottom-up branch to represent Re-ID features. Unlike the fusion method used for detection features, it pays more attention to the high-level semantic layers. For the tracking part of SimpleTrack, we propose a novel tracking retrieval mechanism and design a new tracking strategy based on our EG matrix. The experimental results show that our tracking strategy surpasses JDE-based methods in most metrics, including tracking speed. Compared with the current SOTA method BYTE, our tracking strategy also improves the HOTA, IDF1, and IDsw metrics.

Figure 2: Robustness of our tracking strategy compared to BYTE and JDE-based methods. Boxes with the same color indicate tracking targets with the same identity; IDs marks tracking targets whose identities have switched. The checkmark indicates that the identity of the target has not changed.

2 Related Work

2.1 Joint Detection and Embedding

JDE-based methods typically employ a single network to predict both object bounding boxes and appearance features[11, 12, 13, 14, 15, 16, 17]. However, the competitive relationship between detection and identification hurts the optimization procedure in this multi-task learning of object detection and appearance feature extraction.

Recently, to tackle this problem, CSTrack[11] first designed a decoupling module to enhance the learned representations for both object detection and appearance identification. RelationTrack[18] uses a channel attention mechanism to decouple detection and re-identification. Different from CSTrack and RelationTrack, the decoupling strategy adopted in our SimpleTrack focuses on the nature of the appearance feature: we decouple at the feature-layer fusion stage of the network and, in contrast to the detection feature fusion, adopt a bottom-up fusion method.

2.2 Similarity Matrices

Location, motion and appearance are the most common cues in multi-object tracking, and they are often combined for the linear assignment. Detection-based methods[10] use the IoU distance as the similarity matrix, so the tracking accuracy mainly depends on the detector. SORT[2] fuses position and motion cues into the similarity matrix, which achieves good results in short-range matching. DeepSORT[7] improves the long-range tracking ability of trackers by merging appearance and motion cues, an approach commonly adopted by JDE-based methods.

All these methods use the location cue alone or fuse appearance and motion information as the similarity matrix. In contrast, we design a similarity matrix that combines appearance and location information, using the Giou distance matrix as the location cue instead of the common IoU matrix.

2.3 Tracking Strategy

The assignment problem between tracked and detected targets can be solved by the Hungarian algorithm based on different similarity matrices. SORT associates detection objects with tracking objects in a single matching step. DeepSORT adopts a cascade matching method that reduces the number of unmatched tracking targets. MOTDT[19] uses first the appearance similarity matrix and then the IoU distance matrix for cascade matching.

Recently, ByteTrack[10] proposed using low-confidence detection results for a secondary matching, which reduces target detection failures due to occlusion. Thereby, the occurrence of long-range tracking is reduced, making the linear assignment based on the IoU distance matrix more effective. MAA[20] adopts different strategies for blurred detection targets and tracking targets in the similarity matrix, which alleviates the inaccurate similarity distances caused by ambiguous targets. Both of these methods aim to compensate for the shortcomings of the similarity matrix. Based on the idea of BYTE[10], we redesign the similarity matrix for JDE-based methods and construct a new matching strategy.

3 SimpleTrack

In this section, we present the technical details of SimpleTrack, as illustrated in Figure 3. It consists of feature decoupling, the similarity matrix, and the tracking strategy.

Figure 3: The overall pipeline of SimpleTrack. The input image is first fed to a backbone network to extract high-resolution feature maps. Then we use different feature fusion methods for detection and Re-identity separately, and combine the embedding and Giou distance matrix as the similarity matrix. At the end of the association phase, the tracking retrieval mechanism is used to recover the undetected targets.

3.1 Feature Decoupling

We adopt DLA-34 as the backbone to strike a good balance between accuracy and speed. For feature decoupling, we employ different feature fusion methods for the detection and Re-ID representations. As illustrated in Figure 3, the detection branch still adopts the IDA-up structure of FairMOT[17]. We call this the up-to-bottom fusion method: it starts from low-level feature maps and continuously fuses higher-level feature maps.

However, Re-ID features tend to rely on higher-level semantic information to distinguish among homogeneous objects. Therefore, we take a simple bottom-up approach to fusing feature maps. Denote the input feature maps by $\{F_1, F_2, \dots, F_n\}$, where $n$ is the number of feature layers of different resolutions extracted by the backbone network. Then, the bottom-up fusion process can be expressed as

$$R_n = F_n, \qquad R_i = F_i \otimes \sigma\big(\phi(U(R_{i+1}))\big), \quad i = n-1, \dots, 1, \tag{1}$$

where $U(\cdot)$ represents an upsampling operation composed of a deformable convolution and a deconvolution, $\phi(\cdot)$ denotes a convolution layer for changing the channels of the features, $\sigma(\cdot)$ represents the Sigmoid activation layer, and $\otimes$ denotes element-wise multiplication.

It can be observed from Equation (1) that the fusion proceeds from bottom to top: the previously fused feature map guides the lower-level feature map until the final fusion result is obtained. As the experimental results in Section 4 will show, the computational cost of this fusion method is minimal.
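The bottom-up fusion above can be sketched in NumPy. This is a minimal illustration under assumptions: nearest-neighbour upsampling stands in for the deconvolution/deformable-convolution operation $U(\cdot)$, the channel-changing convolution $\phi(\cdot)$ is omitted, and the function names are hypothetical, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def upsample2x(x):
    # Nearest-neighbour stand-in for the upsampling operation U(.)
    # (deformable convolution + deconvolution in the paper).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def bottom_up_fuse(feature_maps):
    """Fuse feature maps from the highest (smallest) level downward.

    feature_maps: list [F_1, ..., F_n] ordered from low-level (large)
    to high-level (small), each an (H, W) array. The previously fused
    map gates the lower-level map via a sigmoid.
    """
    fused = feature_maps[-1]                  # R_n = F_n
    for F in reversed(feature_maps[:-1]):     # i = n-1, ..., 1
        gate = sigmoid(upsample2x(fused))     # sigma(U(R_{i+1}))
        fused = F * gate                      # R_i = F_i (element-wise *) gate
    return fused
```

The loop makes the direction of the fusion explicit: each lower-resolution result is upsampled and used to gate the next finer map, so high-level semantics guide the final Re-ID representation.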

3.2 Embedding and Giou Matrix

The similarity matrix is usually constructed from location, motion and appearance information. Let $D^{L}$, $D^{M}$ and $D^{E}$ denote the location distance matrix, the motion distance matrix and the appearance distance matrix, respectively. We fuse $D^{L}$ and $D^{E}$ into the similarity matrix called the EG matrix. $D^{L}$ can be represented as

$$D^{L}_{ij} = 1 - \mathrm{GIoU}(B^{t}_{i}, B^{d}_{j}) = 1 - \left( \mathrm{IoU}(B^{t}_{i}, B^{d}_{j}) - \frac{|C_{ij} \setminus (B^{t}_{i} \cup B^{d}_{j})|}{|C_{ij}|} \right), \tag{2}$$

where $B^{t}_{i}$ and $B^{d}_{j}$ represent the bounding boxes of the tracking objects and the detection objects respectively, and $C_{ij}$ is the minimum enclosing rectangle of the two bounding boxes.

$D^{E}$ can be represented as

$$D^{E}_{ij} = 1 - \frac{e^{t}_{i} \cdot e^{d}_{j}}{\|e^{t}_{i}\| \, \|e^{d}_{j}\|}, \tag{3}$$

where $e^{t}_{i}$ and $e^{d}_{j}$ represent the appearance embedding vectors of the tracking and detection objects.

Note that the matrix in Equation (2) is in fact the Giou distance matrix and that the matrix in Equation (3) defines the cosine distance matrix. Then the Embedding and Giou matrix, denoted $D^{EG}$, can be represented as

$$D^{EG} = \alpha \, D^{L} + \beta \, D^{E}, \tag{4}$$

where $\alpha$ and $\beta$ represent two hyperparameters, $D^{L}$ denotes the Giou distance matrix, and $D^{E}$ denotes the embedding cosine distance matrix.

3.3 Tracking Strategy in SimpleTrack

Inspired by BYTE[10], we develop a tracking strategy based on our EG matrix. As shown in Algorithm 1, we follow the idea of secondary matching with low-confidence detections adopted in BYTE, and use the EG matrix to replace the similarity matrix in the cascade matching. In addition, after the secondary matching, we use the cosine distance to retrieve the unmatched tracklets.

In the retrieval process, presented by the red text in Algorithm 1, we use the Kalman filter to predict the center point of each unmatched tracking target. To compensate for the drift of the Kalman filter, we select the appearance embedding vectors in the 3×3 range around the predicted center point. Afterward, the minimum cosine distance between these vectors and the embedding vector memorized in the unmatched tracking object is calculated. If the distance is less than the threshold, the tracklet is retrieved. With this tracking retrieval mechanism, we can recover occluded (missed) detection boxes by using the predictions of the Kalman filter.
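The retrieval step can be sketched as follows. This is an illustrative helper under assumptions: `retrieve_tracklet`, its argument layout, and the dense per-pixel embedding map are hypothetical names and shapes, not the paper's API.

```python
import numpy as np

def retrieve_tracklet(track_emb, emb_map, center, thresh):
    """Try to recover an unmatched tracklet.

    track_emb: remembered embedding of the lost track, shape (C,)
    emb_map:   per-pixel embedding map of the current frame, (H, W, C)
    center:    (row, col) Kalman-predicted center of the track
    thresh:    cosine-distance threshold below which the track is retrieved
    Returns (retrieved, best_distance).
    """
    H, W, _ = emb_map.shape
    r, c = center
    best = np.inf
    # Scan the 3x3 neighbourhood around the predicted center to
    # compensate for Kalman-filter drift.
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                e = emb_map[rr, cc]
                cos = 1.0 - np.dot(track_emb, e) / (
                    np.linalg.norm(track_emb) * np.linalg.norm(e))
                best = min(best, cos)
    return best < thresh, best
```

Only the minimum distance inside the window matters: a single well-matching pixel embedding near the predicted center is enough to revive the tracklet.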

Input: a video sequence V; object detector Det; Kalman filter KF; detection score thresholds τ_high and τ_low; tracking score threshold ε; tracking retrieval threshold σ_r
Output: tracks T of the video

for each frame f_k in V do
    D_k ← Det(f_k)
    D_high ← ∅; D_low ← ∅
    for d in D_k do
        if score(d) > τ_high then D_high ← D_high ∪ {d}
        else if score(d) > τ_low then D_low ← D_low ∪ {d}
    for t in T do
        t ← KF(t)                                 // predict the new location
    // first association with EG matrix
    associate T and D_high using the EG matrix
    D_remain ← remaining object boxes from D_high
    T_remain ← remaining tracks from T
    // second association with EG matrix
    associate T_remain and D_low using the EG matrix
    T_lost ← remaining tracks from T_remain
    // tracking retrieval
    for t in T_lost do
        find the surrounding embedding vectors around the KF-predicted center of t in the current frame
        select the most similar appearance embedding vector by cosine similarity
        if its cosine distance < σ_r then retrieve t into T
    // delete unmatched tracks
    T ← T minus the tracks that remain unmatched
    // initialize new tracks
    for d in D_remain do
        if score(d) > ε then T ← T ∪ {d}
return T
Algorithm 1: Pseudo-code of SimpleTrack
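Each association round in Algorithm 1 reduces to a gated linear assignment over a cost matrix (the EG matrix for the first two rounds). Below is a minimal sketch; it uses a greedy matcher as a dependency-free stand-in for the Hungarian algorithm the paper actually uses, and `greedy_associate` is a hypothetical helper name.

```python
import numpy as np

def greedy_associate(cost, max_cost):
    """Greedy stand-in for one Hungarian assignment round.

    Repeatedly takes the cheapest remaining (track, detection) pair
    whose cost does not exceed max_cost. Returns (matches,
    unmatched_rows, unmatched_cols).
    """
    matches, used_r, used_c = [], set(), set()
    # Sort all (cost, row, col) triples by ascending cost.
    pairs = sorted((float(cost[r, c]), r, c)
                   for r in range(cost.shape[0])
                   for c in range(cost.shape[1]))
    for v, r, c in pairs:
        if v > max_cost:
            break  # all remaining pairs are too expensive
        if r not in used_r and c not in used_c:
            matches.append((r, c))
            used_r.add(r)
            used_c.add(c)
    unmatched_r = [r for r in range(cost.shape[0]) if r not in used_r]
    unmatched_c = [c for c in range(cost.shape[1]) if c not in used_c]
    return matches, unmatched_r, unmatched_c
```

The `max_cost` gate plays the role of the assignment thresholds in Section 4.2.1: pairs whose EG distance exceeds the gate remain unmatched and flow into the next round (or into retrieval).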

4 Experiments

4.1 Datasets and Metrics

4.1.1 Datasets.

We evaluate SimpleTrack on the private detection tracks of the MOT17[21] and MOT20[22] datasets. The former contains 14 video sequences for multi-target tracking recorded by static or moving cameras. The latter consists of 8 video sequences with static cameras focusing on very crowded scenes, split into 4 sequences each for training and testing. For ablation studies, we follow [23, 24, 25, 26, 27] and split the train set into two halves, as the annotations of the test split are not publicly available. We fuse CrowdHuman[28] and the MOT17 half training split as the training dataset for ablation experiments, following [29, 26, 30, 10, 27]. When testing on the MOT17 test set, we additionally train on the ETH[31], CityPerson[32], CalTech[33], CUHK-SYSU[34] and PRW[35] datasets, following [11, 15, 17].

4.1.2 Evaluation Metrics.

To evaluate tracking performance, we use TrackEval[36] to compute all metrics, including MOTA[37], IDF1[38], false positives (FP), false negatives (FN), identity switches (IDSW), and the recently proposed HOTA[39]. HOTA comprehensively evaluates both detection and data association performance; IDF1 focuses more on association performance, while MOTA mainly reflects detection performance.

4.2 Implementation Details

4.2.1 Tracker.

In the tracking phase, unless otherwise specified, the default high detection score threshold is 0.3, the low threshold is 0.2, the trajectory initialization score is 0.6, and the trajectory retrieval threshold is 0.1. In the linear assignment step, the assignment threshold is 0.8 for high-confidence detections and 0.4 for low-confidence detections.
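The thresholds above can be collected into a single configuration, which also makes the high/low split of Algorithm 1 concrete. The dictionary keys and helper name below are hypothetical; the values are the defaults stated in this subsection.

```python
# Hypothetical configuration names; values are the defaults from Sec. 4.2.1.
TRACK_CFG = {
    "det_high_thresh": 0.3,    # detections used in the first association
    "det_low_thresh": 0.2,     # detections used in the second association
    "init_thresh": 0.6,        # trajectory initialization score
    "retrieval_thresh": 0.1,   # tracking retrieval threshold
    "match_thresh_high": 0.8,  # assignment gate, high-confidence round
    "match_thresh_low": 0.4,   # assignment gate, low-confidence round
}

def split_detections(scores, cfg=TRACK_CFG):
    """Partition detection indices into the high- and low-confidence
    sets consumed by the two association rounds."""
    high = [i for i, s in enumerate(scores) if s > cfg["det_high_thresh"]]
    low = [i for i, s in enumerate(scores)
           if cfg["det_low_thresh"] < s <= cfg["det_high_thresh"]]
    return high, low
```

Detections scoring at or below the low threshold are discarded entirely; everything between the two thresholds is reserved for the secondary matching.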

4.2.2 Detector and Embedding.

We use SimpleTrack to extract the location features and appearance features of objects. The backbone is DLA-34, initialized with COCO-pretrained weights. The training schedule is 30 epochs on the combination of MOT17, CrowdHuman, and the other datasets mentioned above. The input image size is 1088×608. Rotation, scaling and color jittering are adopted as data augmentation during training. The model is trained on 4 NVIDIA TITAN RTX GPUs with a batch size of 32. The optimizer is Adam, and the initial learning rate is decayed once at the 20th epoch. The total training time is about 25 hours. FPS is measured with a single NVIDIA RTX2080Ti and a batch size of 1.

4.3 Ablation Studies

4.3.1 Ablation on SimpleTrack.

SimpleTrack mainly comprises three innovations: bottom-up decoupling, the EG similarity matrix and tracking retrieval. We conduct ablation experiments on the MOT17 validation set for these three modules; the results are shown in Table 1. It can be observed that adding bottom-up decoupling to FairMOT increases IDF1 and MOTA. In addition, after replacing the similarity matrix of the JDE-based strategy with the EG matrix, IDF1 improves from 76.1 to 78.1, MOTA from 71.4 to 72.5, HOTA from 60.2 to 61.5, and IDs decreases from 451 to 186. After further adding the tracking retrieval mechanism, IDF1 increases from 78.1 to 78.5, HOTA from 61.5 to 61.7, and IDs decreases from 186 to 182. These results show that the modules proposed in SimpleTrack are necessary and effective.

Model settings      | IDF1 | MOTA | HOTA | IDs | FP   | FN    | FPS
FairMOT (baseline)  | 75.6 | 71.1 | -    | 327 | -    | -     | -
+BU-D               | 76.1 | 71.4 | 60.2 | 451 | 3319 | 11655 | 19.7
+BU-D +EG           | 78.1 | 72.5 | 61.5 | 186 | 3260 | 11430 | 24
+BU-D +EG +TR       | 78.5 | 72.5 | 61.7 | 182 | 3212 | 11456 | 23.8
Table 1: Ablation experiment on SimpleTrack. The baseline is FairMOT; BU-D, EG and TR stand for bottom-up decoupling, EG similarity matrix and tracking retrieval strategy, respectively. The best results are shown in bold.

4.3.2 Analysis on the Similarity Matrix.

We employ different distance matrices as the similarity measure and evaluate their data association ability on the MOT17 half validation set. As Table 2 shows, using only the Giou or embedding matrix for data association does not perform well. The table also shows that combining the embedding matrix with the IoU matrix improves association but reduces MOTA. Compared with the IoU matrix used in detection-based methods, our EG matrix improves IDF1 from 75.7 to 78.5 and HOTA from 60.4 to 61.7, and decreases IDs from 285 to 182. Compared with the embedding and motion matrix used in JDE-based methods, our EG matrix improves both the MOT metrics and the tracking speed.

Similarity Matrix    | IDF1 | MOTA | HOTA | IDs | FP   | FN    | FPS
IoU                  | 75.7 | 72.5 | 60.4 | 285 | 3510 | 11048 | 25
Giou                 | 66.4 | 70.4 | 54.8 | 378 | 4631 | 10956 | 23.6
Embedding            | 64.1 | 65.0 | 53.4 | 749 | 6120 | 12012 | 24.2
Embedding and Motion | 76.1 | 71.4 | 60.2 | 451 | 3319 | 11655 | 19.7
Embedding and IoU    | 77.2 | 72.3 | 61.4 | 263 | 2560 | 12144 | 24
Embedding and Giou   | 78.5 | 72.5 | 61.7 | 182 | 3212 | 11456 | 23.8
Table 2: Data association comparison of different similarity matrices. The best results are shown in bold.

4.3.3 Applications to Other JDE-based Trackers.

We apply our EG matrix to 5 different JDE-based trackers: JDE[15], FairMOT[17], CSTrack[11], TraDes[26] and QuasiDense[14]. Among these trackers, JDE, FairMOT, CSTrack and TraDes merge the motion and Re-ID similarities, with the first three following the same fusion strategy; QuasiDense uses Re-ID similarity alone. It can be observed from Table 3 that using the EG matrix instead of the EM matrix enhances both tracking performance and tracking speed. Taking JDE[15] as an example, replacing the EM matrix with the EG matrix alone improves HOTA from 50.1 to 50.9, IDF1 from 63.0 to 64.4, MOTA from 59.3 to 59.5, FPS from 16.64 to 21.29, and decreases IDs from 621 to 558. Combined with the BYTE strategy, our EG matrix still improves HOTA from 50.4 to 50.9, IDF1 from 64.1 to 64.8, FPS from 18.52 to 25.48, and decreases IDs from 437 to 388.


Tracker                | HOTA | IDF1 | MOTA | IDs | FPS
JDE[15]                | 50.1 | 63.0 | 59.3 | 621 | 16.64
JDE + EG               | 50.9 | 64.4 | 59.5 | 558 | 21.29
JDE + BYTE             | 50.4 | 64.1 | 60.2 | 437 | 18.52
JDE + BYTE + EG        | 50.9 | 64.8 | 60.1 | 388 | 25.48
FairMOT[17]            | 57.0 | 72.4 | 69.1 | 372 | 21.01
FairMOT + EG           | 57.5 | 73.3 | 69.5 | 236 | 25.18
FairMOT + BYTE         | -    | 74.2 | 70.4 | 232 | -
FairMOT + BYTE + EG    | 58.5 | 74.5 | 70.6 | 188 | 24.7
CSTrack[11]            | 58.7 | 72.0 | 67.9 | 423 | 20.39
CSTrack + EG           | 59.3 | 73.0 | 68.2 | 322 | 24.3
CSTrack + BYTE         | 59.8 | 73.9 | 69.2 | 298 | 20.72
CSTrack + BYTE + EG    | 60.0 | 73.8 | 69.6 | 249 | 24.25
TraDes[26]             | 58.6 | 71.7 | 68.3 | 293 | 15.8
TraDes + EG            | 58.4 | 71.2 | 68.9 | 263 | 16.22
TraDes + BYTE + EG     | 59.0 | 71.5 | 68.5 | 483 | 16.5
QuasiDense[14]         | 56.2 | 67.7 | 67.1 | 386 | 4.1
QuasiDense + EG        | 58.5 | 71.9 | 67.4 | 295 | 4.8
QuasiDense + BYTE + EG | 57.9 | 70.9 | 67.5 | 252 | 4.8
Table 3: Results of applying SimpleTrack to 5 different JDE-based trackers on the MOT17 validation set. "+ EG" denotes the tracking method using only the EG matrix, and "+ BYTE + EG" denotes the tracking method combining the EG matrix and BYTE.

4.3.4 The Accuracy Compared with other Association Methods.

We compare SimpleTrack with other association methods, including the recent SOTA algorithm BYTE and the tracking algorithm used in JDE-based methods[17, 15, 11, 12]. As shown in Table 4, SimpleTrack improves the IDF1 metric of the JDE strategy from 76.1 to 78.5, MOTA from 71.4 to 72.5, HOTA from 60.2 to 61.7, and decreases IDs from 451 to 182. Compared with BYTE, SimpleTrack improves IDF1 from 75.7 to 78.5 and HOTA from 60.4 to 61.7, and decreases IDs from 285 to 182. These results demonstrate that our tracking method is more effective than the JDE strategy and improves data association accuracy over the BYTE strategy.

Track Method       | IDF1 | MOTA | HOTA | IDs | FP   | FN    | FPS
JDE                | 76.1 | 71.4 | 60.2 | 451 | 3319 | 11655 | 19.7
BYTE               | 75.7 | 72.5 | 60.4 | 285 | 3510 | 11048 | 25
SimpleTrack (Ours) | 78.5 | 72.5 | 61.7 | 182 | 3212 | 11456 | 23.8
Table 4: Comparison of different association methods on the MOT17 validation set. JDE denotes the tracking strategy employed by [11, 12, 15, 17] and BYTE denotes the tracking strategy employed by [10]. The best results are shown in bold.

4.3.5 The Speed Compared with other Association Methods.

From Table 4 and Table 2, we can observe that our SimpleTrack algorithm utilizes the embedding information yet is still nearly 20% faster than the JDE tracking strategy. A more detailed comparison across video sequences is given in Figure 4 (a): our tracking algorithm is only slightly slower than BYTE, which does not use embedding information at all. Figure 4 (b) shows the time consumption of the main modules in the tracking phase. The JDE-based tracking strategy spends considerable time fusing the embedding and motion information, represented by the orange dotted square in Figure 4 (b). For the EG matrix, we only need to compute the Giou distance and add it to the embedding distance; its time consumption is represented by the orange dotted star in Figure 4 (b).

Figure 4: Comparison of tracking algorithm speeds. (a) shows the tracking speed of different tracking algorithms. (b) shows the time consumption of the main modules in the tracking phase.

4.3.6 Comparison with Preceding SOTAs.

In this part, we compare the performance of SimpleTrack with preceding SOTA methods on MOT17 and MOT20. The results are reported in Table 5 and Table 6, respectively. As shown in these two tables, SimpleTrack ranks among the top methods in various metrics and surpasses its counterparts by large margins, especially on the HOTA, IDF1 and IDs metrics. Besides, compared with other MOT tracking methods, SimpleTrack has a clear speed advantage.

4.3.7 Visualization Results.

We show some scenarios prone to identity switching in Figure 2, which contains 3 sequences from the MOT17 half validation set. We use different tracking strategies to generate the visualization results. It can be observed that SimpleTrack effectively handles the identity switching caused by the occlusion of tracking targets. In addition, some tracking examples on the MOT17 test set are shown in Figure 5.

Method             | HOTA | IDF1 | MOTA | IDs  | FP    | FN     | FPS
TraDes[26]         | 52.7 | 63.9 | 69.1 | 3555 | 20892 | 150060 | 17.5
MAT                | 53.8 | 63.1 | 69.5 | 2844 | 30660 | 138741 | 9.0
QuasiDense[14]     | 53.9 | 66.3 | 68.7 | 3378 | 26589 | 146643 | 20.3
SOTMOT[40]         | -    | 71.9 | 71.0 | 5184 | 39537 | 118983 | 16.0
TransCenter[41]    | 54.5 | 62.2 | 73.2 | 4614 | 23112 | 123738 | 1.0
PermaTrackPr[42]   | 55.5 | 68.9 | 73.8 | 3699 | 28998 | 115104 | 11.9
TransTrack[29]     | 54.1 | 63.5 | 75.2 | 3603 | 50157 | 86442  | 10.0
FUFET[24]          | 57.9 | 68.0 | 76.2 | 3237 | 32796 | 98475  | 6.8
FairMOT[17]        | 59.3 | 72.3 | 73.7 | 3303 | 27507 | 117477 | 18.9
CSTrack[11]        | 59.3 | 72.6 | 74.9 | 3567 | 23847 | 114303 | 15.8
Semi-TCL[43]       | 59.8 | 73.2 | 73.3 | 2790 | 22944 | 124980 | -
ReMOT[44]          | 59.7 | 72.0 | 77.0 | 2853 | 33204 | 93612  | 1.8
CrowdTrack[45]     | 60.3 | 73.6 | 75.6 | 2544 | 25950 | 109101 | -
CorrTracker[25]    | 60.7 | 73.6 | 76.5 | 3369 | 29808 | 99510  | 15.6
RelationTrack[18]  | 61.0 | 74.7 | 73.8 | 1374 | 27999 | 118623 | 8.5
SimpleTrack(Ours)  | 61.0 | 75.7 | 74.1 | 1500 | 17379 | 127053 | 22.53
SimpleTrack(Ours)* | 61.6 | 76.3 | 75.3 | 1260 | 22317 | 116010 | -

* indicates adding linear interpolation
indicates JDE-based methods
Table 5: Comparison with state-of-the-art methods under the "private detector" protocol on the MOT17 test set. The best results are shown in bold. MOT17 contains rich scenes and half of the sequences are captured with camera motion.
Method             | HOTA | IDF1 | MOTA | IDs  | FP     | FN     | FPS
MLT[46]            | 43.2 | 54.6 | 48.9 | 2187 | 45660  | 216803 | 3.7
FairMOT[17]        | 54.6 | 67.3 | 61.8 | 5243 | 103440 | 88901  | 13.2
TransCenter[41]    | -    | 50.4 | 61.9 | 4653 | 45895  | 146347 | 1.0
TransTrack[29]     | 48.5 | 59.4 | 65.0 | 3608 | 27197  | 150197 | 7.2
Semi-TCL[43]       | 55.3 | 70.1 | 65.2 | 4139 | 61209  | 114709 | -
CorrTracker[25]    | -    | 69.1 | 65.2 | 5183 | 79429  | 95855  | 8.5
CSTrack[11]        | 54.0 | 68.6 | 66.6 | 3196 | 25404  | 144358 | 4.5
GSDT[43]           | 53.6 | 67.5 | 67.1 | 3131 | 31913  | 135409 | 0.9
SiamMOT[12]        | -    | 67.8 | 70.7 | -    | 22689  | 125039 | 6.7
RelationTrack[18]  | 56.5 | 70.5 | 67.2 | 4243 | 61134  | 104597 | 2.7
SOTMOT[40]         | -    | 71.4 | 68.6 | 4209 | 57064  | 101154 | 8.5
SimpleTrack(Ours)  | 56.6 | 69.6 | 70.6 | 2434 | 18400  | 131209 | 7.0
SimpleTrack(Ours)* | 57.6 | 70.2 | 72.6 | 1785 | 25515  | 114463 | -
* indicates adding linear interpolation
indicates JDE-based methods
Table 6: Comparison with state-of-the-art methods under the "private detector" protocol on the MOT20 test set. The best results are shown in bold. The scenes in MOT20 are much more crowded than those in MOT17.
Figure 5: Tracking results of SimpleTrack on the MOT17 test dataset.

5 Conclusions

We have proposed a simple tracking framework called SimpleTrack for data association in multi-object tracking. SimpleTrack is built around a simple and effective similarity matrix, the EG matrix, which combines the embedding and Giou distances. The EG matrix improves not only the tracking accuracy but also the speed of JDE-based tracking methods. Moreover, we proposed a bottom-up feature fusion module for decoupling the Re-ID and detection tasks, and designed a tracking strategy for the JDE architecture by combining the BYTE strategy and the EG matrix. The results show that SimpleTrack is highly competitive, and we hope that the EG matrix will facilitate the development of JDE-based methods.