Multiple Object Tracking (MOT) is one of the most critical mid-level computer vision tasks, with wide-ranging applications such as visual surveillance, sports events, and robotics. Owing to the great success of object detection techniques, the detection-based paradigm dominates the MOT community. The critical components of the paradigm include an affinity model that tells how likely two objects belong to a single identity, and a data association method that links objects across frames based on their affinities, so as to form a complete trajectory for each identity.
Tracklet-based association is a well-accepted approach in detection-based MOT [16, 36, 34, 31]. It is usually constructed in two stages. In stage I, we link detection responses in adjacent frames using straightforward strategies to form short tracklets. In stage II, we mainly perform two tasks: extract much finer features from the tracklets, including temporal, spatial, appearance, and motion data, to construct a tracklet-level affinity model; then perform graph-based association across all of them and conduct any necessary post-processing. This approach has two advantages over associating detection responses directly. With tracklet-based association, the number of connected components is significantly reduced, so that investigating detection dependency across distant frames is computationally affordable. Besides, it is capable of extracting high-level information while reducing the bounding box noise introduced by bad detectors.
There are various ways to define the affinity model in stage I, such as bounding box intersection-over-union (IoU), spatial-temporal distance, and appearance similarity. The harder part lies in stage II. For the affinity model, traditional hand-crafted features or individually learned affinities do not work well [16, 37], due to the lack of data-driven properties when jointly considering multiple correlated association choices. For the association, it is common to use a global optimization algorithm, such as linear programming or network flow, to link these short tracklets. However, it is non-trivial to define a proper cost function for these approaches. Earlier trackers use hand-crafted cost functions and perform inference afterward. Sometimes, they have to use grid search and empirical tuning to find the hyper-parameters producing the best outcome.
We propose Tracklet Association Tracker (TAT), a bi-level optimization framework that improves on the work of Schulter et al. in three key aspects. First, we use deep metric learning to extract an appearance embedding for each detection response. Second, we introduce tracklets into the framework, which not only accelerates the computation but also provides motion dependency. Last but not least, we adopt an approximate gradient that significantly improves the training process of the association model. By clarifying the boundary of cost values, the framework ensures that convergence can always be achieved and includes all cost parameters in the end-to-end training process while retaining high accuracy.
All in all, our contributions include:
We introduce tracklet association into the bi-level optimization framework. By exploiting tracklets, our system improves performance under long-term occlusion.
We conduct comprehensive discussions on the impact of each component we introduce. Besides, we give a quantitative evaluation of the importance of alignment and noisy outlier removal, which shows that both older and modern detectors can benefit from these strategies.
2 Related Work
Since the tracking-by-detection approach became the mainstream in multi-target tracking, data association has been regarded as the core part of MOT. There are many ways to solve data association. Widely adopted probabilistic inference methods, such as the Kalman filter, the extended Kalman filter, and the particle filter, rely on the first-order Markov assumption to estimate the position in a new frame based on previous states and present observations. New detections can be assigned locally between adjacent or nearby frames using bipartite algorithms such as the optimal Hungarian algorithm [16, 35], or k-partite graph matching. Multiple Hypothesis Tracking (MHT) [1, 17] postpones determining ambiguous matches until enough information is obtained. This results in a combinatorially increasing search space, so the hypothesis tree is pruned regularly. These local-data-association-based trackers are sensitive to occlusion and noisy detections.
Tracking algorithms with global or delayed optimization [1, 17] try to produce longer and more robust trajectories by considering more frames, or even the entire sequence, in one shot. A popular paradigm is to formulate MOT as solving an extremum problem on a graph [31, 32, 33, 36, 3]. For instance, multi-cut-based approaches [31, 32, 33] decompose the graph into isolated components with a multi-cut algorithm, so that detections in the same component belong to the same identity. The Generalized Minimum Clique Graph treats detection association as finding the minimum clique in a corresponding graph. However, these graph-based formulations are NP-hard, indicating that only a sub-optimal solution can be achieved even with expensive approximate methods.
In contrast, network-flow-based tracking [37, 24, 10, 4] uses a graph formulation that can be solved in polynomial time. It restricts the cost function to contain only unary and pairwise terms to achieve efficient inference. For instance, the work of Zhang et al. and Pirsiavash et al. both assume logarithmic cost functions and solve min-cost max-flow by push-relabel or successive shortest path algorithms. Dehghan et al. use network flow to simultaneously predict detections and associate identities. Butt and Collins encode neighboring connections differently, so that network flow captures the relationships among three consecutive frames. Our work is most similar to that of Schulter et al. in that we both formulate the MOT problem in the network flow paradigm and solve it as a bi-level optimization problem. Our work differs in its hierarchical design and efficiency.
Recently, deep learning has been used to improve tracking performance. DeepMatching applies a convolutional neural network (CNN) to yield non-rigid matching between image pairs. Sadeghian et al. encode history trajectories into embeddings with a long short-term memory (LSTM) network and compute embedding affinity with present detections. Quadruplet CNN trains a multi-task loss to jointly learn object appearance and bounding box regression, and adopts minimax label propagation for matching. Instead, our work uses a triplet loss and solves the association problem with a learnable association framework.
3 MOT Framework
Fig. 1 illustrates the fundamental steps in TAT. The components include: 1) a proposal aligner, a proposal selector, and a triplet network to achieve an accurate appearance model; 2) a tracklet generation module to connect neighboring bounding boxes; 3) an end-to-end bi-level optimization module to associate tracklets; and 4) a subgraph merging and post-processing module to propose final trajectories. In this section, we elaborate on each step.
3.1 Appearance Model
Proposal aligner. To train an appearance model with high discriminative ability, it is essential to have bounding boxes aligned with targets; otherwise, the appearance model is prone to ambiguity. Older detectors suffer from poor localization accuracy, while modern detectors are limited by the variance of preset anchor sizes and aspect ratios. A secondary alignment is therefore beneficial, since detections can be treated as better anchors than preset ones. Hence, we adopt a region proposal aligner with a convolutional neural network architecture. It takes slightly padded image patches of the corresponding detection responses as input, and outputs the aligned bounding box offsets and their respective classification scores. This allows us to treat the raw detections as anchors and to perform a regression enhancement on top of an already accurate baseline.
Proposal selector. After aligning the detection boxes, two boxes of a single target may overlap more than before. We use non-maximum suppression (NMS) to remove duplicates. Further, we use the classification score of the proposal aligner as a new indicator, which we name humanity, since it reflects the probability of the target being a human w.r.t. the new box coordinates. We filter out boxes with both low humanity and low detection score, because a true positive is unlikely to perform badly in both measures simultaneously; hence we have high confidence in regarding the removed boxes as false positives. Through proposal selection, the remaining proposals are cleaner and less likely to result in redundant overlapping trajectories.
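As a concrete illustration, the selector can be sketched in a few lines. The thresholds below (NMS at 0.7, humanity at 0.1, detection score at 0) are the values reported in the ablation study; greedily ordering by humanity is an implementation assumption:

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_proposals(boxes, humanity, det_score,
                     nms_thresh=0.7, h_thresh=0.1, d_thresh=0.0):
    """Greedy NMS ordered by humanity score, then drop boxes that are
    weak under BOTH measures (likely false positives)."""
    order = np.argsort(-humanity)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < nms_thresh for j in keep):
            keep.append(i)
    # a true positive rarely scores low on both measures
    return [i for i in keep
            if humanity[i] >= h_thresh or det_score[i] >= d_thresh]
```

In this sketch a box survives if it is not suppressed by a higher-humanity duplicate and at least one of its two scores clears the threshold.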
Triplet Network. In a surveillance video, it is common to assume that human appearances do not change much in a short period. Based on the cleaned proposals processed by the proposal aligner and selector, we use metric learning [28, 6, 20, 34, 30] to learn an embedding for each tracking target candidate so that the distance between two targets with the same identity is smaller than that with different identities. We use the triplet loss as the training goal:
$L_{\text{triplet}} = \sum_{(a,p,n)} \max\left(0,\; \|f_a - f_p\|_2 - \|f_a - f_n\|_2 + \alpha\right)$, where $(a, p, n)$ denotes an instance of a triplet in which $a$ is the anchor, $p$ is a candidate from the same trajectory (positive sample), and $n$ is a candidate from a different one (negative sample). $\|f_x - f_y\|_2$ denotes the Euclidean distance between the embeddings of $x$ and $y$, which is called the appearance distance, and $\alpha$ is the margin.
We apply a convolutional neural network to learn the embedding, with the architecture illustrated in the “Triplet Network” module of Fig. 1. The convolutional feature maps of the original target images are flattened, fed into fully connected layers, and finally normalized by an L2-normalization (L2-Norm) layer. The output of the L2-Norm layer is the 128-dimensional appearance embedding.
We adopt online sampling to generate more instances. The sampling strategy is: 1) we sample a number of persons, and several instances of each, per batch; 2) we divide each trajectory into segments so that the temporal distance within a segment does not exceed a threshold; 3) in each batch, we select one segment to draw samples of a person, with target detection boxes randomly shifted at each frame as data augmentation; 4) we add targets that the detector has missed (but that appear in the labeled ground truth) into the training set.
Similar to FaceNet, we make full use of every positive sample pair, but only use the most violating negative samples selected by hard negative mining, together with randomly drawn violating samples, to construct the triplet set. Consequently, the process produces a fixed number of triplets per batch.
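A minimal sketch of the triplet loss and hard negative mining described above, on L2-normalized embeddings; the margin value here is an assumption, since it is not restated in the text:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss: pull same-identity pairs closer than
    cross-identity pairs by at least `margin` (margin is illustrative)."""
    d_ap = np.linalg.norm(anchor - positive)  # appearance distance, same id
    d_an = np.linalg.norm(anchor - negative)  # appearance distance, other id
    return max(0.0, d_ap - d_an + margin)

def hardest_negative(anchor, negatives):
    """Hard negative mining: the different-identity embedding
    closest to the anchor is the most violating one."""
    dists = np.linalg.norm(negatives - anchor, axis=1)
    return negatives[np.argmin(dists)]
```

In training, the hardest negatives would be mixed with randomly drawn violating negatives, as the sampling strategy above describes.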
3.2 Two-level Association
The difficulty of data association is different in sparse and dense scenes. Inspired by Huang et al. , we use a two-level association paradigm. The method connects neighboring boxes into tracklets at the low level and performs the end-to-end tracklet association at the high level.
3.2.1 Tracklet Generation
Performing low-level association with a simple model helps cut down the number of nodes and candidate edges. To learn a robust low-level affinity model, we take both appearance and spatial features as its input. Then we use the Hungarian algorithm on the output of the affinity model to decide whether candidate pairs between adjacent frames should be matched. The tracks formed by the matched boxes, a.k.a. tracklets, are used in the second-stage association.
Appearance feature. We use the embedding distance between the two candidates of a pair as the appearance feature (from Section 3.1). The metric reflects how visually similar the two candidates are.
Spatial feature. For low-level association, we only take candidate pairs in adjacent frames into consideration. We define a relative position distance computed from the point-size coordinates of the two bounding boxes of the candidate pair.
Feature fusion. Concatenating the appearance distance, the relative position distance, and the humanity scores of the two patches as the feature input, we train an MLP classifier to predict the affinity score between candidate pairs. A score close to 1 indicates that the paired bounding boxes belong to the same trajectory, and a score close to 0 otherwise. Given the detections in two adjacent frames, we construct a matching matrix from the affinity scores of the candidate pairs. Then we use the Hungarian algorithm to calculate the matching pairs. From a conservative perspective, we only keep matching results with very high confidence at this stage, by applying Eq. (3) of the referenced work, and leave uncertain matchings to the next association phase, where we consider more extended sequence information.
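The low-level matching step can be sketched as follows. Here `affinity_fn` stands in for the trained MLP over the fused features, and the confidence threshold is an illustrative assumption for the conservative matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_adjacent(dets_t, dets_t1, affinity_fn, min_conf=0.9):
    """Match detections between adjacent frames.
    affinity_fn maps a (detection_t, detection_t+1) pair to a score
    in [0, 1]; it stands in for the trained MLP. Only confident
    matches are kept; the rest are deferred to tracklet association."""
    n, m = len(dets_t), len(dets_t1)
    aff = np.array([[affinity_fn(dets_t[i], dets_t1[j]) for j in range(m)]
                    for i in range(n)])
    rows, cols = linear_sum_assignment(-aff)  # maximize total affinity
    return [(i, j) for i, j in zip(rows, cols) if aff[i, j] >= min_conf]
```

Matched pairs are then chained over frames to form the tracklets used in stage II.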
3.2.2 Tracklet Association
Given tracklets from Section 3.2.1, we train an association model based on an end-to-end learnable network flow formulation to associate the tracklets into trajectories.
Problem Formulation. Fig. 1 illustrates the structure of our network flow paradigm in the “End-to-end Tracklet Association” module. We use nodes to represent tracklets, and edges to represent candidate tracklet pairs that may be associated. The source and sink are two auxiliary nodes that indicate the initialization and termination of trajectories.
The bi-level optimization problem for MOT is formulated as follows; we solve for the cost parameters with:
where binary variables indicate whether the unary, pairwise, source-to-node, and node-to-sink edges are connected, and corresponding costs are attached to each edge type. The weights are hyper-parameters balancing the importance of each edge. For more details on the definitions, please refer to the work of Schulter et al. The earlier work uses a log-barrier and basis substitution to eliminate the constraints, and derives the expected partial derivative (Eq. (10) of that work). We find this solution to be unnecessary and defective, so we give our own solution, which achieves the goal without the complicated matrix multiplication, as stated later in this section.
Based on the problem formulation, we make the following three significant improvements:
Improvement 1: Using tracklet-level features. We define the tracklet set, and model the unary and pairwise costs with multi-layer perceptrons (MLPs) fit on a unary tracklet feature and a feature extracted from tracklet pairs, with separate MLP parameters for the unary and pairwise functions. For a pair of connected tracklets, we consider the frame at the tail of the first and the head of the second. The features include the humanity scores, detection scores, and the average embedding of each tracklet, as well as the area and aspect ratio of the bounding boxes. Furthermore, we use the time gap between the two connected tracklets, together with the forward and backward Kalman-filter-estimated positions of the first tracklet's tail and the second tracklet's head. We also include the absolute position distance between the two tracklets, in case the Kalman filter does not work robustly on short tracklets. We divide the position distance terms by the time gap to support association across long time periods. With these features, we construct the unary and pairwise cost functions.
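Under the description above, a pairwise feature vector might be assembled as in this sketch. The field names and exact feature list are illustrative assumptions (the full list also includes appearance, humanity, detection scores, area, and aspect ratio), but the time-gap normalization of the position terms follows the text:

```python
import numpy as np

def pairwise_feature(trk_i, trk_j):
    """Assemble a pairwise feature for a candidate tracklet link i -> j.
    Each tracklet is a dict with 'emb' (mean appearance embedding),
    tail/head frame ids and box centers, plus Kalman-predicted positions.
    Field names are hypothetical placeholders."""
    dt = trk_j["head_t"] - trk_i["tail_t"]              # time gap
    app = np.linalg.norm(trk_i["emb"] - trk_j["emb"])   # appearance distance
    # absolute and Kalman-predicted position distances, divided by the
    # time gap so long-range links are not over-penalized
    pos = np.linalg.norm(trk_i["tail_pos"] - trk_j["head_pos"]) / dt
    kf_fwd = np.linalg.norm(trk_i["kf_fwd"] - trk_j["head_pos"]) / dt
    kf_bwd = np.linalg.norm(trk_j["kf_bwd"] - trk_i["tail_pos"]) / dt
    return np.array([dt, app, pos, kf_fwd, kf_bwd])
```

The resulting vector is what the pairwise MLP would consume to produce an edge cost.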
Improvement 2: Fixing the training deficiency with an approximate gradient. To solve the bi-level optimization problem in Eqs. 3-4, we find it unnecessary to calculate the partial derivative precisely, not to mention the defect in the gradient formula provided by the earlier work. The explanation is three-fold:
1) With a large temperature, Eq. 8 results in an unstable gradient. When the solution is far from its target, the corresponding gradient is small, so the loss descends slowly at the beginning; when the solution approaches 0.5, the gradient increases sharply because of the large temperature. These extreme gradient values harm the training paradigm of deep learning, which usually uses fixed learning rates that are independent of the gradient magnitude at each iteration. Training easily gets stuck or starts oscillating. 2) Eq. 7 does not guarantee the smoothness of the movement, because the mapping is not invertible. Even if the costs temporarily move in a dedicated direction, the impact on the solution is not predictable. 3) Moreover, when the costs move to certain combinations of values, the original constrained linear program becomes hard to solve, and it takes a long time to jump out of the bad point.
Thus, we doubt the plausibility of applying the chain rule after performing the basis substitution in Eq. 4. However, the formulation is still valuable, because we can use an approximate gradient rather than the accurate one to solve the above problems. Our idea is straightforward but works unexpectedly well: intuitively, if a solved edge indicator is larger than its expected value, the edge is chosen when it should not be, so the indicator should be pushed down and the corresponding edge excluded from the final trajectory. To achieve this, the corresponding cost should be enlarged. In contrast, if the indicator is smaller than expected, we should reduce the cost so that the indicator increases. To save computation, we simply set the approximate gradient according to this rule, avoiding any matrix computation.
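The rule above can be sketched as a simple update on the cost vector; the step size is illustrative, and the exact gradient expression is not restated here, only its sign behavior:

```python
import numpy as np

def cost_update(c, x, x_gt, step=0.1):
    """Approximate-gradient update on edge costs.
    x, x_gt: solved and ground-truth edge indicators in {0, 1}.
    If an edge is chosen (x = 1) but should not be (x_gt = 0), its
    cost was too low, so raise it; the symmetric case lowers it."""
    return c + step * (x - x_gt)
```

Correctly solved edges (x equal to x_gt) leave their costs untouched, so the update only moves the costs of mistaken edges.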
Improvement 3: Bounding the range of cost parameters. We also find that Schulter et al. do not constrain the value of the cost vector. This is risky because we never know what value the cost will converge to. In our experiments, the cost sometimes diverges to infinity if the learning rate is not carefully set. Even when it converges, the final convergence point is not predictable, and solving the constrained LP problem with large cost values is expensive. Instead, we bound the cost to a fixed range by adding a squashing function to the output of the MLP, and we can arbitrarily initialize the cost to some constant. Fig. 2(b) shows that the absolute values of the unary and pairwise costs finally converge at the bound, indicating that the costs can automatically adjust to the boundary between tracklet TPs and FPs, as long as that boundary value is reachable.
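A minimal sketch of this bounding trick, assuming a tanh-style squashing function and a bound of 10 (both are assumptions; the exact function and range are not restated here):

```python
import numpy as np

def bounded_cost(mlp_output, bound=10.0):
    """Squash the raw MLP output into [-bound, bound] so the cost
    vector cannot diverge during training (function and bound are
    illustrative assumptions)."""
    return bound * np.tanh(mlp_output)
```

Whatever the MLP emits, the LP solver then only ever sees costs in a fixed, well-conditioned range.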
Fig. 2(a) shows the learning curve using the approximate gradient. In practice, we find that training with our approximation is substantially faster than with the original, without any sacrifice in performance. This is because the accurate gradient in Eq. 7 involves high-dimensional matrix multiplication and inversion, while the approximate one is far cheaper.
Other parameter choices. To deal with long video sequences, we use a sliding temporal window to segment video sequences into subgraphs. We can use a larger window and step because tracklets generate far fewer candidate pairs than detection responses. Tracklets whose head or tail falls within the temporal window are included as nodes in the subgraph. We set the step size to preserve the overlap necessary for merging subgraph associations into longer tracks. We use Hungarian matching over intersections between tracklet sets to merge these subgraphs. An example of subgraph merging is shown in Figure 1. Here, rectangular blocks are tracklets, with their widths indicating different tracklet lengths, and the subgraphs generate hypothetical long tracks. As depicted in the figure, Subgraph 1 produces three intermediate long tracks while Subgraph 2 produces four. These intermediate tracks generate a matching matrix whose elements are the numbers of overlapping blocks between tracks. The Hungarian algorithm is then used to find the matching with the largest overlap, and merge-sort is used to combine the tracklets in order.
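The merging step can be sketched as follows, representing each intermediate track as a list of tracklet ids from its window; sorting ids stands in for the merge-sort over temporal order:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def merge_tracks(tracks_a, tracks_b):
    """Merge track hypotheses from two overlapping windows: count shared
    tracklet ids between every pair of tracks, then Hungarian-match on
    the overlap counts and fuse each matched pair."""
    overlap = np.array([[len(set(a) & set(b)) for b in tracks_b]
                        for a in tracks_a])
    rows, cols = linear_sum_assignment(-overlap)  # maximize total overlap
    merged = []
    for i, j in zip(rows, cols):
        if overlap[i, j] > 0:
            # ids here are assumed to increase with time, so sorting
            # stands in for the merge-sort that orders tracklets
            merged.append(sorted(set(tracks_a[i]) | set(tracks_b[j])))
    return merged
```

Unmatched tracks (zero overlap) would be carried forward unchanged rather than fused.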
Setting the weights in Eq. 3 for tracklet fusion is more straightforward than at the detection response level. Existing work empirically sets lower weights to punish ambiguous links such as FP-FP (two false positives) and TP-TP+Far (true positives with the same identity but distant). In the case of tracklets, however, the weighting reflects directly on the final MOTA metric. For instance, an FP edge between tracklets leads to FP boxes in the final trajectory; in contrast, an FN pair loses expected boxes in the final trajectory, and the same deduction applies to unary terms. Thus, for the respective edge types, we set the weights to 1, to the length of the tracklets, and to the time gap between tracklet pairs. Our experiments in Section 4 show that this weighting strategy improves performance.
3.3 Post Processing
There are detection gaps between tracklets caused by occlusions, and we fill them in post-processing. Bilinear interpolation is a common approach, but it is prone to introducing errors. To validate the legitimacy of the interpolated boxes (a.k.a. virtual patches), we feed the corresponding virtual patches into the proposal alignment network to obtain their humanity scores and regressed boxes, and compute the appearance distance to the last box in the filled trajectory. We discard interpolated boxes with low humanity or an appearance very different from the others in the trajectory.
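The gap filling and validation can be sketched as follows; a per-coordinate linear interpolation stands in for the interpolation scheme, and the acceptance thresholds are illustrative assumptions:

```python
import numpy as np

def interpolate_gap(box_a, box_b, n_missing):
    """Linearly interpolate boxes across a gap of n_missing frames
    between the last box before the gap and the first box after it."""
    box_a = np.asarray(box_a, dtype=float)
    box_b = np.asarray(box_b, dtype=float)
    return [box_a + (box_b - box_a) * k / (n_missing + 1)
            for k in range(1, n_missing + 1)]

def keep_virtual(humanity, app_dist, h_min=0.5, d_max=0.5):
    """Keep an interpolated (virtual) patch only if the aligner still
    thinks it is a person and it looks like the rest of the trajectory.
    Thresholds are hypothetical; the text does not restate them."""
    return humanity >= h_min and app_dist <= d_max
```

Boxes rejected by `keep_virtual` would simply be dropped, leaving the occlusion gap partially unfilled rather than polluted with errors.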
4.1 Implementation Details
Datasets. We evaluate our approach on both the 2D MOT2016 and MOT2017 benchmarks. MOT2016 offers 14 video sequences (7 for training and 7 for testing), captured by static or moving cameras, indoors and outdoors. MOT2016 provides the detection responses of the DPM detector for training and testing. MOT2017 contains the same sequences but provides more precise ground-truth annotations. The organizers additionally release the detection results of Faster R-CNN and SDP, so researchers submit tracking results under all three detectors.
Platform. All of our experiments are conducted on a 1.2 GHz Intel Xeon server with 8 NVIDIA TITAN X GPUs. The deep learning framework we use is MXNet.
We use an ImageNet-pretrained ResNet-50 as the CNN backbone for both the proposal aligner and the triplet network, with ReLU activations and the ADAM optimizer; the base learning rates and the sampling hyper-parameters are set separately for the two networks.
Bi-level optimization training. We use a three-layer MLP with Leaky ReLU activation for the pairwise network. The reason for using Leaky ReLU rather than ReLU is that the regression target is near a single value: the costs are expected to converge to around the bound, so the network tends to move all samples at the same pace, and using ReLU may wipe out sample differences at the very beginning. We choose cvxpy as the convex problem solver and ADAM as the MLP optimizer, with the same hyper-parameter values in all experiments.
Metrics. For evaluation, we report the following metrics: 1) the most commonly used metric, MOTA, which combines the false positive (FP), false negative (FN), and ID switch (IDS) frequencies among all trajectories against the ground truth (GT); 2) mostly tracked (MT) and 3) mostly lost (ML), which indicate trajectory fragmentation.
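The MOTA metric described above reduces to a one-line computation over the error counts:

```python
def mota(fp, fn, ids, num_gt):
    """MOTA combines false positives, false negatives and identity
    switches, normalized by the number of ground-truth boxes; a perfect
    tracker scores 1.0, and the score can be negative."""
    return 1.0 - (fp + fn + ids) / num_gt
```

For example, 10 FPs, 20 FNs and 2 ID switches over 100 ground-truth boxes give a MOTA of 0.68.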
4.2 Comparison to the State-of-the-art
Table 1 shows our MOT2016 benchmark results in comparison with current competitive approaches. We observe that our MOTA outperforms all published results as of March 2018. We believe this high MOTA is a direct result of a low FN count, attributable to both the alignment and the association approach.
Table 2 shows our results on the MOT2017 benchmark. We have not performed any fine-tuning on this dataset relative to MOT2016, and the three detectors share the same configuration. Our result ranks 3rd with a MOTA of 51.5 as of April 9th, 2018, with the 1st and 2nd places being unpublished. Our performance with Faster R-CNN is the highest on the board. By fine-tuning the parameters, it would be possible to achieve higher performance; we choose not to do so, to show that our method applies to any detector, and that a better detector benefits the tracking result.
4.3 Ablation Study
To give a transparent demonstration of the impact of each component we have introduced, we perform comparative experiments at each stage on MOT2016. For all experiments in this section, we use the last two training sequences (MOT16-11, MOT16-13) for validation and the rest for training.
Proposal aligner & proposal selector. We evaluate the effects of the proposal aligner and proposal selector, which show a significant improvement in the final result. We denote the aligner as AL, and the non-maximum suppression and score filtering of the selector as S-NMS and S-SF. We compute the IoU against the ground truth, defining an IoU greater than 0.5 as a true positive (TP) and otherwise a false positive (FP), and count the TPs and FPs in the two validation sequences. We evaluate these elements separately, using the best association setting of TAT for the two-level association. Table 3 summarizes the results.
We notice that proposal alignment rectifies a number of FPs to TPs in the validation set. This indicates that tighter boxes not only benefit feature extraction but also explicitly enhance the recall of proposals. Besides, performing S-NMS and S-SF decreases the number of FPs considerably, at the sacrifice of only a few TPs. The MOTA after alignment and selection is 13.9% higher than that of the raw input. To clarify, in this experiment we use 0.7 as the NMS threshold, and remove boxes with humanity lower than 0.1 and detection score lower than 0. Moreover, the experiment shows that MOTA decreases slightly on MOT16-13. This is because the sequence contains many people in shadow, a scenario rare in the training set; such people are assigned low humanity, and S-SF filters them out. However, this case rarely occurs in general circumstances, and we employ all these strategies in the other experiments.
To validate whether all detectors benefit from the aligner and selector, we report statistics on the MOT2017 dataset in Table 4. On the training set, we see a remarkable FP decrease on DPM and a correspondingly high improvement in MOTA. For Faster R-CNN and SDP, which include coordinate regression modules in their pipelines, the benefit is smaller but still visible, partially because their boxes are already well aligned.
Using tracklets vs. detection responses directly. To demonstrate the improvement that tracklets bring, we train an end-to-end association model directly on features extracted from the detection responses. The feature extraction is the same as in Section 3.2.1. We set the window to 5 frames and the step size to 1 to accommodate the large number of candidate pairs and ensure overlap. The two experiments achieve the same MOTA of 35.9, while TAT runs much faster. Moreover, TAT achieves its best performance of 37.9 with a larger window, as shown in Fig. 4; with the same configuration, we cannot even obtain a model at the detection response level due to the large search space.
End-to-end Learned vs. Hand-crafted Affinity. The key improvement of our association method is to incorporate learned features and cost parameters. To evaluate this improvement, we compare the following two association methods with TAT.
[NETFLOW] As a comparative method, we use an independent affinity model together with a standalone inference method to replace the bi-level optimization association paradigm. The proposal alignment, selection, and tracklet generation are performed identically. We train a 2-class MLP classifier as the affinity model, with label 1 representing tracklets of the same target and label 0 representing tracklets of different targets. We manually design the edge cost from the MLP output, and the node cost as a product of one minus the humanity scores. We then apply Algorithm 1 of the cited work as the tracklet association approach.
[E2EP] This is a transition method between TAT and [NETFLOW]. It also uses an end-to-end model, except that the unary feature is the same as in [NETFLOW]. However, to avoid applying grid search to the edge cost, we use a linear model to learn its direction and bias, and fix the remaining constant to 0.7.
[TAT] We use learnable unary and pairwise terms for TAT, with the features defined in Section 3.2.2. We construct the unary and pairwise MLPs with [8, 4, 1] and [256, 256, 1] architectures, respectively.
For each method, we conduct experiments with window sizes ranging from 10 to 100 frames. Fig. 4 shows the results on the validation set. Our key observations are:
1) Both TAT and [E2EP] outperform [NETFLOW]. This improvement shows the effectiveness of the end-to-end training.
2) When the window size is small, TAT significantly outperforms the other two methods, thanks to the automatically tuned cost parameters.
3) When the window size exceeds 30 frames, both TAT and [NETFLOW] show a significant drop in performance, while [E2EP] does not. The drop occurs because long-term association has little benefit when 95% of the expected links span fewer than 30 frames; instead, it risks obscuring the pairwise features, so false connections result in more interpolated false positives. [E2EP], however, is robust to large window sizes, because its hand-crafted feature has a clear intrinsic relation to affinity, which keeps the association results stable when the learned features become inferior.
Weighting. We claim that the tracklet weights in Eq. 3 should be related to the tracklet length (TL) and the time gap of connections (TG), whose errors reflect directly on MOTA. From Table 5 we notice that the major improvement comes from TL weighting, matching our expectation that mistakes on longer tracklets cost more.
5 Conclusion and Future Work
Unlike many other computer vision tasks that adopt end-to-end training, MOT still requires much hand-tuning and optimization at various stages. TAT brings MOT an extra step closer to fully end-to-end training. With TAT, we combine classic tracklet-based association with the new bi-level optimization framework. It is easy to integrate additional features, since they can be jointly considered in the end-to-end learning framework, and the training process converges much more stably and quickly using the approximated gradient.
Combining tracklets and end-to-end training opens many opportunities for future improvements. To begin with, we can encode human interactions in the tracklet feature. Furthermore, we can adopt LSTM-based tracklet features, so that the tracker can perform end-to-end learning from raw data. Lastly, we can investigate how to improve linking consistency around short tracklets in a learning-based association framework.
-  S. S. Blackman. Multiple hypothesis tracking for multiple target tracking. Aerospace and Electronic Systems Magazine (AESM), 19(1):5–18, 2004.
-  M. D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and L. Van Gool. Robust tracking-by-detection using a detector confidence particle filter. In International Conference on Computer Vision (ICCV), pages 1515–1522. IEEE, 2009.
-  W. Brendel, M. Amer, and S. Todorovic. Multiobject tracking as maximum weight independent set. In Computer Vision and Pattern Recognition (CVPR), pages 4091–4099. IEEE, 2011.
-  A. A. Butt and R. T. Collins. Multi-target tracking by lagrangian relaxation to min-cost network flow. In Computer Vision and Pattern Recognition (CVPR), pages 1846–1853. IEEE, 2013.
-  T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.
-  W. Chen, X. Chen, J. Zhang, and K. Huang. Beyond triplet loss: a deep quadruplet network for person re-identification. In Computer Vision and Pattern Recognition (CVPR), volume 2. IEEE, 2017.
-  W. Choi. Near-online multi-target tracking with aggregated local flow descriptor. In International Conference on Computer Vision (ICCV), pages 3029–3037. IEEE, 2015.
-  R. T. Collins. Multitarget data association with higher-order motion models. In Computer Vision and Pattern Recognition (CVPR), pages 1744–1751. IEEE, 2012.
-  A. Dehghan, S. Modiri Assari, and M. Shah. GMMCP tracker: Globally optimal generalized maximum multi clique problem for multiple object tracking. In Computer Vision and Pattern Recognition (CVPR), pages 4091–4099. IEEE, 2015.
-  A. Dehghan, Y. Tian, P. H. Torr, and M. Shah. Target identity-aware network flow for online multiple target tracking. In Computer Vision and Pattern Recognition (CVPR), pages 1146–1154. IEEE, 2015.
-  S. Diamond and S. Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research (JMLR), 17(83):1–5, 2016.
-  P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 32(9):1627–1645, 2010.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), pages 770–778. IEEE, 2016.
-  R. Henschel, L. Leal-Taixé, D. Cremers, and B. Rosenhahn. Improvements to Frank-Wolfe optimization for multi-detector multi-object tracking. arXiv preprint arXiv:1705.08314, 2017.
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
-  C. Huang, B. Wu, and R. Nevatia. Robust object tracking by hierarchical association of detection responses. In European Conference on Computer Vision (ECCV), pages 788–801. Springer, 2008.
-  C. Kim, F. Li, A. Ciptadi, and J. M. Rehg. Multiple hypothesis tracking revisited. In International Conference on Computer Vision (ICCV), pages 4696–4704. IEEE, 2015.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics (NRL), 2(1-2):83–97, 1955.
-  L. Leal-Taixé, C. Canton-Ferrer, and K. Schindler. Learning by tracking: Siamese CNN for robust target association. In Computer Vision and Pattern Recognition Workshops, pages 33–40. IEEE, 2016.
-  W. Luo, J. Xing, X. Zhang, X. Zhao, and T.-K. Kim. Multiple object tracking: A literature review. arXiv preprint arXiv:1409.7618, 2014.
-  A. Milan, L. Leal-Taixé, I. Reid, S. Roth, and K. Schindler. MOT16: A benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831, 2016.
-  D. Mitzel and B. Leibe. Real-time multi-person tracking with detector assisted structure propagation. In International Conference on Computer Vision (ICCV) Workshop, pages 974–981. IEEE, 2011.
-  H. Pirsiavash, D. Ramanan, and C. C. Fowlkes. Globally-optimal greedy algorithms for tracking a variable number of objects. In Computer Vision and Pattern Recognition (CVPR), pages 1201–1208. IEEE, 2011.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), pages 91–99. NIPS Foundation, 2015.
-  M. Rodriguez, J. Sivic, I. Laptev, and J.-Y. Audibert. Data-driven crowd analysis in videos. In International Conference on Computer Vision (ICCV), pages 1235–1242. IEEE, 2011.
-  A. Sadeghian, A. Alahi, and S. Savarese. Tracking the untrackable: Learning to track multiple cues with long-term dependencies. arXiv preprint arXiv:1701.01909, 2017.
-  F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In Computer Vision and Pattern Recognition (CVPR), pages 815–823. IEEE, 2015.
-  S. Schulter, P. Vernaza, W. Choi, and M. Chandraker. Deep network flow for multi-object tracking. arXiv preprint arXiv:1706.08482, 2017.
-  J. Son, M. Baek, M. Cho, and B. Han. Multi-object tracking with quadruplet convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), pages 5620–5629. IEEE, 2017.
-  S. Tang, B. Andres, M. Andriluka, and B. Schiele. Subgraph decomposition for multi-target tracking. In Computer Vision and Pattern Recognition (CVPR), pages 5033–5041. IEEE, 2015.
-  S. Tang, B. Andres, M. Andriluka, and B. Schiele. Multi-person tracking by multicut and deep matching. In European Conference on Computer Vision (ECCV), pages 100–111. Springer, 2016.
-  S. Tang, M. Andriluka, B. Andres, and B. Schiele. Multiple people tracking by lifted multicut and person re-identification. In Computer Vision and Pattern Recognition (CVPR), pages 3539–3548. IEEE, 2017.
-  B. Wang, G. Wang, K. L. Chan, and L. Wang. Tracklet association by online target-specific metric learning and coherent dynamics estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 39(3):589–602, 2017.
-  J. Xing, H. Ai, and S. Lao. Multi-object tracking through occlusions by local tracklets filtering and global tracklets association with detection responses. In Computer Vision and Pattern Recognition (CVPR), pages 1200–1207. IEEE, 2009.
-  A. R. Zamir, A. Dehghan, and M. Shah. GMCP-Tracker: Global multi-object tracking using generalized minimum clique graphs. In European Conference on Computer Vision (ECCV), pages 343–356. Springer, 2012.
-  L. Zhang, Y. Li, and R. Nevatia. Global data association for multi-object tracking using network flows. In Computer Vision and Pattern Recognition (CVPR), pages 1–8. IEEE, 2008.