The ubiquity of large-scale camera networks has coincided with the emergence of real-time video management platforms such as Chameleon, Kestrel, VideoStorm, and BriefCam. These tools allow human users to manage networks of thousands to hundreds of thousands of cameras and to query them manually for live and archived video summarization, mainly for forensics applications.
From the big data and machine learning (ML) research point of view, a major research challenge is the automation of video analytics to detect and track interesting objects and events. Metadata extraction and object tracking must be robust to streams with varying resolution and scale, along with camera artifacts such as different levels of blur and orientations. For practical applications, ML models also need to be extensible, so more classes of objects can be added as they become interesting. Finally, for real-time analytics, such tracking requires fast models due to the amount of data to be analyzed within a limited time (milliseconds per frame).
A typical vehicle tracking system consists of: (i) vehicle detection from frames, (ii) collection and integration of detections and metadata on the identified vehicle, and (iii) vehicle re-identification across different frames and cameras. Various approaches have been proposed (we cover Related Work in Section II), but they have performance issues due to significant computation requirements and extensibility issues due to assumptions on the vehicles, cameras, or other system components. To automate the process, effective knowledge acquisition models are necessary to detect and track relevant objects, infer missing metadata, and enable automated event detection.
In this paper, we propose reframing the typical large-scale video analytics pipeline from the common single-model approach to a novel teamed-classifier approach to deal with real-world video datasets with dynamic distributions. In a teamed-classifier approach, we build teams of models where each model, or a subset of models, is assigned to a subspace in the data distribution. This contrasts with ensembles: in a traditional bagging or stacked ensemble, each model in the ensemble is applied to the entire data space and weighted on either training performance or live drift-detected performance (dynamic weighted ensembles for drifting data are presented in [5, 6]). The prediction for an input x is then f(x) = Σ_i w_i · f_i(x), where w_i is a static weight for model f_i, usually assigned empirically. In contrast, our teamed-classifier approach assigns an expert, or a family of expert classifiers, to a region of the input data space. We use a gating function g to dynamically construct an ensemble during inference: f(x) = Σ_i g_i(x) · f_i(x), (1) where the weight vector g(x) is sparse, so only the experts assigned to x's subspace contribute.
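To make the distinction concrete, the two prediction rules above can be sketched as follows (a minimal illustration with toy stand-in models and a hand-written gate; the names and shapes are ours, not a released implementation):

```python
import numpy as np

def weighted_ensemble(models, weights, x):
    """Classic ensemble: every model scores every input; weights are static."""
    return sum(w * m(x) for m, w in zip(models, weights))

def teamed_prediction(models, gate, x):
    """Teamed classifier: the gating function assigns sparse, per-input
    weights, so only the experts responsible for x's subspace are evaluated."""
    g = gate(x)  # sparse weight vector, mostly zeros
    return sum(g[i] * m(x) for i, m in enumerate(models) if g[i] != 0.0)

# Toy experts over a 1-D input space split at 0.
experts = [lambda x: x + 1.0, lambda x: x - 1.0]
gate = lambda x: np.array([1.0, 0.0]) if x < 0 else np.array([0.0, 1.0])
```

With this gate, each input reaches exactly one expert, whereas the static ensemble always evaluates both models.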
Our contributions can be summarized as follows:
A teamed classifier approach for video analytics, with focus on the vehicle re-identification task.
A general framework for re-identification that uses the naturally induced sparsity of the re-id task to build a sparse gating model with supervision. We evaluate our gating model on the Cars196 zero-shot learning task, where the goal is to cluster vehicle brands and models. We achieve state-of-the-art performance with normalized mutual information (NMI, used to evaluate clustering quality) of 66.03, compared to previous state-of-the-art scores of NMI 64.4 and NMI 64.90.
A simple and strong baseline algorithm for the re-id task that can operate on the subspaces identified by the gating function. Our base model is competitive with the current state of the art with an order of magnitude fewer parameters: we use approximately 12M parameters to achieve 64.4 mAP, compared with more than 100M parameters for MTML-OSG with 62.6 mAP (mAP, or mean average precision, is a metric for evaluating ranking and retrieval).
The teamed classifier approach addresses two intertwined technical challenges of high inter-class similarity and high intra-class variability:
High inter-class similarity: Two different vehicles of the same model/year and color are visually near-identical by the manufacturing process. Therefore, identifying the vehicle model/year and color (the best a pixel-based image analysis algorithm can do) can still be insufficient when two such vehicles appear in the same frame.
High intra-class variability: Images of the same vehicle may look very different due to different orientations or environmental occlusion. For example, a front-view image of a sedan looks quite different from a rear-view image of the same sedan. Pure pixel-based image analysis may have difficulty with such differences.
II Related Work
II-A Classic Vehicle Tracking Approaches and Research Issues
A typical vehicle tracking system consists of several stages:
Object detection: Dense object detection in video streams has long been an integral part of video analytics. Various approaches have been devised for real-time detection of a wide variety of object types (vehicles, people, animals, and traffic signs), including Mask-RCNN, YOLO, and SSD-on-MobileNets.
Vehicle metadata: An important part of traffic management is tracking vehicle speed to ensure traffic safety and to detect speed limit infractions. While some specialized cameras in a surveillance network may be equipped with speed radar, such functionality is usually not found in common surveillance camera models. Recently, some proposed approaches perform speed detection with common cameras by tracking vehicles in 3D space. Other metadata include vehicle type, brand, and color.
Vehicle re-identification: The re-identification task requires tracking vehicles across cameras and assigning them to correct identities. Challenges lie in re-id under adversarial conditions where vehicles need to be tracked with multi-orientation, multi-scale, multi-resolution values alongside possible occlusion and blur. The vehicle re-identification problem has seen significant work in the past few years due to advances in the general one-shot learning problem [10, 15, 16, 17, 18, 19, 20].
Event detection: Automated event detection remains a difficult challenge due to the lack of labeled real-world or synthetic data and the absence of frameworks for video-based anomaly detection. A few approaches have been tested on simpler, small-scale data, such as LSTM-based or predictive-coding approaches.
There have been several advancements toward some of these tasks. The video-management platforms mentioned above also perform object detection using off-the-shelf models: some use pretrained detection models with time-stamp-based summarization, similar to the use of pretrained YOLO for vehicle detection; others use both YOLO and Mask-RCNN for object detection.
Vehicle re-identification. More recently, there have been approaches for end-to-end vehicle metadata extraction and re-identification [10, 16, 17, 23, 19, 20]. OIFE proposed stacked convolutional networks (SCNs) to extract fine-grained features in conjunction with global features: 20 vehicle keypoints, such as headlights, mirrors, and emblems, were labeled and extracted by SCNs to build feature masks, and global and masked features are combined to create orientation-invariant features for re-id. RAM approaches fine-grained feature extraction by splitting vehicle images into three regions and extracting features from each region separately; features are combined with a fully connected network for re-id. VAMI adds additional supervision to fine-grained feature extraction by using the viewpoint information of vehicles: subnetworks are built for each vehicle viewpoint, and features from viewpoint subnetworks are combined for re-id. EALN proposes addressing inter-class similarity and intra-class variability with generated negatives: by using GANs to reconstruct images of existing vehicles, EALN can create potentially infinite training samples from a small dataset to improve inter-class similarity discrimination. MTML combines ideas from RAM and VAMI by creating subnetworks for different orientations, scales, and color corrections, whose features are combined for re-id. Finally, QD-DLF proposes retaining spatial information in features by extracting diagonal and anti-diagonal features: instead of only flattening convolutional features, diagonal feature values are also used to improve re-id.
Single Models in Video Analytics. A common theme in the typical methods is the use of a single network for each task (Figure 1). Kestrel uses the same model for all vehicle tracking. While Chameleon uses differently sized models for low-fps, medium-fps, and high-fps streams, there is little variability beyond this: a single Mask-RCNN model is used for all high-fps videos, for example. Finally, most re-identification models propose a single network for all types of vehicles to perform simultaneous vehicle attribute extraction and identification. Each of the re-id models discussed uses end-to-end training; while a model may have subnetworks, they are used as a single model for each sample. Such approaches are effective on small-scale datasets without much variability. However, large-scale, real-world video networks exhibit a variety of adversarial conditions that can limit single-model effectiveness. A demonstration of this degradation is given by work that examines state-of-the-art re-identification models on adversarial re-id datasets with multi-scale, multi-resolution images along with occlusion and motion blur, finding performance deterioration due to high dataset variability. Similarly, recent research in domain adaptation [25, 26] shows that datasets that are visually similar to humans can encode artifacts that cause significant model deterioration. The authors of BlazeIt, a video querying framework, make a similar observation: video drift due to changes in the data distribution can lead to model performance degradation unless new models are added.
Open and Closed Datasets. One of the important issues in automated video analytics is the distinction between closed datasets, which have finite underlying features under non-adversarial conditions, and open datasets, which continuously evolve with potentially infinite underlying features. Most datasets used to train vehicle re-id or object detection models are closed datasets: their class distributions are fixed, and they encode a static set of features. This can lead to the development of models that do not generalize to real-world data. Generalizability studies show that model iteration on closed datasets leads to architectures and model weights that perform well on their respective test sets without generalizing to real-world data in the same domain (CIFAR-10 and ImageNet, respectively). This supports findings that models trained on one person re-id dataset significantly underperform on another, visually indistinguishable person re-id dataset. It is evident, then, that real-world analytics must take into account the open nature of real-world data, where the underlying feature distribution is continuously evolving and dataset drift is commonplace [30, 27, 31].
II-B Research Issues in Teamed Classifiers
Team Sparsity. An important consideration in teamed classifiers is sparsity of the weight assignments from the gating function in Equation 1, to ensure any single sample x uses only a subset of models. Differently from the recently proposed sparse mixture-of-experts model, subspace model assignments in teamed classifiers are supervised. In the mixture-of-experts approach, the submodels and the gating function are trained together, and sparsity is enforced with a penalty term in the loss function. In our teamed classifier approach, we enforce sparsity by exploiting naturally induced sparsity in our input space; for example, vehicle re-identification has naturally induced sparsity from the manufacturing process: a vehicle must be of a single type (sedan or SUV) and of a single brand (Toyota, Mazda). So, we construct a supervised gating function that ensures sparsity using this natural sparsity in the input space by detecting vehicle brand, then using a brand-specific expert. We again contrast with the sparse mixture-of-experts model, where gating functions and experts are trained together to let the gating function learn the subspace assignments without supervision [32, 33, 34]. Adding new subspaces or changing existing subspaces, as is the case with real-world drift [7, 31], requires retraining the entire mixture-of-experts. In the teamed classifier approach, we can train the gating function and experts independently, allowing us to more easily extend to new subspaces by creating new experts as and when required and training them independently of existing experts. Changes to an existing subspace require only updating that subspace's assigned models.
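The supervised routing described above can be sketched as follows (toy stand-ins for the brand classifier and experts; all names are hypothetical, not our actual model interfaces):

```python
def route(brand_classifier, experts_by_brand, image):
    """Supervised gating sketch: a coarse brand classifier selects which
    expert handles the image, so exactly one expert runs per input."""
    brand = brand_classifier(image)   # supervised coarse label
    expert = experts_by_brand[brand]  # one expert per subspace
    return expert(image)

# Toy stand-ins; images are dicts with an "emblem" and a "plate" field.
brand_classifier = lambda img: "toyota" if img["emblem"] == "T" else "mazda"
experts_by_brand = {
    "toyota": lambda img: ("toyota", img["plate"]),
    "mazda":  lambda img: ("mazda", img["plate"]),
}
# Extending to a new subspace only requires adding a new expert,
# without retraining the gate or the existing experts:
experts_by_brand["honda"] = lambda img: ("honda", img["plate"])
```

The dictionary update in the last line illustrates the extensibility argument: experts are trained and added independently of one another.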
Naturally Induced Sparsity. We consider the naturally induced sparsity in vehicle tracking. The vehicle tracking task requires clustering vehicle identities into disjoint groups such that all images of a single identity are identified as such. This research task involves two technical challenges: high inter-class similarity (two vehicles of the same model/year and color are visually the same by manufacturing process), and high intra-class variability (images of the same vehicle from different perspectives can look very different).
The naturally induced sparsity of the re-id task lies in high inter-class similarity, since we observe that the inter-class similarity problem is precisely due to the underlying manufacturing process; some examples of inter-class similarity clusters include groups of Toyota Corollas, black SUVs, or red vehicles. Conversely, existing vehicle re-id datasets such as VeRi-776 and VeRi-Wild primarily focus on intra-class variability. Current approaches in vehicle re-id attempt to address inter-class similarity and intra-class variability in the same end-to-end model [16, 17, 18, 20]. This creates models that sacrifice performance on edge cases in intra-class variability to increase discriminative ability for inter-class similarity across the entire data space.
III The Teamed Classifier Approach
We introduce our teamed classifier approach for video analytics, specifically for vehicle re-identification. We will first describe a typical video analytics pipeline for re-identification. Then we describe our teamed classifier approach for vehicle re-identification and the advantages it brings over traditional single-model pipelines.
III-A Typical Pipeline for Vehicle Re-ID
A typical video surveillance framework for re-id comprises a pipeline of increasingly fine-grained feature extractors. We show in Figure 1 a standard vehicle surveillance pipeline. Data enters the pipeline through a deployed camera network in the form of video streams, and each layer performs progressively finer-grained feature extraction for knowledge acquisition.
In the Object Detector layer, pretrained object detectors such as YOLO or Mask-RCNN are commonly used for vehicle, person, and sign detection. As we have discussed, the usual approach in current systems such as Kestrel and VideoStorm is to use a single model type for the entire data space. A notable exception is Chameleon, which uses a small team of detectors for changing detection-quality requirements: if high-quality detections are requested, a pretrained YOLO detector is used; if low-quality detections are requested, simpler detectors like SIFT are used. Object detectors extract very coarse features, namely labels.
The Re-ID Model layer is the focus of typical re-id approaches, where a single end-to-end model is developed for fine-grained vehicle identity clustering. Details about these end-to-end models are provided in Related Work. Here we observe that some approaches do use submodels, such as OIFE; however, these submodels are trained together and are each designed for the entire input space. Each submodel's features are subsequently combined with an additional dense neural network to obtain final re-identification features. Re-id features are clustered to identify unique vehicle identities.
III-B Teamed Classifiers for Vehicle Re-ID
We now present our teamed classifier-based pipeline for a vehicle identification system in Figure 2. Our approach differs from the current methods described in Figure 1 by employing classifier teams as feature extractors, where each member of the team is assigned to a different region of the input space. We exploit the naturally induced sparsity of the input space to create disjoint teams with supervision.
Detector Team. We employ detector teams in lieu of cross-domain adaptive object detector models. A challenge in real-world object detection on multi-stream video networks is the sharp difference between frame artifacts generated by each camera or set of cameras. As analyses of cross-domain performance show, even visually similar images are difficult for feature extractors if they are captured in different environments. While there have been some studies on developing domain-adaptive techniques or more generalizable universal detector models, we employ the student-teacher model for object detection: we use a pretrained full YOLOv3 model as the teacher and train smaller, specialized detectors for each camera. The specialized models are built on SSD-MobileNets and can be deployed on embedded devices [40, 13]. Specialized detectors are covered in recent approaches; we focus on the identification layers.
Inter-Class Similarity Team. For vehicle re-id, significant interest has been given to developing models that can handle both inter-class similarity and intra-class variability, shown in Figure 3. In the former, vehicles with different identities (i.e., license plates) look very similar because they may be of the same brand, the same vehicle type, or the same color. A re-id model must therefore distinguish visually similar vehicles in the same camera using camera-specific artifacts such as spatio-temporal constraints or background information, while also capturing cross-camera features. In terms of implementation, a re-id model must generate a set of features for vehicles such that images of the same vehicle across multiple orientations, resolutions, and scales are projected to the same cluster, while ensuring that features of different vehicles with high inter-class similarity are projected to different clusters.
This naturally imposes orthogonal constraints on a re-id model, and fine-grained feature extraction is necessary to ensure a model can address both. Thus, a model must be able to capture the full range of feature combinations in vehicles across multiple brands, orientations, colors, resolutions, and scales, as existing approaches have proposed. Consequently, existing approaches build complex networks that perform inter-class similarity discrimination and intra-class variability minimization in the same model: OIFE uses 20 stacked convolutional networks to extract human-labeled keypoints; RAM builds three sub-networks to evaluate each section of a vehicle (roof, body, chassis); VAMI creates multiple sub-models for each orientation; and QD-DLF builds four networks to extract diagonal features.
While such approaches partially address the inter-class similarity and intra-class variability constraints, they make simple mistakes: we show in Figure 4 some mistakes in vehicle re-id produced by the Group Sensitive Triplet Embedding approach. Similar examples are provided in other papers. We observe that forcing a re-id model to learn both inter-class similarity discrimination and intra-class variability minimization imposes a learning burden that reduces overall performance.
We again propose exploiting the naturally induced sparsity of the input space to reduce the burden of learning orthogonal feature extraction for the re-id model. Concretely, our teamed-classifier approach uses two layers of feature extractors: an inter-class similarity team that performs coarse clustering of vehicle images using natural feature descriptions, followed by an intra-class variability team that assigns one re-id model to each cluster from the inter-class similarity team. This allows the re-id models in the intra-class variability team to focus on a subset of the input space of vehicles without enforcing a generalization constraint to address inter-class similarity.
Naturally Induced Sparsity. We use our observations from Figure 4 to build the inter-class similarity team; we select three key coarse features for enforcing the intra-class variability team's sparsity: vehicle color, vehicle type, and vehicle model. We focus on vehicle model discrimination, since vehicle color and type are coarser, finite features addressed with simpler image classifiers, as in the BoxCars116K models. For vehicle model discrimination, we consider the related zero-shot learning task, which requires learning feature extractors that discriminate between classes seen during training and generalize to unseen classes. We specifically focus on the Cars196 dataset, since it requires identifying unseen vehicle models using feature extraction learned on seen vehicle models. This is useful for re-id since new vehicle brands and updated vehicle models are continuously introduced, adding dataset drift to the input space.
We develop a zero-shot learning model that achieves state-of-the-art performance on the Cars196 dataset and use it for model discrimination. Our model implicitly learns relevant features for the unsupervised clustering of vehicle models. We describe our vehicle brand discriminator in Section IV-B.
Intra-Class Variability Team. With an inter-class similarity team to perform coarse-grained clustering, we can build our re-id models to focus on minimizing intra-class variability only. This provides two advantages:
Since our models only need to address intra-class variability on a limited subset of the true input space, we achieve higher performance in mAP and rank-1 retrieval compared to recent approaches.
We can build smaller models compared to recent approaches. Each member of the intra-class variability team uses a single ResNet-18 backbone and can operate in near real-time, compared to the 20 stacked convolutional networks, 5 ResNet backbones, or 4 ResNet50 backbones used in prior work.
We describe our intra-class variability team’s base model in Section V-B.
IV Inter-Class Similarity Team
We develop an end-to-end model to deploy as a submodel in the inter-class similarity team using a single backbone network. Our approach implicitly learns relevant local and global features for unsupervised clustering without relying on data and feature augmentation or synthetic data. We first describe the Cars196 dataset we use for evaluating our inter-class similarity team's brand discrimination models.
IV-A Dataset and Evaluation
The Cars196 dataset contains 196 classes of vehicles. It is challenging due to the small number of images per class (on average, 82 images per class). Furthermore, vehicles exhibit a high degree of inter-class similarity, as described in Section III-B, since most vehicles fit into a few form factors. We evaluate our models with two metrics: normalized mutual information and the top-1 retrieval rate (we also show results for top-5 retrieval).
Normalized Mutual Information (NMI).
NMI measures the correlation between a predicted cluster set and the ground truth cluster set: it is the mutual information of the two clusterings normalized by the mean of their entropies. Given a set of predicted clusters Ω = {ω_1, …, ω_K} under a K-means clustering, each ω_i contains the instances determined to be of the same class. With ground truth clusters C = {c_1, …, c_K}, we calculate NMI as:

NMI(Ω, C) = 2 · I(Ω; C) / (H(Ω) + H(C))

where H(·) is the entropy and I(Ω; C) is the mutual information between Ω and C. Since NMI is invariant to label index, no alignment is necessary.
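As a sanity check, NMI can be computed directly from two label assignments; this minimal NumPy sketch follows the formula above (variable names are ours):

```python
import numpy as np

def nmi(pred_labels, true_labels):
    """NMI = 2 * I(pred; true) / (H(pred) + H(true)), from two label arrays."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    n = len(pred)

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / n
        return -np.sum(p * np.log(p))

    # Mutual information from the joint contingency of the two labelings.
    mi = 0.0
    for a in np.unique(pred):
        for b in np.unique(true):
            joint = np.mean((pred == a) & (true == b))
            if joint > 0:
                mi += joint * np.log(joint / (np.mean(pred == a) * np.mean(true == b)))
    h_p, h_t = entropy(pred), entropy(true)
    return 2.0 * mi / (h_p + h_t) if (h_p + h_t) > 0 else 1.0
```

Because the measure is invariant to label index, relabeling the predicted clusters (e.g., swapping 0 and 1) leaves the score unchanged.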
Top-k Retrieval. We use the standard top-k ranking retrieval accuracy, calculated as the percentage of queries whose correct class is retrieved at the first rank and, of those missed, the percentage retrieved correctly at the second rank, and so on.
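A minimal sketch of this retrieval metric over embedding vectors (Euclidean distance; the data and names here are illustrative, not our evaluation harness):

```python
import numpy as np

def top_k_accuracy(query_feats, gallery_feats, query_ids, gallery_ids, k=1):
    """Fraction of queries whose k nearest gallery neighbors (by Euclidean
    distance in embedding space) include at least one same-identity image."""
    hits = 0
    for qf, qid in zip(query_feats, query_ids):
        d = np.linalg.norm(gallery_feats - qf, axis=1)  # distance to gallery
        ranked = np.argsort(d)[:k]                      # k nearest indices
        hits += any(gallery_ids[i] == qid for i in ranked)
    return hits / len(query_ids)

# Toy gallery: identity "b" is nearest to the query, "a" is second.
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
gallery_ids = ["a", "b", "c"]
queries = np.array([[0.9, 0.9]])
```

Here the query of identity "a" misses at rank 1 (its nearest neighbor is "b") but hits at rank 2.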
IV-B Model for Brand Discrimination
We make the following observations in creating our inter-class similarity team's submodel for brand discrimination:
It is well known that the earlier kernels in a convolutional network learn abstract, simple features such as colors. Some kernels also learn basic geometric shapes corresponding to image features in the low frequency range of images.
Later kernels learn more class-specific details and extract detailed features corresponding to higher-frequency features. While these are useful for traditional image classification, they may overfit on the zero-shot learning (ZSL) task.
Convolutional layers are used for feature extraction, and subsequent dense (fully connected) layers are used for feature interpretation. This forces the dense layers to learn image feature discrimination instead of relying on convolutional filters. Since convolutional filters focus on nearby pixels under a spatial constraint, we believe relying on convolutional filters for feature interpretation is more effective for tracking image-invariant features.
Since the earlier layers learn coarse features, we propose augmenting the early kernels with attention modules to improve feature extraction. Since the later layers in a convolutional network already learn fine-grained features, they do not require augmentation; otherwise, they would begin to overfit on the training data and fail to generalize to unseen brands. Thus, we use convolutional attention on the early layers only, as opposed to attention throughout. We also address the loss of spatial image features in dense layers by removing them entirely, using only convolutional layers for both feature extraction and interpretation: given a query and a target, we evaluate their similarity on convolutional features alone. In contrast to current approaches that use dense layers after the convolutional backbone to perform feature interpretation, we force the convolutional network to learn feature interpretation simultaneously with feature extraction.
Backbone. We use ResNet-18 as the backbone for the inter-class similarity model. Each ResNet-18 backbone consists of several "basic blocks" chained together. We apply targeted attention to these basic blocks, as opposed to each convolutional layer.
Convolutional Attention. We use the CBAM attention module to add discriminative ability to the backbone. Since the earlier filters learn coarser features and the later filters learn fine-grained features, adding convolutional attention to all layers improves classification accuracy in general. However, the brand discrimination task must generalize to unseen brands, so CBAM on later layers causes networks to overfit on fine-grained features of the training set, reducing overall performance (we examine this performance drop in Section VI-A). We add attention only to the first basic block to learn discriminative coarse-grained filters for better unseen-class separation and metric learning, improving performance on Cars196 and brand discrimination in general.
The CBAM block alone is not sufficient to improve generalization, due to skewed feature maps in early convolutional layers. The first convolutional layer is crucial to feature extraction since it occurs at the beginning of the network. We find that many feature maps at the first layer do not track any useful features; instead, they either output random noise or focus on irrelevant features such as shadow. Therefore, we develop the Global Attention (GA) module, shown in Figure 5, to perform feature map regularization. The GA module reduces feature map skew by re-weighting feature weights. Whereas CBAM separates channel and spatial attention, GA combines them to ensure spatial features are learned together. GA uses two conv layers with Leaky ReLU activation to retain negative weights from the first conv layer in the backbone. The output is passed through a sigmoid activation and element-wise multiplied with the original feature maps (see GA inset in Figure 5).
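A rough sketch of the GA re-weighting described above, using 1x1 convolutions for brevity (the exact kernel sizes and channel widths of our module are not reproduced here; shapes and names are assumptions):

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_attention(feat, w1, w2):
    """Sketch of GA feature-map re-weighting.
    feat: (C, H, W) maps from the first conv layer; w1, w2: (C, C) weights of
    two 1x1 conv layers. Channel and spatial information pass through the two
    convs jointly; the sigmoid mask then re-weights the maps element-wise."""
    c, h, w = feat.shape
    x = feat.reshape(c, -1)                    # (C, H*W)
    x = leaky_relu(w1 @ x)                     # first conv + LeakyReLU
    mask = sigmoid((w2 @ x).reshape(c, h, w))  # second conv + sigmoid
    return feat * mask                         # element-wise re-weighting
```

Because the mask lies in (0, 1), GA can only attenuate skewed activations, never amplify them, which is the regularizing effect described above.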
We avoid max and average pooling since they cause loss of information, and we want to preserve discriminative features for the basic blocks in the architecture core. We show an example of feature map correction in Figure 6: the top row shows original feature maps, which are skewed toward the shadow under the vehicle; the bottom row shows the corrected feature maps without skew after applying GA.
ProxyNCA Loss. Our brand discrimination model's target is to learn a distance metric on vehicle brands such that features for each vehicle brand are clustered in the output feature space. We train with the ProxyNCA loss for distance metric learning. With ProxyNCA, a model learns a set of proxies that map to training classes. Features from training-set images are mapped to the proxy domain, pulled toward their own class proxy, and pushed away from the remaining proxies. During inference, the proxies are ignored, and the true features are used to cluster vehicle brands.
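A minimal sketch of the per-sample ProxyNCA computation, following the published formulation with L2-normalized embeddings and proxies (variable names are ours):

```python
import numpy as np

def proxy_nca_loss(embedding, proxies, label):
    """ProxyNCA for one sample: attract the embedding to its class proxy,
    repel it from all other proxies.
    embedding: (D,); proxies: (num_classes, D); label: int class index."""
    e = embedding / np.linalg.norm(embedding)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    d = np.sum((e - p) ** 2, axis=1)  # squared distance to each proxy
    neg = np.delete(d, label)         # distances to the non-matching proxies
    # -log( exp(-d_pos) / sum_{z != pos} exp(-d_z) )
    return d[label] + np.log(np.sum(np.exp(-neg)))
```

An embedding near its own proxy yields a lower loss than the same embedding assigned to a wrong class, which is the gradient signal that clusters brands.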
V Intra-Class Variability Team
We now describe our base re-id model for the intra-class variability team. First, we describe the VeRi-776 dataset we use for evaluating our model; then we cover our intra-class variability results.
V-A VeRi-776 Dataset
The VeRi-776 dataset for vehicle re-id was introduced to promote research in the field. It contains 776 unique vehicle identities, with 576 used for training and the remaining 200 used for testing. During testing, the target is to retrieve the unseen identities given a query, with performance evaluated on the ranking. The dataset contains mostly intra-class variability, with each identity having several images from multiple cameras under different environmental conditions.
V-B Base Model for Re-ID
We now describe our base re-id model for the intra-class variability team. Since we offload inter-class similarity discrimination to the inter-class similarity team, our re-id models are simpler and smaller than typical re-id models, with better performance. The intra-class variability base model is robust, extensible, and fast, as we will show. We call it REF-GLAMOR, for reference-GLAMOR, where GLAMOR stands for Global Attention Module for Re-ID.
Base Model Construction. We follow the same design principles for the re-id models that we used in the inter-class similarity brand discrimination model. Specifically, we use the ResNet-18 backbone with GA. Differently from the inter-class similarity models:
We do not use CBAM on the re-id models. Since CBAM has already been applied to the first basic block in inter-class similarity models, CBAM on the first basic block in a re-id model would perform redundant feature extraction. CBAM on the last basic block would lead to re-id model overfit on the training data.
We use a warmup learning rate schedule to gradually increase the learning rate during training and improve feature extraction on pretrained backbones.
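A linear warmup schedule of this kind can be sketched as follows (the constants are illustrative assumptions, not the exact values used in our training):

```python
def warmup_lr(epoch, base_lr=3.5e-4, warmup_epochs=10):
    """Linear warmup sketch: ramp the learning rate from base_lr/warmup_epochs
    up to base_lr over the first warmup_epochs epochs, then hold it.
    A full schedule would add decay steps after the warmup phase."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    return base_lr
```

Starting from a small learning rate avoids distorting the pretrained backbone weights before the new layers have settled.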
Triplet Loss. We use the standard triplet loss for distance metric learning on the features:

L = max(0, ||f(a) − f(p)||² − ||f(a) − f(n)||² + α)

where a, p, and n are the anchor, positive, and negative of a triplet, f is the embedding network, and α is the margin constraint enforcing the minimum distance difference between two images from the same identity (anchor and positive) compared to two images from distinct identities (anchor and negative). The triplet loss generates a mapping such that images of an individual identity are mapped to the same point.
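The loss above can be written directly in NumPy (the margin value here is an illustrative assumption):

```python
import numpy as np

def triplet_loss(f_anchor, f_pos, f_neg, margin=0.3):
    """Standard triplet loss on embedded features:
    max(0, ||f_a - f_p||^2 - ||f_a - f_n||^2 + margin).
    The margin value 0.3 is an illustrative default."""
    d_pos = np.sum((f_anchor - f_pos) ** 2)  # anchor-positive distance
    d_neg = np.sum((f_anchor - f_neg) ** 2)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)
```

When the positive already sits much closer to the anchor than the negative, the hinge clamps the loss to zero and the triplet contributes no gradient.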
VI Evaluation

To validate our approach, we evaluate each novel contribution: the inter-class similarity team for feature discrimination, and the intra-class variability team for re-id feature extraction. We compare our approaches to the state of the art.
VI-A Brand Detection: Evaluation on Cars196
Each submodel of the inter-class similarity team performs inter-class similarity clustering on natural features to enforce naturally induced sparsity in the subsequent intra-class variability team. As described in Section IV-A, we evaluate on the well-known Cars196 dataset with inter-class similarity.
Evaluation Results. We evaluate NMI and Rank-1 on the Cars196 dataset and compare to recent approaches in Table I. To support our final model construction, we examine the impact of CBAM placement by testing three architectures: CBAM on all blocks, CBAM on the first block, and CBAM on the final block of ResNet.
Interestingly, adding CBAM throughout the network reduces performance, since the later basic blocks begin overfitting on the fine-grained features that appear exclusively in the training set. We support this observation with results from CBAM-1, where CBAM is applied to the first basic block, and CBAM-4, where CBAM is applied to the final (or fourth) basic block. The results support our observations in Section IV-B: CBAM-4 performs even worse, while CBAM-1 improves performance beyond applying CBAM everywhere.
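For reference, the channel-attention half of CBAM can be sketched in NumPy as below. This is a simplified sketch of the module from Woo et al.: the spatial-attention half is omitted, and the shared-MLP weights `w1`/`w2` are passed in explicitly rather than learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM channel attention: a shared two-layer MLP scores avg- and
    max-pooled channel descriptors, and the summed scores rescale channels.

    feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r).
    """
    avg = feat.mean(axis=(1, 2))   # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))     # (C,) max-pooled descriptor
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                  + w2 @ np.maximum(w1 @ mx, 0.0))
    return feat * att[:, None, None]  # per-channel rescaling
```

Placing this block after the first basic block reweights early, generic features; placing it later reweights fine-grained features, which is where the overfitting discussed above arises.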
With CBAM-1 and global attention, our model achieves state-of-the-art results (Table I), with an NMI of 66.03% and a Rank-1 of 82.75% (10% better than ).
VI-B Re-ID: Evaluation on VeRi-776
We now show the performance of REF-GLAMOR for intra-class variability minimization. We evaluate our overall model described in Section V-B on the VeRi-776 dataset and compare to recent approaches. To evaluate the impact of GA, we compare in Table II the performance of the re-id base model with and without global attention.
REF-GLAMOR uses only a ResNet-18 backbone, yet achieves impressive performance compared to existing approaches that use multiple feature extractors. Since we do not need to perform inter-class similarity discrimination, REF-GLAMOR-based models perform well on their respective subsets of the input space. Furthermore, performance is significantly improved by the inclusion of the GA module, since the first convolutional layer in the backbone has reduced feature map skew. We show an example in Figure 7, where GA reduces noisy or bad kernels in the first layer.
Here we discuss qualitative aspects of our teamed classifier approach for a vehicle ID framework.
VII-A Robustness of the Re-ID Team
Our team for vehicle re-id is robust to multi-scale, multi-orientation images. This is evident from our results in Table II, where we show state-of-the-art mAP compared to existing approaches. While our CMC-1 is lower than a few methods such as GSTRE  and MTML-OSG , mAP is a better measure of robustness. Compared to top-k retrieval, which measures the recall at the k-th ranking, mAP measures overall ranking quality by measuring the fraction of true positives in the retrieved results across all queries. Higher CMC-1 indicates the first result retrieved is relevant; higher mAP indicates most top-ranked results retrieved are relevant. As such, mAP is a stronger measure of robustness. Our model with GA achieves overall robust performance with an mAP of 71.08, compared to the next best mAP of 64.6 from .
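The distinction between CMC-1 and mAP can be made concrete with a small sketch, where a binary relevance list stands in for a query's ranked gallery matches:

```python
import numpy as np

def rank1(ranked_relevance):
    """CMC-1 for one query: is the top-ranked retrieval relevant?"""
    return float(ranked_relevance[0])

def average_precision(ranked_relevance):
    """AP for one query: mean of the precision at each relevant hit."""
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    cum_hits = np.cumsum(rel)                          # hits so far at rank k
    precision_at_k = cum_hits / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())
```

Averaging `average_precision` over all queries gives mAP, and averaging `rank1` gives CMC-1. A ranking like `[1, 0, 1, 0]` scores a perfect CMC-1 but an AP well below 1, which is why mAP penalizes relevant results buried deeper in the ranking.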
VII-B Extensibility of Teamed Classifiers
We have already discussed existing ensemble-based approaches (boosting/stacked ensembles) and the more recent mixture-of-experts ensembles. Our motivation in proposing teamed classifiers comes from our observation that several real-world domains have naturally induced sparsity.
Compared to the sparse mixture-of-experts model, which must learn the underlying sparse regions of the input space, our teamed classifier approach uses supervision to guarantee sparsity in the classifier teams. Furthermore, by separately training the gating function for brands/color and the classifier teams for re-id, our pipeline is more extensible to new knowledge. Whereas the sparse mixture-of-experts must be retrained to handle new types of knowledge, our approach simply adds a new gate in the form of a new member of the inter-class similarity team, along with corresponding classifiers for that gate in the re-id team. Further, as we demonstrated in the re-id evaluation, our individual team members handle unseen concepts.
Consequently, the re-id models in the re-id team need only address intra-class variability. Because inter-class similarity discrimination is performed by the inter-class similarity team, the gating models select which members of the re-id team will be used in re-id. We show an example teamed classifier pipeline in Figure 2, where we depict three re-id teams (among many): the Toyota Brand team, the Red Vehicle team, and the Black SUV team. If a color discriminator model identifies a red vehicle, its respective re-id team is used to extract identification features. This allows us to reduce instances of re-id mistakes, as shown in Figure 4.
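The gating step can be sketched as a simple dispatch. The team names, gate predicates, and attribute dictionary below are hypothetical stand-ins for the pipeline's actual gates and models, not its real API:

```python
def select_reid_team(attributes, teams):
    """Return the re-id models whose gate matches the extracted attributes.

    A vehicle can match several gates (e.g. Toyota AND red), so the
    selected members form a dynamically constructed ensemble.
    """
    return [model for gate, model in teams if gate(attributes)]

# Illustrative gates mirroring the three example teams in the text.
teams = [
    (lambda a: a.get("brand") == "toyota", "toyota_brand_reid"),
    (lambda a: a.get("color") == "red", "red_vehicle_reid"),
    (lambda a: a.get("color") == "black" and a.get("type") == "suv",
     "black_suv_reid"),
]
```

A red Toyota would activate both the Toyota Brand team and the Red Vehicle team, while a black sedan would activate none of these three.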
VII-C Real-time Performance
By offloading inter-class similarity discrimination to the inter-class similarity team, we are able to make our re-id models smaller than existing approaches. We show in Table III the approximate number of parameters in current approaches and our own, along with overall mAP.
Since our re-id models use ResNet18, they can deliver real-time performance for vehicle tracking without GPUs, with a reduced-parameter ResNet18  achieving 50 FPS on CPU.
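As a sanity check on the parameter budget, a back-of-the-envelope count of ResNet-18's convolutional weights can be computed directly. This sketch ignores batch-norm parameters and the classifier head, so it slightly undercounts the commonly cited total of roughly 11.7M:

```python
def basic_block_params(c_in, c_out):
    """Weights in one ResNet basic block: two 3x3 convs, plus a 1x1
    downsample conv when the channel count changes."""
    p = c_in * c_out * 9 + c_out * c_out * 9
    if c_in != c_out:
        p += c_in * c_out  # 1x1 shortcut projection
    return p

def resnet18_conv_params():
    """Total convolutional weights in the ResNet-18 backbone."""
    p = 3 * 64 * 49  # 7x7 stem conv
    # Four stages of two basic blocks each: 64, 128, 256, 512 channels.
    for c_in, c_out in [(64, 64), (64, 128), (128, 256), (256, 512)]:
        p += basic_block_params(c_in, c_out) + basic_block_params(c_out, c_out)
    return p
```

This yields about 11.2M convolutional weights, most of them in the final 512-channel stage, which is why channel-reduced variants shrink the model so effectively.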
We have presented a new approach for conditional computation with teamed classifiers for vehicle tracking and identification. We describe an end-to-end approach for vehicle tracking, attribute extraction, and identification using teamed classifiers. With our teamed classifier approach, we build dynamic ensembles for feature extraction that are selected at inference time. Similar to the mixture-of-experts model, we build a conditional network with sparsity that gates access to the dynamic ensembles. During pipeline construction, we build teams of models such that each member is assigned to a region of the input space. During inference, we determine the region of the input space an input belongs to and dynamically select the team members for feature extraction. Unlike the mixture-of-experts model, where the sparsity constraint is enforced during training, our teamed classifier approach exploits the naturally induced sparsity of the input space in vehicle tracking to perform supervised team generation and gating.
We implement teamed classifiers for object detection (detector team models with camera-specialized detectors), vehicle attribute extraction, and vehicle identification. Since we adapt student-teacher networks for the detector team and standard image classifiers for some attribute extractors, we focus evaluation on the novel team models: the brand discrimination team and the re-id models. We demonstrate state-of-the-art performance on each task, and show the advantages of our teamed classifier approach in Section VII, where we contrast the performance improvement of our approach with its reduced number of parameters (and consequently, operations) compared with current methods.
This research has been partially funded by National Science Foundation by CISE’s SAVI/RCN (1402266, 1550379), CNS (1421561), CRISP (1541074), SaTC (1564097) programs, an REU supplement (1545173), and gifts, grants, or contracts from Fujitsu, HP, Intel, and Georgia Tech Foundation through the John P. Imlay, Jr. Chair endowment. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or other funding agencies and companies mentioned above.
-  J. Jiang, G. Ananthanarayanan, P. Bodik, S. Sen, and I. Stoica, “Chameleon: scalable adaptation of video analytics,” in Proceedings of the 2018 Conference of the ACM Special Interest Group on Data Communication. ACM, pp. 253–266.
-  H. Qiu, X. Liu, S. Rallapalli, A. J. Bency, K. Chan, R. Urgaonkar, B. Manjunath, and R. Govindan, “Kestrel: Video analytics for augmented multi-camera vehicle tracking,” in 2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation (IoTDI). IEEE, pp. 48–59.
-  H. Zhang, G. Ananthanarayanan, P. Bodik, M. Philipose, P. Bahl, and M. J. Freedman, “Live video analytics at scale with approximation and delay-tolerance,” in 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), pp. 377–392.
-  S. Peleg, Y. Pritch, A. Rav-Acha, and A. Gutman, “Method and system for video indexing and video synopsis,” 2012, US Patent 8,311,277.
-  S. Ren, B. Liao, W. Zhu, and K. Li, “Knowledge-maximized ensemble algorithm for different types of concept drift,” Information Sciences, vol. 430, pp. 261–281, 2018.
-  J. Shan, H. Zhang, W. Liu, and Q. Liu, “Online active learning ensemble framework for drifted data streams,” IEEE Transactions on Neural Networks and Learning Systems, no. 99, pp. 1–13, 2018.
-  A. Suprem and C. Pu, “Assed: A framework for identifying physical events through adaptive social sensor data filtering,” in Proceedings of the 13th ACM International Conference on Distributed and Event-based Systems. ACM, pp. 115–126.
-  Y. Duan, L. Chen, J. Lu, and J. Zhou, “Deep embedding learning with discriminative sampling policy,” in CVPR, pp. 4964–4973.
-  Y. Movshovitz-Attias, A. Toshev, T. K. Leung, S. Ioffe, and S. Singh, “No fuss distance metric learning using proxies,” in ICCV, pp. 360–368.
-  A. Kanaci, M. Li, S. Gong, and G. Rajamanoharan, “Multi-task mutual learning for vehicle re-identification,” in CVPR Workshops, pp. 62–70.
-  X. Wang, A. Shrivastava, and A. Gupta, “A-fast-rcnn: Hard positive generation via adversary for object detection,” in CVPR, pp. 2606–2615.
-  J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
-  Z. Tang, G. Wang, H. Xiao, A. Zheng, and J.-N. Hwang, “Single-camera and inter-camera vehicle tracking and 3d speed estimation based on fusion of visual and semantic features,” in CVPR, pp. 108–115.
-  R. Kumar, E. Weill, F. Aghdasi, and P. Sriram, “Vehicle re-identification: an efficient baseline using triplet embedding,” arXiv preprint arXiv:1901.01015, 2019.
-  X. Liu, S. Zhang, Q. Huang, and W. Gao, “Ram: a region-aware deep model for vehicle re-identification,” in ICME. IEEE, pp. 1–6.
-  Y. Lou, Y. Bai, J. Liu, S. Wang, and L.-Y. Duan, “Embedding adversarial learning for vehicle re-identification,” IEEE Transactions on Image Processing, 2019.
-  J. Peng, H. Wang, and X. Fu, “Cross domain knowledge learning with dual-branch adversarial network for vehicle re-identification,” arXiv preprint arXiv:1905.00006, 2019.
-  Y. Zhou and L. Shao, “Aware attentive multi-view inference for vehicle re-identification,” in CVPR, pp. 6489–6498.
-  J. Zhu, H. Zeng, J. Huang, S. Liao, Z. Lei, C. Cai, and L. Zheng, “Vehicle re-identification using quadruple directional deep learning features,” IEEE Transactions on Intelligent Transportation Systems, 2019.
-  J. T. Zhou, J. Du, H. Zhu, X. Peng, Y. Liu, and R. S. M. Goh, “Anomalynet: An anomaly detection network for video surveillance,” IEEE Transactions on Information Forensics and Security, 2019.
-  M. Ye, X. Peng, W. Gan, W. Wu, and Y. Qiao, “Anopcn: Video anomaly detection via deep predictive coding network,” in ACM Int’l Conf on Multimedia. ACM, pp. 1805–1813.
-  Z. Wang, L. Tang, X. Liu, Z. Yao, S. Yi, J. Shao, J. Yan, S. Wang, H. Li, and X. Wang, “Orientation invariant feature embedding and spatial temporal regularization for vehicle re-id,” in ICCV, pp. 379–387.
-  A. Kanacı, X. Zhu, and S. Gong, “Vehicle re-identification in context,” in German Conference on Pattern Recognition. Springer, pp. 377–390.
-  Y. Huang, P. Peng, Y. Jin, J. Xing, C. Lang, and S. Feng, “Domain adaptive attention learning for unsupervised cross-domain person re-identification,” arXiv preprint arXiv:1905.10529, 2019.
-  Y.-J. Li, F.-E. Yang, Y.-C. Liu, Y.-Y. Yeh, X. Du, and Y.-C. Frank Wang, “Adaptation and re-identification network: An unsupervised deep transfer learning approach to person re-identification,” in CVPR Workshops, pp. 172–178.
-  D. Kang, P. Bailis, and M. Zaharia, “Blazeit: Fast exploratory video queries using neural networks,” arXiv preprint arXiv:1805.01046, 2018.
-  B. Recht, R. Roelofs, L. Schmidt, and V. Shankar, “Do cifar-10 classifiers generalize to cifar-10?” arXiv preprint arXiv:1806.00451, 2018.
-  ——, “Do imagenet classifiers generalize to imagenet?” arXiv preprint arXiv:1902.10811, 2019.
-  T. R. Hoens, R. Polikar, and N. V. Chawla, “Learning from streaming data with concept drift and imbalance: an overview,” Progress in Artificial Intelligence, vol. 1, no. 1, pp. 89–101, 2012.
-  I. Žliobaitė, M. Pechenizkiy, and J. Gama, An overview of concept drift applications. Springer, 2016, pp. 91–114.
-  N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean, “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,” arXiv preprint arXiv:1701.06538, 2017.
-  X. Wang, J. Wu, D. Zhang, Y. Su, and W. Y. Wang, “Learning to compose topic-aware mixture of experts for zero-shot video captioning,” in AAAI, vol. 33, pp. 8965–8972.
-  X. Wang, F. Yu, L. Dunlap, Y.-A. Ma, R. Wang, A. Mirhoseini, T. Darrell, and J. E. Gonzalez, “Deep mixture of experts via shallow embedding,” arXiv preprint arXiv:1806.01531, 2018.
-  X. Liu, W. Liu, T. Mei, and H. Ma, “A deep learning-based approach to progressive vehicle re-identification for urban surveillance,” in European Conference on Computer Vision. Springer, pp. 869–884.
-  Y. Lou, Y. Bai, J. Liu, S. Wang, and L. Duan, “Veri-wild: A large dataset and a new method for vehicle re-identification in the wild,” in CVPR, pp. 3235–3243.
-  P. Peng, T. Xiang, Y. Wang, M. Pontil, S. Gong, T. Huang, and Y. Tian, “Unsupervised cross-dataset transfer learning for person re-identification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1306–1315.
-  X. Wang, Z. Cai, D. Gao, and N. Vasconcelos, “Towards universal object detection by domain attention,” in CVPR, pp. 7289–7298.
-  G. Chen, W. Choi, X. Yu, T. Han, and M. Chandraker, “Learning efficient object detection models with knowledge distillation,” in NeurIPS, pp. 742–751.
-  A. Heredia and G. Barros-Gavilanes, “Video processing inside embedded devices using ssd-mobilenet to count mobility actors,” in 2019 IEEE Colombian Conference on Applications in Computational Intelligence (ColCACI). IEEE, pp. 1–6.
-  Y. Bai, Y. Lou, F. Gao, S. Wang, Y. Wu, and L.-Y. Duan, “Group-sensitive triplet embedding for vehicle reidentification,” IEEE Transactions on Multimedia, vol. 20, no. 9, pp. 2385–2399, 2018.
-  J. Sochor, J. Špaňhel, and A. Herout, “Boxcars: Improving fine-grained recognition of vehicles using 3-d bounding boxes in traffic surveillance,” IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 1, pp. 97–108, 2018.
-  J. Krause, M. Stark, J. Deng, and L. Fei-Fei, “3d object representations for fine-grained categorization,” in ICCV Workshops, pp. 554–561.
-  S. Woo, J. Park, J.-Y. Lee, and I. So Kweon, “Cbam: Convolutional block attention module,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19.
-  H. Luo, Y. Gu, X. Liao, S. Lai, and W. Jiang, “Bag of tricks and a strong baseline for deep person re-identification,” in CVPR.
-  Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, “Random erasing data augmentation,” arXiv preprint arXiv:1708.04896, 2017.
-  H. Oh Song, Y. Xiang, S. Jegelka, and S. Savarese, “Deep metric learning via lifted structured feature embedding,” in CVPR, 2016, pp. 4004–4012.
-  Y. Duan, L. Chen, J. Lu, and J. Zhou, “Deep embedding learning with discriminative sampling policy,” in CVPR, 2019, pp. 4964–4973.
-  Y. Shen, T. Xiao, H. Li, S. Yi, and X. Wang, “Learning deep neural networks for vehicle re-id with visual-spatio-temporal path proposals,” in ICCV, pp. 1900–1909.
-  I. Rieger, T. Hauenstein, S. Hettenkofer, and J.-U. Garbas, “Towards real-time head pose estimation: Exploring parameter-reduced residual networks on in-the-wild datasets,” in International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems. Springer, pp. 123–134.