Research related to vehicles has attracted wide attention and made considerable progress in the field of computer vision, such as vehicle detection Hu et al. (2018); Chu et al. (2018), tracking Tang et al. (2019); Fang et al. (2019) and classification Hu et al. (2017); Ma et al. (2019). Different from the tasks above, the purpose of vehicle reID is to accurately match a target vehicle captured by multiple non-overlapping cameras, which is of great significance to intelligent transportation. Meanwhile, large amounts of video and images could be processed automatically by vehicle reID to extract meaningful information, which plays an important role in modern smart surveillance systems.
With the recent development of deep learning, many excellent deep learning-based methods Khorramshahi et al. (2019); He et al. (2019); Zhu et al. (2019); Guo et al. (2018) have been proposed for the vehicle reID task. However, many limitations still exist for real-world applications. Different from person reID Wu et al. (2018c, 2019b, 2019a); Li et al. (2017); Wu et al. (2018a) and fine-grained classification Wang et al. (2018); Yang et al. (2018); Wu et al. (2018b); Wang et al. (2017a); Wu et al. (2018d), which can extract rich features from images with various poses and colors, vehicles are generally rigid structures with solid colors, and their appearance is easily affected by varying illumination and viewpoints. Most existing works only focus on learning discriminative features while neglecting the influence of different cameras. Actually, images captured by different cameras often have obviously different styles: cameras usually differ from each other in resolution, illumination, background, etc. As Fig. 1 shows, in each row, the images with the same identity have different appearances in different camera views. This could lead to severe cross-camera bias and affect the vehicle reID task. Some vehicle reID studies have also noticed these challenges and thus prefer to make use of spatial-temporal information and license plates to achieve promising results. However, spatial-temporal information is not annotated in some datasets. Besides that, high-resolution images of the front or rear viewpoints are required for license plate recognition, which is impractical in real-world scenes.
To solve these problems, some methods consider learning global features from multi-view images, such as VAMI Zhou and Shao (2018) and DHMV Zhou et al. (2018). VAMI adopts a cross-view generative adversarial network to transform features into a global multi-view feature representation. DHMV aims to learn transformations across different viewpoints for inferring the multi-view representation from one input vehicle image. There are also methods that exploit the constraints among cross-camera views by proposing cross-view losses. For instance, the MVR loss Lin et al. (2019a) introduced several latent groups to represent multiple views and ranked them by calculating intra- and inter-group losses. However, these methods only consider the influence of various viewpoints to solve the problem between different cameras and neglect the background and other factors.
The above issues prompt us to focus on the changes in images caused by different cameras. To solve the aforementioned problems, this paper proposes a cross-camera adaptation network (CCA) to smooth the bias between different cameras and learn powerful features. In our paper, each single camera is regarded as an independent domain. CCA aims at transforming the multiple domains into one common domain that has a similar background, illumination and resolution. Different from existing methods, CCA first generates vehicle images by StarGAN, which transfers images of the same vehicle from other cameras into one camera and does not augment the quantity of the original dataset. Besides, as observed in Fig. 1, the images captured by different cameras always have different backgrounds, which may interfere with the training of the vehicle reID model. Hence, to eliminate the impact of the background, the attention alignment network (AANet) is proposed to locate discriminative features. Specifically, an STN with an attention module is employed to select a series of regions from vehicle images for training a powerful reID model. The main contributions of our work can be summarized as follows:
- A cross-camera adaptation framework is proposed for better smoothing the bias between different cameras, which reduces the influence of illumination, background and resolution on the vehicle reID task by transferring images into a common space and learning powerful discriminative features.
- The attention alignment network is proposed to obtain a series of local regions for vehicle reID, which focuses on locating the meaningful parts while suppressing the background.
- Extensive experiments demonstrate that our proposed method achieves competitive performance on challenging benchmark datasets.
The rest of this paper is organized as follows. In Section 2, we review and discuss the related works. Section 3 illustrates the proposed method in detail. Experimental results and comparisons on two vehicle reID datasets are discussed in Section 4, followed by conclusions in Section 5.
2 Related Works
In this section, existing vehicle reID works are reviewed. With the prosperity of deep learning, vehicle reID has achieved some progress in recent years. Broadly speaking, these approaches could be categorized into three classes, i.e., representation learning, similarity learning and spatio-temporal correlation learning.
A series of methods attempt to identify vehicles based on visual appearance. In Zapletal and Herout (2016), 3D bounding boxes of vehicles were detected and then processed by color histograms and histograms of oriented gradients for vehicle reID. In Zhao et al. (2019), an ROIs-based vehicle reID method was proposed to detect ROIs as discriminative identifiers and then encode the structural information of a vehicle for the reID task. DHMVI Zhou et al. (2018) utilized a bi-directional LSTM loop to learn transformations across different viewpoints of vehicles, which could infer all viewpoints' information from only one input view. RAM Liu et al. (2018) was proposed for the vehicle reID task with several branches, including region and attribute branches, to extract distinct features from several overlapped local regions. MRM Peng et al. (2019) introduced a multi-region model to extract features from a series of local regions for learning powerful features for the vehicle reID task. EALN Lou et al. (2019) was introduced to improve the capability of the reID model by automatically generating hard negative samples in a specified embedding space. VAMI Zhou and Shao (2018) tried to better optimize the reID model by transforming a single-view feature into a global multi-view feature representation through a generative adversarial network. CV-GAN Zhou and Shao (2017) was conducted to generate vehicle images of various viewpoints with a generative adversarial network for training an adaptive reID model.
Apart from methods based on visual appearance, a series of metric losses have been proposed for deep feature embedding to achieve higher performance. In Liu et al. (2016a), a coupled cluster loss was proposed to push negative samples far away and pull positive images closer, which maximizes the inter-class distance and minimizes the intra-class distance to train the vehicle reID model. The GST loss Bai et al. (2018) was proposed to deal with intra-class variance in representation learning. Besides that, it introduced the mean-valued triplet loss to alleviate the negative impact of improper triplet sampling during the training stage. MGR Guo et al. (2019) was presented to further enhance the discriminative ability of the reID model by strengthening the discrimination not only between different vehicles but also between different vehicle models.
Besides, spatio-temporal information is an important cue for the vehicle reID task. Hence, some approaches exploit the spatial and temporal information of vehicle images to improve vehicle reID performance. PROVID Liu et al. (2017) employed visual features, spatial-temporal relations and license plate information with a progressive strategy to learn similarity scores between vehicle images. OIFE Wang et al. (2017b) refined the retrieval results of vehicles by utilizing the log-normal distribution to model the spatio-temporal constraints in camera networks. The Siamese-CNN+Path-LSTM Shen et al. (2017) model was proposed to incorporate complex spatio-temporal information for regularizing the reID results.
3 Cross-camera Adaptation Framework
The overall structure of the proposed framework is depicted in Fig. 2. The Cross-camera Adaptation framework (CCA) is composed of the camera transfer adversarial network and the attention alignment network. Firstly, the samples from different cameras are transferred into one domain by the camera transfer adversarial network. Then images with a similar distribution could be obtained, which are fed into the proposed attention alignment feature learning network for training the reID task. Specifically, the attention alignment network is a dual-branch network that focuses on different meaningful parts of vehicle images to improve the discriminative ability of the reID model.
In this section, we introduce our method from two aspects: 1) the camera transfer adversarial network is introduced in Section 3.1, which learns transfer mappings between different cameras; 2) the attention alignment feature learning network (AANet) is illustrated in Section 3.2, which optimizes the reID model utilizing the images generated by the camera transfer adversarial network.
3.1 Camera Transfer Adversarial Network
The same vehicle always has different appearances in different camera views, and this bias is shown in Fig. 1. In this paper, to smooth the bias between different cameras, we transfer images from different cameras into one camera, which means that all images follow a similar distribution. To achieve this, StarGAN Choi et al. (2018) is utilized as the camera transfer adversarial network. StarGAN utilizes a generator and a discriminator to implement the conversion between multiple cameras, learning the mapping relations among multiple cameras with only a single model, as shown in Fig. 3.
In StarGAN, in order to generate more realistic fake samples, an adversarial loss function is employed to obtain high-quality images, which could be written as:

$$\mathcal{L}_{adv} = \mathbb{E}_{x}\left[\log D_{src}(x)\right] + \mathbb{E}_{x,c}\left[\log\left(1 - D_{src}(G(x,c))\right)\right],$$

where the generator $G(x,c)$ generates an image conditioned on the target camera label $c$ to fool the discriminator $D$, and $D_{src}$ tries to distinguish real images from generated images. The target of StarGAN is to translate the input image $x$ into an output image $G(x,c)$ that is classified as the target domain. For this goal, a domain classifier $D_{cls}$ is added on top of $D$, which could be defined as:

$$\mathcal{L}_{cls} = \mathbb{E}_{x,c'}\left[-\log D_{cls}(c' \mid x)\right],$$

where $D_{cls}(c' \mid x)$ is the probability distribution over the camera labels of a given real image, and $c'$ represents the source camera label. To guarantee that generated images preserve the identity information of the original images, StarGAN employs the cycle consistency loss Zhu et al. (2017), which is defined as:

$$\mathcal{L}_{rec} = \mathbb{E}_{x,c,c'}\left[\left\| x - G(G(x,c), c') \right\|_{1}\right].$$
Through StarGAN, one image could be transferred into any other camera. Hence, the transferred set contains $C$ times as many images as the original dataset, where $C$ is the number of cameras. However, in our paper, we aim to transfer images into one common domain, so we only select the images of one camera for training the vehicle reID model. As illustrated in Fig. 1, images with irrelevant background or less discriminative parts of the objects of interest may confuse the reID model, which would degrade the model's performance. To solve this problem, AANet is proposed to utilize the style-translated images as the training set and guide the reID model to focus on the discriminative parts, as detailed in Section 3.2.
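As a toy illustration of the three StarGAN objectives above, the sketch below evaluates each term on random arrays. The discriminator scores, camera-label probabilities and the loss weights ($\lambda_{cls}=1$, $\lambda_{rec}=10$, StarGAN's defaults) are placeholders for illustration, not this paper's training pipeline.

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    # L_adv = E[log D_src(x)] + E[log(1 - D_src(G(x, c)))]
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def classification_loss(probs, labels):
    # L_cls = E[-log D_cls(c' | x)] for the labelled camera of each image
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def reconstruction_loss(x, x_rec):
    # L_rec = E[||x - G(G(x, c), c')||_1], the cycle-consistency term
    return np.mean(np.abs(x - x_rec))

# Toy example: 4 images, 3 camera domains
rng = np.random.default_rng(0)
x = rng.random((4, 8, 8, 3))
d_real = np.full(4, 0.9)            # discriminator scores on real images
d_fake = np.full(4, 0.2)            # discriminator scores on generated images
cls_probs = np.full((4, 3), 1 / 3)  # camera-label distribution from D_cls
labels = np.array([0, 1, 2, 0])

l_adv = adversarial_loss(d_real, d_fake)
l_cls = classification_loss(cls_probs, labels)
l_rec = reconstruction_loss(x, x)   # perfect cycle reconstruction -> 0
total = -l_adv + 1.0 * l_cls + 10.0 * l_rec  # generator minimizes this
```

Note that a perfect cycle reconstruction drives the $\ell_1$ term to zero, which is exactly what pushes the generator to preserve identity content while changing only camera style.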
3.2 Attention Alignment Network
Redundancy of background information is another important factor that obstructs vehicle reID performance. Based on the transferred images, we propose the attention alignment network (AANet) to reduce the discrepancy of attention maps across non-overlapping cameras. The AANet is designed to focus on the meaningful parts of vehicle images and neglect the background when training the reID model, which is illustrated in Fig.4.
The AANet is designed as a multi-branch structure composed of one global branch and two local-region branches. The global branch is utilized to learn context features with the attention module. For the local regions, in order to obtain key information from them, we divide the output feature map generated by several convolutional layers into two non-overlapping local regions, named "Upper-Local" and "Lower-Local", respectively. Then, the feature maps are fed into the two branches to generate different features. So, given an input vehicle image, the local-region network could generate a series of features for vehicle reID.
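The height-wise split into "Upper-Local" and "Lower-Local" regions can be illustrated with a toy feature map; the shapes here are placeholders, not the network's actual dimensions.

```python
import numpy as np

# Hypothetical backbone output: (channels, height, width)
feat = np.arange(2 * 8 * 4, dtype=np.float32).reshape(2, 8, 4)

# Split along the height axis into two non-overlapping local regions
upper_local, lower_local = np.split(feat, 2, axis=1)
```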
Specifically, to address the problem of excessive background and extract remarkable features, an alignment module based on the STN is employed in each branch. The alignment module includes three components: a localization network to learn the transformation parameters, a grid generator to calculate the coordinates on the input feature maps by applying the transformation parameters, and a bilinear sampler to make up the missing pixels. Meanwhile, as shown in Fig. 4, to focus on the meaningful parts of vehicle images and neglect the background when training the feature learning model, an attention module is introduced to generate discriminative features. In the attention module, after a global average pooling layer, we employ a Softmax layer to re-weight the feature maps and generate the mask, which could be computed as:

$$M = \mathrm{Softmax}(W * f),$$

where the operator $*$ denotes convolution and $W$ is the weight matrix. After obtaining $M$, the attended feature map could be calculated by $f' = M \odot f$, where $\odot$ denotes the element-wise product. Then the attended feature map $f'$ is fed into the subsequent structure.
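A minimal numpy sketch of this attention step, with a vector `w` standing in for the 1x1-convolution weights (an assumption for illustration, not the module's real parameterization):

```python
import numpy as np

def spatial_softmax(z):
    """Softmax over all spatial positions of a single-channel map."""
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(f, w):
    """f: (C, H, W) feature map; w: (C,) toy 1x1-conv weights.
    Returns the attended map f' = M * f with mask M = Softmax(w conv f)."""
    m = spatial_softmax(np.tensordot(w, f, axes=1))  # (H, W) attention mask
    return m[None, :, :] * f                         # broadcast over channels

rng = np.random.default_rng(0)
f = rng.standard_normal((16, 6, 6))
w = rng.standard_normal(16)
f_att = attend(f, w)
```

Because the mask sums to one over spatial positions, multiplying by it suppresses background locations while keeping the feature map's shape unchanged.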
The structure of the global branch is also a two-branch network, as introduced in Zheng et al. (2018). In our paper, for one branch, as shown in Fig. 5, ResNet50 He et al. (2016) is adopted as the base model for vehicle classification, which consists of residual units that preserve the identity mapping and maintain a deep structure. After the convolutional layers, the feature vector could be obtained. Similar to the local-region branches, the feature is fed into the attention module to obtain distinct features. Then the output feature is utilized to train the identification task with the cross-entropy (CE) loss.
At last, for all branches, the obtained features are denoted as $f_{g}$, $f_{u}$ and $f_{l}$, respectively. During the training phase of each branch in the local-region feature learning network, Fully Connected (FC) layers are added to identify vehicles with only a part of the feature maps as input. This procedure enforces the network to extract discriminative details in each part. Finally, the identity prediction is given by the FC layer with the CE loss, and the overall objective could be described as:

$$\mathcal{L}(\theta) = \mathcal{L}_{g} + \lambda_{1}\mathcal{L}_{u} + \lambda_{2}\mathcal{L}_{l},$$

where $\theta$ denotes the parameters of the deep model; $\mathcal{L}_{g}$, $\mathcal{L}_{u}$ and $\mathcal{L}_{l}$ represent the identification losses of the global branch and the two local-region branches, respectively; and $\lambda_{1}$ and $\lambda_{2}$ are the weights for the corresponding losses.
The CE loss is calculated based on softmax, which is formulated as follows:

$$\mathcal{L}_{CE} = -\sum_{i=1}^{N} \log \frac{e^{W_{y_i}^{T} x_i + b_{y_i}}}{\sum_{j=1}^{C} e^{W_{j}^{T} x_i + b_{j}}},$$

where $x_i \in \mathbb{R}^{d}$ is the $i$-th deep feature, belonging to the $y_i$-th class. For different datasets, $N$ represents the size of the mini-batch and $C$ is the number of classes in the training set. $d$ is the dimension of the output feature, $b$ represents the bias term, and $W_{j}$ denotes the $j$-th column of the weights $W$ Wen et al. (2016).
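A numeric sketch of this softmax CE loss; the feature and weight shapes below are arbitrary, and the log-sum-exp shift is a standard numerical-stability trick rather than part of the formulation.

```python
import numpy as np

def softmax_ce_loss(X, W, b, y):
    """X: (N, d) deep features; W: (d, C) weights; b: (C,) bias; y: (N,) labels."""
    logits = X @ W + b
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].sum()         # sum over the mini-batch

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 5))
b = np.zeros(5)
y = np.array([0, 2, 4, 1])
loss = softmax_ce_loss(X, W, b, y)
```

As a sanity check, all-zero features and weights give uniform class probabilities, so the loss reduces to $N \log C$.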
Specifically, during the test phase, the final feature from AANet could be described as follows:

$$f = \left[\, f_{g},\ \alpha f_{u},\ \alpha f_{l} \,\right],$$

where $\alpha$ is the weight for the part features and $[\cdot]$ denotes concatenation. The features from the different branches have the same size in our paper.
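A sketch of the descriptor fusion at test time; the value of the weight `alpha` and the feature sizes are placeholders, not the paper's settings.

```python
import numpy as np

def final_descriptor(f_g, f_u, f_l, alpha=0.5):
    """Concatenate the global feature with the weighted part features."""
    return np.concatenate([f_g, alpha * f_u, alpha * f_l])

desc = final_descriptor(np.ones(4), np.ones(2), np.ones(2))
```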
4 Experiments

In this section, we evaluate our proposed method for vehicle reID using the Cumulative Match Characteristic (CMC) curve and mean Average Precision (mAP) Lin et al. (2019b); Wu et al. (2020), which are widely adopted in vehicle reID. Besides comparing with state-of-the-art vehicle reID methods, a series of detailed studies are conducted to explore the effectiveness of the proposed method. All experiments are conducted on two vehicle reID datasets: VeRi-776 Liu et al. (2017) and VehicleID Liu et al. (2016a).
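As a rough sketch of how these two metrics are computed (not the authors' evaluation code), assuming smaller distances mean higher similarity and that the distance matrix below is a toy example:

```python
import numpy as np

def evaluate(dist, q_ids, g_ids, topk=5):
    """dist: (num_query, num_gallery) distances; returns (CMC curve, mAP)."""
    cmc = np.zeros(topk)
    aps = []
    for i, qid in enumerate(q_ids):
        order = np.argsort(dist[i])
        matches = (np.asarray(g_ids)[order] == qid).astype(float)
        if matches.sum() == 0:
            continue  # this identity has no gallery image
        # CMC: a query scores a hit at rank r if the first correct
        # gallery image appears within the top-r results
        first = int(np.flatnonzero(matches)[0])
        if first < topk:
            cmc[first:] += 1
        # AP: mean of the precision evaluated at each correct match
        hits = np.cumsum(matches)
        precision = hits / (np.arange(matches.size) + 1)
        aps.append(float((precision * matches).sum() / matches.sum()))
    return cmc / len(q_ids), float(np.mean(aps))

# Toy check: two queries whose nearest gallery images are correct
dist = np.array([[0.1, 0.5, 0.9],
                 [0.8, 0.2, 0.7]])
cmc, mAP = evaluate(dist, q_ids=[0, 1], g_ids=[0, 1, 2])
```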
4.1 Datasets and Evaluation Metrics
VeRi-776 Liu et al. (2017). This is a large-scale urban surveillance vehicle dataset for reID, which contains over 50,000 images of 776 vehicles across 20 cameras. Each vehicle is captured by 2-18 cameras with various viewpoints, illuminations and occlusions. In this dataset, 37,781 images of 576 vehicles are split as the training set and 11,579 images of 200 vehicles are employed as the test set. A subset of 1,678 images in the test set forms the query set.
VehicleID Liu et al. (2016a). This is a widely-used vehicle reID dataset, which contains 26,267 vehicles and 221,763 images in total. The training set contains 110,178 images of 13,134 vehicles. For the testing data, three subsets which contain 800, 1,600 and 2,400 vehicles are extracted for vehicle search at different scales. During the testing phase, one image is randomly selected from each class to obtain a gallery set with 800 images, and the remaining images are all utilized as probe images. The two other test sets are processed in the same way.
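The gallery/probe protocol described above can be sketched as follows; `split_gallery_probe` and its inputs are illustrative names, not part of the released dataset tooling.

```python
import random

def split_gallery_probe(samples, seed=0):
    """samples: list of (image_path, vehicle_id) pairs. One randomly chosen
    image per identity forms the gallery; all remaining images are probes."""
    rng = random.Random(seed)
    by_id = {}
    for path, vid in samples:
        by_id.setdefault(vid, []).append(path)
    gallery, probe = [], []
    for vid, paths in by_id.items():
        pick = rng.randrange(len(paths))
        for i, path in enumerate(paths):
            (gallery if i == pick else probe).append((path, vid))
    return gallery, probe

samples = [("a.jpg", 1), ("b.jpg", 1), ("c.jpg", 2), ("d.jpg", 2), ("e.jpg", 2)]
gallery, probe = split_gallery_probe(samples)
```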
4.2 Implementation Details
For the translation module, the model is trained in PyTorch Paszke et al. (2017). We utilize the Adam optimizer Li et al. (2014) with an initial learning rate of 0.0001 for the first 100 epochs, linearly decayed to 0 over the next 100 epochs. The batch size is 16. For the feature learning network, we implement the proposed vehicle reID model in the MatConvNet Van Der Maaten (2014) framework. SGD Bottou (2010) is employed with momentum to update the parameters of the network during the training procedure on both VehicleID and VeRi-776. The batch size is also set to 16. Besides that, the learning rate is set to 0.1 for the first 40 epochs and 0.01 for the last 25 epochs.
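The two learning-rate schedules described above can be sketched as follows; the function names are ours, and the StarGAN schedule holds the rate for 100 epochs before decaying linearly to 0.

```python
def stargan_lr(epoch, base_lr=1e-4, keep=100, decay=100):
    """Constant for the first `keep` epochs, then linear decay to 0."""
    if epoch < keep:
        return base_lr
    return base_lr * max(0.0, 1.0 - (epoch - keep + 1) / decay)

def reid_lr(epoch):
    """Step schedule for the reID network: 0.1 early, 0.01 afterwards."""
    return 0.1 if epoch < 40 else 0.01
```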
4.3 Comparison with the state-of-the-art methods
4.3.1 Comparison on VeRi-776
The results of the proposed method are compared with state-of-the-art methods on the VeRi-776 dataset in Table 1, which includes: (1) LOMO Liao et al. (2015); (2) DGD Xiao et al. (2016); (3) GoogLeNet Yang et al. (2015); (4) FACT+Plate-SNN+STR Liu et al. (2016b); (5) NuFACT+Plate-REC Liu et al. (2017); (6) PROVID Liu et al. (2017); (7) Siamese-Visual Shen et al. (2017); (8) Siamese-Visual+STR Shen et al. (2017); (9) Siamese-CNN+Path-LSTM Shen et al. (2017); (10) OIFE+ST Wang et al. (2017b); (11) VAMI Zhou and Shao (2018); (12) VAMI+ST Zhou and Shao (2018). From Table 1, it should be noted that the proposed method achieves the best performance among the compared methods, with rank-1 = 91.71% and mAP = 68.05% on VeRi-776. More details are analyzed as follows.
Firstly, the proposed method obtains much better performance than hand-crafted feature representation methods such as LOMO Liao et al. (2015) and DGD Xiao et al. (2016), achieving improvements of 58.41 and 50.13 points in mAP, respectively. This verifies that the features obtained from a deep model are more robust than hand-crafted features, which are severely affected by the complicated environment.
Second, compared with methods that learn multi-view features, the proposed method also shows satisfactory performance. For instance, compared with VAMI, our method has a gain of 17.92 points in mAP and 14.68 points in rank-1 accuracy. This is because our method eliminates background interference, which strongly proves that the bias between cameras has a severe influence on the vehicle reID task.
Thirdly, although our proposed method only utilizes visual information, it also shows significant improvements when compared with methods that use spatio-temporal information, such as FACT+Plate-SNN+STR Liu et al. (2016b), PROVID Liu et al. (2017), Siamese-Visual+STR Shen et al. (2017), Siamese-CNN+Path-LSTM Shen et al. (2017), OIFE+ST Wang et al. (2017b) and VAMI+ST Zhou and Shao (2018). The proposed method achieves higher mAP, rank-1 and rank-5 than all of them, which demonstrates that our method could extract more discriminative features without any information besides the vehicle images.
Table 1: Comparison results (%) on VeRi-776.

| Method | mAP | Rank-1 | Rank-5 |
|---|---|---|---|
| LOMO Liao et al. (2015) | 9.64 | 25.33 | 46.48 |
| DGD Xiao et al. (2016) | 17.92 | 50.70 | 67.52 |
| GoogLeNet Yang et al. (2015) | 17.81 | 52.12 | 66.79 |
| FACT+Plate-SNN+STR Liu et al. (2016b) | 27.77 | 61.64 | 78.78 |
| NuFACT+Plate-REC Liu et al. (2017) | 48.55 | 76.88 | 91.42 |
| PROVID Liu et al. (2017) | 53.42 | 81.56 | 95.11 |
| Siamese-Visual Shen et al. (2017) | 29.48 | 41.12 | 60.31 |
| Siamese-Visual+STR Shen et al. (2017) | 40.26 | 54.23 | 74.97 |
| Siamese-CNN+Path-LSTM Shen et al. (2017) | 58.27 | 83.49 | 90.04 |
| OIFE+ST Wang et al. (2017b) | 51.42 | 68.30 | 89.70 |
| VAMI Zhou and Shao (2018) | 50.13 | 77.03 | 90.82 |
| VAMI+ST Zhou and Shao (2018) | 61.32 | 85.92 | 91.84 |
Table 2: Comparison results (%) on VehicleID. Each group of three columns gives mAP / Rank-1 / Rank-5 for test sizes 800, 1600 and 2400; "-" denotes unreported values.

| Method | mAP | R-1 | R-5 | mAP | R-1 | R-5 | mAP | R-1 | R-5 |
|---|---|---|---|---|---|---|---|---|---|
| BOW-SIFT Liu et al. (2016b) | - | 2.81 | 4.23 | - | 3.11 | 5.22 | - | 2.11 | 3.76 |
| LOMO Liao et al. (2015) | - | 19.76 | 32.14 | - | 18.95 | 29.46 | - | 15.26 | 25.63 |
| DGD Xiao et al. (2016) | - | 44.80 | 66.28 | - | 40.25 | 65.31 | - | 37.33 | 57.82 |
| VGG+T Liu et al. (2016a) | - | 40.4 | 61.7 | - | 35.4 | 54.6 | - | 31.9 | 50.3 |
| VGG+CCL Liu et al. (2016a) | - | 43.6 | 64.2 | - | 42.8 | 66.8 | - | 32.9 | 53.3 |
| Mixed DC Liu et al. (2016a) | - | 49.0 | 73.5 | - | 42.8 | 66.8 | - | 38.2 | 61.6 |
| FACT Liu et al. (2017) | - | 49.53 | 67.96 | - | 44.63 | 64.19 | - | 39.91 | 60.49 |
| NuFACT Liu et al. (2017) | - | 48.90 | 69.51 | - | 43.64 | 65.34 | - | 38.63 | 60.72 |
| OIFE Wang et al. (2017b) | - | - | - | - | - | - | - | 67.0 | 82.9 |
| VAMI Zhou and Shao (2018) | - | 63.12 | 83.25 | - | 52.87 | 75.12 | - | 47.34 | 70.29 |
| TAMR Guo et al. (2019) | 67.64 | 66.02 | 79.71 | 63.69 | 62.90 | 76.80 | 60.97 | 59.69 | 73.87 |
4.3.2 Comparison on VehicleID
Ten methods are compared with our proposed method: (1) LOMO Liao et al. (2015); (2) DGD Xiao et al. (2016); (3) VGG+T Liu et al. (2016a); (4) VGG+CCL Liu et al. (2016a); (5) Mixed DC Liu et al. (2016a); (6) FACT Liu et al. (2017); (7) NuFACT Liu et al. (2017); (8) OIFE Wang et al. (2017b); (9) VAMI Zhou and Shao (2018); (10) TAMR Guo et al. (2019). Table 2 illustrates the rank-1, rank-5 and mAP of our method and the comparison methods on VehicleID. Firstly, it can be observed that deep learning-based methods obviously outperform traditional methods: compared with the traditional methods LOMO Liao et al. (2015) and DGD Xiao et al. (2016), the proposed method achieves gains of 55.75% and 30.71% in rank-1 on the test set of size 800, respectively. Similar improvements also occur on the other test sets. Secondly, different from VeRi-776, there are no spatio-temporal labels in VehicleID, so no compared method considers spatio-temporal information; all of them utilize appearance information from vehicle images only. The proposed method outperforms all compared deep learning-based methods on the test sets of different sizes on VehicleID, obtaining 75.51%, 73.60% and 70.08% in rank-1, respectively. This also shows that our proposed method could generate more distinct features for different vehicle reID datasets.
4.4 Evaluation of proposed method
To validate the necessity of the proposed method, several ablation experiments are conducted. The comparison results on VeRi-776 and VehicleID are presented in Table 3 and Table 4. "Original" means the training set consists of the original samples, while "Transfer" means the generated samples. "Rigid" represents that the training network does not employ the STN module and the attention module, and the feature map from ResNet50 is divided into two parts directly. "Part-n" is the descriptor of the n-th local branch. "Global" means the descriptor is only composed of the features from the global branch.
Firstly, the only difference between "Original-AANet-All" and "Transfer-AANet-All" is the source of the training images. Compared with "Original-AANet-All", "Transfer-AANet-All" achieves gains of 2.6% and 1.65% in mAP and rank-1 on VeRi-776, which demonstrates that, through the cross-camera transfer network, the bias between different cameras is reduced. Besides that, because our descriptor is learned by multiple branches in the proposed network, we design an ablation experiment analyzing the effectiveness of the global, part and fused features. "Transfer-AANet-All" is our proposed method that combines all features for the reID task. "Transfer-AANet-Part1" and "Transfer-AANet-Part2" denote the features extracted by the upper and lower branches, respectively. As reported in Table 3 and Table 4, it is worth noting that, for each group, the match rates of the independent features ("Global", "Transfer-AANet-Part1" and "Transfer-AANet-Part2") are lower than those of the combined features. For instance, on VeRi-776, compared with "Global", "Transfer-AANet-All" improves mAP by 11.34%. This shows that combining the global and part features provides more useful information.
To verify the effectiveness of the localization model, we remove the STN module and the attention module in AANet and divide the vehicle image into two rigid parts directly. On VeRi-776, compared with "Original-Rigid-All", "Original-AANet-All" achieves gains of 2.34% and 2.68% in mAP and rank-1, respectively. On VehicleID, we also observe improvements of 2.34%, 2.11%, 3.61% and 1.2% in mAP on the test sets of size 800, 1600, 2400 and 3200. All of these show that the proposed AANet could learn more discriminative features for vehicle reID.
4.5 Visualization of Results
Furthermore, to illustrate the validity of the proposed CCA, some experimental results on VehicleID are visualized in Fig. 6, which shows two groups of results. For each group, the left column shows the query images, while the images on the right-hand side are the top-5 results obtained by the proposed CCA. Vehicle images with a green border are correct results, while the other images are incorrect. For all results, the number at the top-left denotes the vehicle ID; the same vehicle ID represents the same vehicle. The camera ID is the number of the camera by which the image was captured. From Fig. 6, it can be seen that the proposed CCA achieves high accuracy and good robustness to different viewpoints and illumination.
5 Conclusion

In this paper, we propose a cross-camera adaptation framework for better smoothing the bias between different cameras, which reduces the influence of illumination, background and resolution on the vehicle reID task by transferring images into a common space and learning powerful discriminative features. Besides that, AANet is designed to obtain a series of local regions for vehicle reID, focusing on locating the meaningful parts while suppressing the background. It could also be observed that appearances under various viewpoints are totally different, which has a big impact on training the reID model. Hence, in our future studies, we aim to focus on the extension of datasets that utilize generative adversarial networks to generate vehicle images of various viewpoints to improve the performance of the reID model.
Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61370142 and 61272368, by the Fundamental Research Funds for the Central Universities under Grant 3132016352, by the Fundamental Research of the Ministry of Transport of P. R. China under Grant 2015329225300, by the Dalian Science and Technology Innovation Fund under Grant 2018J12GX037 and the Dalian Leading Talent Grant, by the Foundation of the Liaoning Key Research and Development Program, and by the China Postdoctoral Science Foundation under Grant 3620080307.
References

- Group-sensitive triplet embedding for vehicle reidentification. IEEE Transactions on Multimedia 20 (9), pp. 2385–2399. Cited by: §2.
- Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pp. 177–186. Cited by: §4.2.
- Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789–8797. Cited by: §3.1.
- Multi-task vehicle detection with region-of-interest voting. IEEE Transactions on Image Processing 27 (1), pp. 432–441. Cited by: §1.
- On-road vehicle tracking using part-based particle filter. IEEE Transactions on Intelligent Transportation Systems. Cited by: §1.
- Learning coarse-to-fine structured feature embedding for vehicle re-identification. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §1.
- Two-level attention network with multi-grain ranking loss for vehicle re-identification. IEEE Transactions on Image Processing. Cited by: §2, §4.3.2, Table 2.
- Part-regularized near-duplicate vehicle re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3997–4005. Cited by: §1.
- Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.2.
- Deep cnns with spatially weighted pooling for fine-grained car recognition. IEEE Transactions on Intelligent Transportation Systems 18 (11), pp. 3147–3156. Cited by: §1.
- SINet: a scale-insensitive convolutional neural network for fast vehicle detection. IEEE Transactions on Intelligent Transportation Systems 20 (3), pp. 1010–1019. Cited by: §1.
- A dual path modelwith adaptive attention for vehicle re-identification. arXiv preprint arXiv:1905.03397. Cited by: §1.
- Learning deep context-aware features over body and latent parts for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 384–393. Cited by: §1.
- Deepreid: deep filter pairing neural network for person re-identification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 152–159. Cited by: §4.2.
- Person re-identification by local maximal occurrence representation and metric learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2197–2206. Cited by: §4.3.1, §4.3.1, §4.3.2, Table 1, Table 2.
- Multi-view learning for vehicle re-identification. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pp. 832–837. Cited by: §1.
- Improving person re-identification by attribute and identity learning. Pattern Recognition. Cited by: 3rd item, §4.
- Deep relative distance learning: tell the difference between similar vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2167–2175. Cited by: §2, 2nd item, §4.3.2, Table 2, §4.
- Ram: a region-aware deep model for vehicle re-identification. In 2018 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. Cited by: §2.
- A deep learning-based approach to progressive vehicle re-identification for urban surveillance. In European Conference on Computer Vision, pp. 869–884. Cited by: §4.3.1, §4.3.1, Table 1, Table 2.
- Provid: progressive and multimodal vehicle reidentification for large-scale urban surveillance. IEEE Transactions on Multimedia 20 (3), pp. 645–658. Cited by: §2, 1st item, §4.3.1, §4.3.1, §4.3.2, Table 1, Table 2, §4.
- Embedding adversarial learning for vehicle re-identification. IEEE Transactions on Image Processing. Cited by: §2.
- Fine-grained vehicle classification with channel max pooling modified cnns. IEEE Transactions on Vehicular Technology 68 (4), pp. 3224–3233. Cited by: §1.
- Automatic differentiation in pytorch. Cited by: §4.2.
- Learning multi-region features for vehicle re-identification with context-based ranking method. Neurocomputing. Cited by: §2.
- Learning deep neural networks for vehicle re-id with visual-spatio-temporal path proposals. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1900–1909. Cited by: §2, §4.3.1, §4.3.1, Table 1.
- Cityflow: a city-scale benchmark for multi-target multi-camera vehicle tracking and re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8797–8806. Cited by: §1.
- Accelerating t-sne using tree-based algorithms. The Journal of Machine Learning Research 15 (1), pp. 3221–3245. Cited by: §4.2.
- Effective multi-query expansions: collaborative deep networks for robust landmark retrieval. IEEE Transactions on Image Processing 26 (3), pp. 1393–1404. Cited by: §1.
- Multiview spectral clustering via structured low-rank matrix factorization. IEEE Transactions on Neural Networks and Learning Systems 29 (10), pp. 4833–4843. Cited by: §1.
- Orientation invariant feature embedding and spatial temporal regularization for vehicle re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 379–387. Cited by: §2, §4.3.1, §4.3.1, §4.3.2, Table 1, Table 2.
- A discriminative feature learning approach for deep face recognition. In European Conference on Computer Vision, pp. 499–515. Cited by: §3.2.
- Cross-entropy adversarial view adaptation for person re-identification. IEEE Transactions on Circuits and Systems for Video Technology. Cited by: §1.
- Where-and-when to look: deep siamese attention networks for video-based person re-identification. IEEE Transactions on Multimedia. Cited by: §1.
- Deep attention-based spatially recursive networks for fine-grained visual recognition. IEEE Transactions on Cybernetics 49 (5), pp. 1791–1802. Cited by: §1.
- What-and-where to match: deep spatially multiplicative integration networks for person re-identification. Pattern Recognition 76, pp. 727–738. Cited by: §1.
- 3-d personvlad: learning deep global representations for video-based person reidentification. IEEE Transactions on Neural Networks and Learning Systems. Cited by: §1.
- Cycle-consistent deep generative hashing for cross-modal retrieval. IEEE Transactions on Image Processing 28 (4), pp. 1602–1612. Cited by: §1.
- Few-shot deep adversarial learning for video-based person re-identification. IEEE Transactions on Image Processing 29 (1), pp. 1233–1245. Cited by: §4.
- Learning deep feature representations with domain guided dropout for person re-identification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1249–1258. Cited by: §4.3.1, §4.3.1, §4.3.2, Table 1, Table 2.
- A large-scale car dataset for fine-grained categorization and verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3973–3981. Cited by: §4.3.1, Table 1.
- Learning to navigate for fine-grained classification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 420–435. Cited by: §1.
- Vehicle re-identification for automatic video traffic surveillance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 25–31. Cited by: §2.
- Structural analysis of attributes for vehicle re-identification and retrieval. IEEE Transactions on Intelligent Transportation Systems. Cited by: §2.
- A discriminatively learned cnn embedding for person reidentification. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 14 (1), pp. 13. Cited by: §3.2.
- Vehicle re-identification by deep hidden multi-view inference. IEEE Transactions on Image Processing 27 (7), pp. 3275–3287. Cited by: §1, §2.
- Cross-view gan based vehicle generation for re-identification.. In BMVC, Vol. 1, pp. 1–12. Cited by: §2.
- Viewpoint-aware attentive multi-view inference for vehicle re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6489–6498. Cited by: §1, §2, §4.3.1, §4.3.1, §4.3.2, Table 1, Table 2.
- Vehicle re-identification using quadruple directional deep learning features. IEEE Transactions on Intelligent Transportation Systems. Cited by: §1.
- Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2223–2232. Cited by: §3.1.