Person Re-Identification (re-ID) is an important problem in computer vision-based surveillance applications, in which the goal is to identify the same person across surveillance photographs taken in a variety of nearby zones. At present, the majority of person re-ID techniques are based on Convolutional Neural Networks (CNNs), but Vision Transformers are beginning to displace pure CNNs for a variety of object recognition tasks. The primary output of a vision transformer is a global classification token, but vision transformers also yield local tokens which contain additional information about local regions of the image. Techniques that make use of these local tokens to improve classification accuracy are an active area of research. We propose a novel Locally Aware Transformer (LA-Transformer) that employs a Parts-based Convolution Baseline (PCB)-inspired strategy for aggregating globally enhanced local classification tokens into an ensemble of √N classifiers, where N is the number of patches. An additional novelty is that we incorporate blockwise fine-tuning, which further improves re-ID accuracy. LA-Transformer with blockwise fine-tuning achieves a rank-1 accuracy of 98.27% with a standard deviation of 0.13 on Market-1501 and 98.7% with a standard deviation of 0.2 on CUHK03, outperforming all other state-of-the-art published methods at the time of writing.
In recent years, Person Re-Identification (re-ID) has gained a lot of attention due to its foundational role in computer vision-based video surveillance applications. Person re-ID is predominantly treated as a feature embedding problem: given a query image and a large set of gallery images, a person re-ID model generates a feature embedding for each image and then ranks the similarity between the query and gallery vectors. This ranking can be used to re-identify the person in photographs obtained by nearby surveillance cameras.
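The embedding-and-ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the choice of plain Euclidean distance over precomputed embeddings are assumptions.

```python
import numpy as np

def rank_gallery(query_emb, gallery_embs):
    """Rank gallery images by Euclidean distance to the query embedding.

    query_emb:    (D,) feature embedding of the query image
    gallery_embs: (G, D) feature embeddings of the gallery images
    Returns gallery indices ordered from most to least similar.
    """
    dists = np.linalg.norm(gallery_embs - query_emb, axis=1)
    return np.argsort(dists)

# Toy example: gallery image 1 is the closest match to the query.
query = np.array([1.0, 0.0])
gallery = np.array([[0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
ranking = rank_gallery(query, gallery)  # → [1, 0, 2]
```

In a deployed system the gallery embeddings would be computed once and cached, so each query costs a single distance computation and sort.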
Recently, the Vision Transformer (ViT) as introduced by ViT has been gaining substantial traction for image recognition problems. While some methods for image classification (ViT; Deit) and image retrieval (VitImageRetrieval) focus only on the classification token, other approaches exploit the fact that the local tokens, which are also outputs of the transformer encoder, can be used to improve performance in many computer vision applications, including image segmentation (visualTransformer; pyramidVT; transunet), object detection (transformerbased_obj_detection; pyramidVT), and even person re-ID (TransReID). Nevertheless, approaches that make use of local and global tokens together remain an active area of research.
In the words of transformerbased_obj_detection, "The remaining tokens in the sequence are used only as features for the final class token to attend to. However, these unused outputs correspond to the input patches, and in theory, could encode local information useful for performing object detection". transformerbased_obj_detection observed that the local tokens, although theoretically influenced by global information, also retain substantial correspondence to the original input patches. One might therefore consider using these local tokens as an enhanced feature representation of the original image patches, more strongly coupling vision transformer encoders to fully connected (FC) classification techniques. This coupling of local patches with FC classification is the primary intuition behind the LA-Transformer architectural design.
Part-based Convolutional Baseline (PCB) (PCB) is a strong convolutional baseline technique for person re-ID and has inspired many state-of-the-art models (STReid; beyond_human_parts; pyramid). PCB partitions the feature vector received from the backbone network into six vertical regions and constructs an ensemble of regional classifiers with a voting strategy to determine the predicted class label. A limitation of PCB is that each regional classifier ignores the global information which is also very important for recognition and identification. Nevertheless, PCB has achieved much success despite this limitation, and as such the design of LA-Transformer uses a PCB-like strategy to combine globally enhanced local tokens.
Our work also improves on the recent results of TransReID, the first to apply Vision Transformers to person re-ID, which achieved results comparable to the current state-of-the-art CNN-based models. Our approach extends TransReID in several ways, most importantly by aggregating the globally enhanced local tokens with a PCB-like strategy that takes advantage of the spatial locality of these tokens. Although TransReID makes use of fine-grained local tokens, it does so with a ShuffleNet-like (shufflenet) Jigsaw shuffling step which discards the 2D spatial locality information inherent in the ordering of the local tokens. LA-Transformer overcomes this limitation by combining the globally enhanced local tokens with a PCB-like strategy that first preserves their ordering in correspondence with the image dimensions.
An additional novelty of our approach is the use of blockwise fine-tuning, which we find further improves the classification accuracy of LA-Transformer for person re-ID. Blockwise fine-tuning is viable as a form of regularization when training models with a large number of parameters on relatively small in-domain datasets. ULMFit advocates blockwise fine-tuning, or gradual unfreezing, particularly when training language models, due to their large number of fully connected layers. As vision transformers also have high connectivity, we find that this approach further improves the classification accuracy of LA-Transformer.
This paper is organized as follows. First, we discuss related work involving Transformer architectures and other methodologies for person re-ID. Second, we describe the architecture of LA-Transformer, including the novel locally aware network and blockwise fine-tuning techniques. Finally, we present quantitative person re-ID results, including mAP and rank-1 analysis on the Market-1501 and CUHK03 datasets.
For many years CNN-based models have dominated image recognition tasks, including person re-ID. A vast body of research has sought the best strategy for extracting features with CNNs while addressing issues such as appearance ambiguity, background perturbance, partial occlusion, body misalignment, viewpoint changes, and pose variations. posesensitive proposed a Pose-Sensitive Embedding to incorporate information associated with a person's pose into the model; conditionalEmbedding used a Graph Convolutional Network (GCN) to generate a conditional feature vector based on the local correlation between image pairs; lightweight used global channel-based and part-based features; alignedreid used global pooling to extract global features and horizontal pooling followed by a CNN for local features. CNN-based methods have led to many advances in recent years and continue to be developed for person re-ID.
Another branch of techniques for person re-ID focuses on highly engineered network designs that incorporate additional domain knowledge to improve re-ID performance. PartAware used a part-aware approach in which the model performs the main task as well as auxiliary tasks for each body part. ViewpointVehicleReID and Viewpoint use viewing angles as additional features. PartLoss introduced the idea of calculating a part loss, and PCB (the Part-based Convolutional Baseline) improved on it. Even current top-performing models such as STReid use PCB along with domain-specific spatio-temporal distribution information to achieve good results on the Market-1501 dataset. In our work we incorporate PCB-like local classifiers with Vision Transformers, and we find that our model performs better when global information is passed along with the local features. LA-Transformer achieves comparable and slightly higher rank-1 accuracy than the reported results of STReid on Market-1501, and does so without the use of additional spatio-temporal information.
Interest in Vision Transformers grew initially out of attention mechanisms, which were first employed for language translation problems in NLP (firstAttention). Attention mechanisms have since been employed to great effect in image recognition. spatialattention introduced parameter-free spatial attention to integrate spatial relations into Global Average Pooling (GAP). SAM_CAM used a Spatial Attention Module (SAM) and a Channel Attention Module (CAM) to deliver prominent spatial and channel information. abdnet proposes a Position Attention Module (PAM) for semantically related pixels in the spatial domain along with CAM. Attention mechanisms continue to be an active area of research for many problems related to object detection and recognition.
Transformers were first introduced for NLP problems by Transformer, and now contribute to many new developments in machine learning. ViT introduced transformers to images by treating a 16x16 patch as a word and treating image classification as analogous to text classification. This approach showed promising results on ImageNet and was soon adopted in many image classification problems (image_tranformer; nonlocal). Object detection is another highly related problem to which vision transformers have recently been applied (DETR; transformerbased_obj_detection). transformerbased_obj_detection described a correspondence between local tokens and input patches and combined local tokens to create spatial feature maps. At present, this observed correspondence between local tokens and input patches has yet to be applied to a wide variety of computer vision problems, nor has it previously been explored in the context of person re-ID. One exception is image segmentation, where recent works are beginning to take advantage of the 2D ordering of the local tokens to produce more accurate predicted masks (visualTransformer; pyramidVT; transunet). Our approach builds upon the recent work of TransReID, the first to apply vision transformers to object and person re-ID. Although TransReID makes use of global and local tokens, it combines the local tokens using a jigsaw classification branch which shuffles the ordering of the local features. Shuffling this order ignores the observation of transformerbased_obj_detection that local features correspond strongly with input patches and therefore have a natural ordering in the form of a 2D image grid. Conversely, LA-Transformer exploits the spatial locality of these features by combining globally enhanced local tokens with a PCB-like strategy (PCB). Furthermore, LA-Transformer incorporates the blockwise fine-tuning strategy described by ULMFit as a form of regularization for high-connectivity pre-trained language models.
As such, LA-Transformer builds upon recent advances in the application of vision transformers, in tandem with novel training techniques, to achieve state-of-the-art accuracy in person re-ID.
LA-Transformer combines vision transformers with an ensemble of FC classifiers that take advantage of the 2D spatial locality of the globally enhanced local tokens. Section 3.1 describes the overall architecture, including the backbone vision transformer (section 3.1.1) and the PCB-inspired classifier network ensemble (section 3.1.2). The blockwise fine-tuning strategy is described in section 3.2. Together, these sections describe the major elements of the LA-Transformer methodology.
LA-Transformer (figure 1) consists of two main parts: a backbone network and a locally aware network. Both components are interconnected and trained as a single neural network model.
The backbone network is the ViT architecture as proposed by ViT. ViT generates N+1 output tokens. The first token, also known as the global classification token, we refer to as the global token g. The remaining N outputs are referred to as local tokens, which we denote collectively as L = {l_1, …, l_N}. Globally Enhanced Local Tokens (GELT) are obtained by combining the global token and the local tokens (g and l_i) using weighted averaging, and are arranged into a 2D spatial grid as seen in Figure 1(a). The row-wise averaged GELTs are then fed to the locally aware classification ensemble, as seen in Figure 1(b), to classify during training and to generate the feature embedding (by concatenating the averaged GELTs) during testing. These steps are described in greater detail in sections 3.1.1 and 3.1.2.
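The GELT construction and row-wise averaging can be sketched with plain numpy. The shapes follow the ViT-base configuration used here; the 0.5 global-token weight is a placeholder assumption, since the paper specifies only that a weighted average is used.

```python
import numpy as np

D, P, Q = 768, 14, 14          # embedding size, grid rows, grid columns
N = P * Q                      # number of local tokens (patches), N = 196

rng = np.random.default_rng(0)
g = rng.random(D)              # global classification token
local = rng.random((N, D))     # local tokens from the transformer encoder

lam = 0.5                      # hypothetical weight for the global token
gelt = lam * g + (1 - lam) * local   # (N, D) globally enhanced local tokens
grid = gelt.reshape(P, Q, D)         # arrange into the 2D spatial grid
row_avg = grid.mean(axis=1)          # (P, D): one averaged GELT per row
embedding = row_avg.reshape(-1)      # concatenated test-time embedding
```

Each of the P row vectors feeds one member of the classifier ensemble during training, while their concatenation serves as the retrieval embedding at test time.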
The backbone network of LA-Transformer is the ViT vision transformer (ViT). ViT requires extensive training data, on the order of 14M-300M images, to train effectively, but the Market-1501 and CUHK-03 datasets are comparatively small (Table 1), on the order of tens of thousands of images. As such we employed a pre-trained ViT model and further made use of blockwise fine-tuning to improve accuracy, as described in section 3.2.
The backbone ViT architecture takes images of size 224x224 as input, and as such the Market-1501 and CUHK-03 images are re-sampled to this resolution during training. First, the image is converted into N patches x_p^1, …, x_p^N. Each patch is then linearly projected into D dimensions using the patch embedding function f(·) (eq. 2), which is obtained using a convolution layer with a kernel size of 16x16. For non-overlapping patches, a stride equal to the kernel size is used. C is the number of channels, and D is set to 768, which represents the size of the embedding. The total number of patches N depends on the kernel size, stride, padding, and size of the image, and can be calculated using eq. 1. Assuming the padding is 0, with H, W the height and width of the image, H_k, W_k the height and width of the kernel, and S the kernel stride:

N = ((H − H_k)/S + 1) × ((W − W_k)/S + 1)     (1)
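Eq. 1 is a direct translation of the standard convolution output-size formula. A small helper makes the arithmetic concrete (the function name is ours):

```python
def num_patches(H, W, Hk, Wk, S, padding=0):
    """Number of patches produced by a convolutional patch-embedding
    layer: (rows of patches) x (columns of patches), per eq. 1."""
    rows = (H + 2 * padding - Hk) // S + 1
    cols = (W + 2 * padding - Wk) // S + 1
    return rows * cols

# Non-overlapping 16x16 patches on a 224x224 image: 14 * 14 = 196.
n = num_patches(224, 224, 16, 16, 16)  # → 196
```

With these values the patch grid is 14x14, which is also why the locally aware ensemble later uses 14 row-wise classifiers.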
Afterward, the learnable class embedding x_cls is prepended to the patch embeddings f(x_p^1), …, f(x_p^N); its output state keeps the information of the entire image and serves as the global vector. The resulting vectors are then added to position embeddings E_pos to preserve positional information. The final sequence of vectors

z_0 = [x_cls; f(x_p^1); …; f(x_p^N)] + E_pos     (2)

is fed into the transformer encoder (figure 1) to generate N+1 feature vectors, where N+1 is the number of patches plus the class embedding.
The transformer encoder consists of B blocks in total. Each block contains alternating MSA (Multi-headed Self-Attention), introduced by Transformer, and MLP sub-blocks. Layernorm (LN) is applied before the MSA and MLP sub-blocks, and a residual connection is applied after each. The input passes through all B blocks (eqs. 3 and 4), and the output of the transformer encoder is described in eq. 5:

z'_ℓ = MSA(LN(z_{ℓ−1})) + z_{ℓ−1},   ℓ = 1, …, B     (3)
z_ℓ = MLP(LN(z'_ℓ)) + z'_ℓ,   ℓ = 1, …, B     (4)
[g; l_1; …; l_N] = z_B     (5)
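A shape-level sketch of one pre-norm encoder block (eqs. 3 and 4) in numpy, with deliberate simplifications: single-head attention stands in for MSA, ReLU stands in for ViT's GELU, and bias terms are omitted.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize each token vector to zero mean and unit variance (LN)."""
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention(z, Wq, Wk, Wv):
    """Single-head self-attention (the real MSA uses multiple heads)."""
    q, k, v = z @ Wq, z @ Wk, z @ Wv
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def encoder_block(z, Wq, Wk, Wv, W1, W2):
    """Pre-norm residual block: eq. 3 (attention), then eq. 4 (MLP)."""
    z = z + attention(layer_norm(z), Wq, Wk, Wv)
    z = z + np.maximum(layer_norm(z) @ W1, 0.0) @ W2  # ReLU in place of GELU
    return z

# Toy shapes: 5 tokens of dimension 8, MLP hidden size 16.
rng = np.random.default_rng(0)
z = rng.standard_normal((5, 8))
out = encoder_block(z, *(rng.standard_normal(s) for s in
                         [(8, 8), (8, 8), (8, 8), (8, 16), (16, 8)]))
```

Stacking B = 12 such blocks and reading off the final token sequence yields the global and local tokens of eq. 5.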
While the seminal work of ViT uses only the classification token for classification, LA-Transformer makes use of all of the features in eq. 5. Though the class embedding could be removed from the backbone network, our experiments show promising results with the class embedding serving as a global vector (Table 2). From our experiments, it is clear that ViT is a good choice of backbone network for person re-ID problems. Further, we believe that any transformer-based model, such as DeiT (Deit) or DeepViT (DeepViT), could be used as the backbone network.
The Locally Aware Network is a classifier ensemble similar to the PCB technique of PCB, but with some differences. First, in PCB the input features are purely local, whereas in LA-Transformer we find that including the global vector along with the local vectors via weighted averaging increases accuracy. Second, whereas PCB divides the image into six input regions, we divide the 2D spatial grid of tokens into √N = 14 row-wise regions, as seen in Figure 1. Finally, while PCB uses a convolutional backbone, LA-Transformer uses the ViT backbone.
In Figure 1, the transformer encoder outputs N+1 feature vectors. The global token g and local tokens l_i are obtained for i = 1, …, N, where N is the number of patches. We define P as the total number of patches per row and Q as the total number of patches per column; in our case, P = Q = 14. Then we define a_i as the averaged GELT obtained after average pooling of g and the local tokens of row i as follows,

a_i = (1/Q) Σ_{j=1}^{Q} (λ·g + (1 − λ)·l_{(i−1)Q+j}),   i = 1, …, P     (6)

where λ denotes the weight given to the global token in the weighted average.
In eq. 6 all the GELTs in a row are averaged to create one local vector a_i per row. The total number of FC classifiers is equal to P = √N, and the output of LA-Transformer is as follows,
The outputs are passed through a softmax and the softmax scores are summed together; the argument of the maximum summed score gives the predicted ID of the person.
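The voting step can be sketched as follows. The classifiers here are bare weight matrices for brevity (the real FC heads would include biases and be trained jointly with the backbone); the function name is ours.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def ensemble_predict(row_tokens, classifiers):
    """Sum the softmax scores of the P row-wise FC classifiers and
    take the argmax as the predicted identity.

    row_tokens:  (P, D) averaged GELTs, one per row of the patch grid
    classifiers: list of P weight matrices, each (D, num_classes)
    """
    scores = sum(softmax(t @ W) for t, W in zip(row_tokens, classifiers))
    return int(np.argmax(scores))

# Toy example with P = 2 rows, D = 3 features, 4 identities.
rng = np.random.default_rng(0)
rows = rng.standard_normal((2, 3))
Ws = [rng.standard_normal((3, 4)) for _ in range(2)]
pred = ensemble_predict(rows, Ws)
```

Summing softmax scores rather than hard votes lets a confident classifier outweigh several uncertain ones, which is the usual rationale for soft voting in PCB-style ensembles.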
According to the recent studies of Deit and ViT, training a vision transformer from scratch requires about 14M-300M images. Person re-ID datasets are known for their small size, and training a transformer on them can quickly lead to overfitting. As such, ViT was pre-trained on ImageNet (imagenet21k) and then fine-tuned on the person re-ID datasets. We applied blockwise fine-tuning, which is closely related to the gradual unfreezing method described by ULMFit for training large language models when in-domain training data is limited.
In blockwise fine-tuning, all transformer blocks are initially frozen except for the bottleneck model. After every E epochs (where E is a hyper-parameter), one additional transformer encoder block is unfrozen and the learning rate is reduced, as described in Algorithm 1. Blockwise fine-tuning helps mitigate the risk of catastrophic forgetting of the pre-trained weights (ULMFit), and the learning rate decay reduces the gradient flow into the subsequent layers, preventing abrupt weight updates.
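The schedule can be sketched as two small helpers. The unfreezing count follows the description above; the decay factor and its lockstep with the unfreezing interval are assumptions, since Algorithm 1's exact constants are not reproduced here.

```python
def unfrozen_blocks(epoch, E=2, total_blocks=12):
    """Number of encoder blocks unfrozen at the start of a given epoch.
    Only the bottleneck/head is trainable at epoch 0; one additional
    block is unfrozen every E epochs until all blocks are trainable."""
    return min(epoch // E, total_blocks)

def decayed_lr(base_lr, epoch, gamma=0.5, E=2):
    """Hypothetical step learning-rate decay applied alongside unfreezing."""
    return base_lr * gamma ** (epoch // E)

# With E = 2 and 12 blocks: epoch 0 → 0 blocks, epoch 4 → 2 blocks,
# and all 12 blocks are unfrozen from epoch 24 onward.
```

In a framework like PyTorch, `unfrozen_blocks` would drive which blocks have `requires_grad=True` at the start of each epoch.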
LA-Transformer is trained on two benchmark datasets: Market-1501 and CUHK-03. Table 1 gives an overview of the datasets used to train the model. The Market-1501 dataset (Market1501) contains a total of 1,501 classes/identities captured by six different cameras. Of these, the train set contains 750 classes and the test set contains 751 classes. A total of 12,192 images are present in the train set. The test set is divided into a query set of 3,368 images and a gallery set of 19,744 images. The CUHK-03 dataset (cuhk03) contains a total of 1,367 classes captured by six cameras. There are 13,131 images in the train set and 1,930 images in the test set (965 in the query and 965 in the gallery set).
By convention, re-ID is evaluated with two standard metrics: Cumulative Matching Characteristics (CMC) and mean Average Precision (mAP). We apply these metrics to assess the performance of LA-Transformer and the other experiments.
ViT was pre-trained on ImageNet-21K and used both as the backbone network and as a baseline model (ViT; imagenet21k). All images are resized to 224x224, as this resolution is compatible with the backbone network. The model is trained for 30 epochs with a batch size of 32, using the Adam optimizer with a step-decay learning-rate schedule. For testing, we concatenate all of the averaged GELTs to generate the feature embedding. To efficiently calculate the Euclidean distance between the query and gallery vectors, we use the FAISS library (FAISS). All models are trained and tested on a single-GPU machine with an Nvidia RTX 2080 Ti with 11 GB VRAM and 64 GB RAM.
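FAISS accelerates the exact search performed here; a brute-force numpy equivalent, adequate for small galleries, makes the computation explicit. The function name is ours, and the output layout mirrors what `faiss.IndexFlatL2.search` returns (squared L2 distances plus indices).

```python
import numpy as np

def search(queries, gallery, k=5):
    """Brute-force L2 nearest-neighbour search, a numpy stand-in for
    a FAISS flat index. Returns (squared distances, indices) of the
    top-k gallery entries for each query row."""
    # ||q - g||^2 = ||q||^2 - 2 q.g + ||g||^2, computed for all pairs
    d2 = (np.sum(queries ** 2, axis=1, keepdims=True)
          - 2.0 * queries @ gallery.T
          + np.sum(gallery ** 2, axis=1))
    idx = np.argsort(d2, axis=1)[:, :k]
    return np.take_along_axis(d2, idx, axis=1), idx

q = np.array([[0.0, 0.0]])
g = np.array([[1.0, 0.0], [0.0, 0.5], [3.0, 3.0]])
dists, idx = search(q, g, k=2)  # nearest is gallery entry 1, then 0
```

The expansion avoids materializing per-pair difference vectors, which is what makes the same computation feasible for gallery sets the size of Market-1501's.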
Table 2. Ablation results without blockwise fine-tuning (BW-FT) and with BW-FT.
Table 2 compares the performance of variations of LA-Transformer against the same variations of the baseline ViT on the Market-1501 dataset. All six experiments are performed with and without blockwise fine-tuning. Experiment 1 is the baseline model, which uses only the global token to generate the feature embedding. Experiment 2 uses only the local tokens of the transformer encoder and exhibits the lowest rank-1 accuracy and mAP score of all the variations of ViT. Experiment 3 combines the first and second experiments by utilizing the globally enhanced local tokens. The impact of global and local features is likewise compared for LA-Transformer via three variations: global, local, and globally enhanced local tokens. All of the LA-Transformer experiments perform better than the baseline ViT and its variations, with LA-Transformer increasing rank-1 accuracy and mAP on average versus ViT over the experiments in Table 2. As with ViT, LA-Transformer with only local features achieves the lowest accuracy of the LA-Transformer experiments; we therefore conjecture that local vectors alone are not sufficient to predict the output and generate the final embedding. Nevertheless, using globally enhanced local tokens outperforms both the local-only and the global-only results in rank-1 and mAP, so LA-Transformer with globally enhanced local tokens achieves the highest rank-1 and mAP scores of all the feature embedding designs in this comparison.
Blockwise fine-tuning achieves higher rank-1 and mAP scores in every experiment compared with the corresponding experiment without blockwise fine-tuning on the Market-1501 dataset, increasing rank-1 accuracy and mAP on average across all of the experiments in this ablation study (Table 2). During blockwise fine-tuning, the hyperparameter E is set to 2, meaning that after every 2 epochs one additional block is unfrozen. The baseline ViT model has 12 blocks, so it takes 22 epochs to unfreeze and train all of the layers. However, for most models we found that the best validation score is reached before the 22nd epoch, typically after the 18th epoch, yielding 10 trainable blocks during fine-tuning. Figure 2 compares the validation results for LA-Transformer trained with and without blockwise fine-tuning: blockwise fine-tuning clearly leads to faster convergence and better results than training without it.
(Table 3 excerpt: IANet 94.4 / 83.1; DG-Net 61.1 / 65.6.)
To evaluate the performance of LA-Transformer, it is trained and evaluated five times on Market-1501 and CUHK03, and the mean results are reported in Table 3. With blockwise fine-tuning, LA-Transformer achieves a rank-1 accuracy of 98.27% with a standard deviation of 0.13 on Market-1501 and 98.7% with a standard deviation of 0.2 on CUHK03; the accuracy is lower without blockwise fine-tuning.
Table 3 compares LA-Transformer with state-of-the-art (SOTA) models on the two person re-ID benchmarks, Market-1501 and CUHK-03. On Market-1501, LA-Transformer achieves the highest reported rank-1 accuracy of all models in this comparison, outperforming the next highest SOTA model, while its mAP score lies among the top five SOTA models. On CUHK-03, LA-Transformer achieves both the highest rank-1 accuracy and the highest mAP score, outperforming the next highest SOTA model on both metrics.
We present a novel technique for person re-ID called the Locally Aware Transformer (LA-Transformer), which achieves state-of-the-art performance on the Market-1501 and CUHK-03 datasets. This approach makes two contributions toward solving the person re-ID problem. First, we show that the global token and local token outputs of vision transformers can be combined with a PCB-like strategy to improve re-ID accuracy. Second, we incorporate blockwise fine-tuning to regularize the fine-tuning of a pre-trained vision transformer backbone network. We believe that vision transformers will continue to have a major positive impact in the field of computer vision, and we are hopeful that the architectural design of LA-Transformer will lead to further innovation and new techniques that advance our understanding of person re-ID.