Recently, transformers have shown strong performance on various computer vision tasks such as image classification, object detection, and instance segmentation. The motivation for introducing transformers into computer vision lies in their unique properties that convolutional neural networks (CNNs) lack, especially the ability to model long-range dependencies in the data. However, densely modeling long-range dependencies among image tokens across transformer layers is usually computationally inefficient, because images contain large regions of low-level texture and uninformative background.
As shown in the top two pathways of Fig. 1, existing methods follow two mainstream approaches to address the inefficiency of modeling long-range dependencies in vision transformers. The first is to perform structural compression based on local spatial priors, such as local linear projection Wang et al. (2021a), convolutional projection Wu et al. (2021); Heo et al. (2021), and shifted windows Liu et al. (2021). The second paradigm is non-structural token pruning. Tang et al. (2021) improve the efficiency of a pre-trained transformer network by developing a top-down, layer-by-layer token slimming approach that identifies and removes redundant tokens based on the reconstruction error of the pre-trained network; the final pruning mask is fixed for all instances. Rao et al. (2021) also propose to accelerate a pre-trained transformer network by removing redundant tokens hierarchically, but explore an unstructured and data-dependent down-sampling strategy.
In this paper, as shown in the third pathway of Fig. 1, we propose to handle the inefficiency problem in a dynamic, data-dependent way that remains compatible with structural compression methods. We refer to uninformative tokens that contribute little to the final prediction but incur computational cost when bridging redundant long-range dependencies as Placeholder Tokens. Different from simple structural compression that reduces local spatial redundancy Wang et al. (2021a); Graham et al. (2021), we propose to unstructurally and dynamically distinguish informative tokens from placeholder tokens for each instance, and to update them with different computation priorities. In contrast to searching for redundancy and pruning in a pre-trained network as in Tang et al. (2021); Rao et al. (2021), by preserving the placeholder tokens, the redundancy problem can be alleviated from the very beginning of the training process of a new model, and our method can serve as a generic plugin for most vision transformers of both flat and deep-narrow structures.
Concretely, we propose Evo-ViT, a self-motivated slow-fast token evolution method for dynamic vision transformers. We argue that since transformers model global dependencies among image tokens and learn to classify, they are naturally able to distinguish informative tokens from placeholder tokens for each instance, which is why we call the method self-motivated. Taking DeiT Touvron et al. (2020) in Fig. 2 as an example, we find that the class token of DeiT-T estimates the importance of each token for dependency modeling and the final classification objective. Especially in deeper layers (e.g., layer 10), it usually augments informative tokens with higher attention scores and has a sparse attention response, which is consistent with the visualization results provided by Chefer et al. (2021) for transformer interpretability. In shallow layers (e.g., layer 5), the effect of the class token is relatively scattered but still mainly focuses on informative regions. Thus, taking advantage of the class token, informative tokens and placeholder tokens are determined, and the preserved placeholder tokens ensure complete information flow in the shallow layers of a transformer for modeling accuracy. After the two kinds of tokens are determined, the placeholder tokens are summarized into a representative token that is evolved via the full transformer encoder simultaneously with the informative tokens in a slow and elaborate way. Then, the evolved representative token is exploited to fast-update the placeholder tokens.
We evaluate the effectiveness of the proposed Evo-ViT method on two kinds of baseline models, namely transformers of a flat structure such as DeiT Touvron et al. (2020) and transformers of a deep-narrow structure such as LeViT Graham et al. (2021), on the ImageNet Deng et al. (2009) dataset. Our self-motivated slow-fast token evolution method improves the computational throughput of DeiT by 40%-60% while maintaining comparable performance.
Recently, a series of transformer models Han et al. (2020); Khan et al. (2021); Tay et al. (2020b) have been proposed to solve various computer vision tasks. Due to its strong capability of modeling long-range dependencies, the transformer has achieved promising success in image classification Dosovitskiy et al. (2020); Touvron et al. (2020); d’Ascoli et al. (2021), object detection Carion et al. (2020); Huang et al. (2021); Liu et al. (2021); Zhu et al. (2020), and segmentation Duke et al. (2021); Zheng et al. (2021).
The pioneering works Dosovitskiy et al. (2020); Touvron et al. (2020); Jiang et al. (2021) directly split an image into fixed-size patches and transform these patches into tokens as inputs to a flat transformer. Vision Transformer (ViT) Dosovitskiy et al. (2020) is one such attempt that achieves state-of-the-art performance with large-scale pre-training. DeiT Touvron et al. (2020) tackles the data-inefficiency problem of ViT by simply adjusting training strategies and adding an additional token, alongside the class token, for knowledge distillation. To achieve better accuracy/speed trade-offs for general dense prediction, recent works Yuan et al. (2021); Heo et al. (2021); Graham et al. (2021); Wang et al. (2021a) design transformers of deep-narrow structures by adopting sub-sampling operations (e.g., strided down-sampling, local average pooling, convolutional sampling) to reduce the number of tokens in intermediate layers. These structural sub-sampling operations usually help reduce spatial redundancies among neighboring tokens and introduce some locality prior. In this paper, we propose to handle instance-wise unstructured redundancies for both flat and deep-narrow transformers.
Transformers incur high computational cost because Multi-head Self-Attention (MSA) requires quadratic space and time complexity and the Feed-Forward Network (FFN) increases the dimension of the latent features. Existing acceleration methods for transformers can be mainly categorized into sparse attention mechanisms (e.g., low-rank factorization Xu et al. (2021); Wang et al. (2020), fixed local patterns and learnable patterns Tay et al. (2020a); Beltagy et al. (2020); Liu et al. (2021)), pruning Tang et al. (2021); Rao et al. (2021); Frankle and Carbin (2018); Michel et al. (2019), knowledge distillation Sanh et al. (2019), and so on. For example, Liu et al. (2021) propose a shifted windowing scheme that brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing cross-window connections. Linformer Wang et al. (2020) is a classic low-rank method that projects the length dimension of keys and values to a lower-dimensional representation. Tang et al. (2021) present a top-down, layer-by-layer patch slimming algorithm to reduce the computational cost of pre-trained vision transformers. The patch slimming scheme is conducted under careful control of the feature reconstruction error, so that the pruned transformer network can maintain the original performance at lower computational cost. Rao et al. (2021) devise a lightweight prediction module to estimate the importance score of each token given the current features of a pre-trained transformer. The module is added to different layers to prune redundant tokens unstructurally and is supervised by a distillation loss based on the predictions of the original pre-trained transformer. In this paper, we propose to handle the redundancy problem from the very beginning of the training process of a versatile transformer.
Vision Transformer (ViT) Dosovitskiy et al. (2020) proposes a simple tokenization strategy that handles 2D images by reshaping them into flattened sequential patches and linearly projecting each patch into a latent embedding. An extra class token (CLS) is added to the sequence and serves as the image representation. Moreover, since self-attention in the transformer encoder is position-agnostic and vision applications highly need position information, ViT adds a position embedding to each token, including the CLS token. Afterwards, all tokens are passed through stacked transformer encoders, and finally the CLS token is used for classification.
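The tokenization pipeline can be sketched in a few lines of NumPy. This is an illustrative sketch, not the reference implementation: ViT-Tiny-style shapes are assumed (16×16 patches on a 224×224 image, embedding dimension 192), and the projection and position-embedding weights are random placeholders rather than learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p):
    """Split an (H, W, C) image into flattened p x p patches."""
    H, W, C = img.shape
    x = img.reshape(H // p, p, W // p, p, C)
    x = x.transpose(0, 2, 1, 3, 4)                     # (H/p, W/p, p, p, C)
    return x.reshape(-1, p * p * C)                    # (N, p*p*C)

img = rng.standard_normal((224, 224, 3))
patches = patchify(img, 16)                            # 196 patches of dim 768
W_embed = rng.standard_normal((768, 192)) * 0.02       # placeholder linear projection
tokens = patches @ W_embed                             # (196, 192) patch tokens
cls_token = np.zeros((1, 192))                         # extra CLS token
X0 = np.concatenate([cls_token, tokens], axis=0)       # (197, 192)
X0 = X0 + rng.standard_normal(X0.shape) * 0.02         # add position embedding
```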
The transformer is composed of a series of stacked encoders, where each encoder consists of two modules, namely a Multi-head Self-Attention (MSA) module and a Feed-Forward Network (FFN) module. The FFN module contains two linear transformations with a GELU activation. A residual connection is employed around both the MSA and FFN modules, together with layer normalization (LN). The input of ViT, $X_0$, and the processing of the $k$-th encoder can be expressed as

$$X_0 = [x_{cls}; x_1; \dots; x_N] + E_{pos},$$
$$Y_k = X_{k-1} + \mathrm{MSA}(\mathrm{LN}(X_{k-1})),$$
$$X_k = Y_k + \mathrm{FFN}(\mathrm{LN}(Y_k)),$$

where $x_{cls}$ and $x_1, \dots, x_N$ are the CLS and patch tokens, respectively, and $E_{pos}$ is the position embedding. $N$ and $d$ are the number of patch tokens and the dimension of the embedding.
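The per-encoder update can be sketched as follows; `msa` and `ffn` are stand-in callables (identity functions here, just to exercise the residual structure), and pre-norm layer placement is assumed, as in ViT.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize each token over its feature dimension."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def encoder(x, msa, ffn):
    """One pre-norm encoder: x + MSA(LN(x)), then x + FFN(LN(x))."""
    x = x + msa(layer_norm(x))
    x = x + ffn(layer_norm(x))
    return x

# identity stand-ins: the output of a constant input stays constant
out = encoder(np.ones((5, 8)), msa=lambda z: z, ffn=lambda z: z)
```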
Specifically, a self-attention (SA) module projects the input sequence into query, key, and value vectors (i.e., $Q$, $K$, $V$) using three learnable linear mappings $W_Q$, $W_K$, and $W_V$. Then, a weighted sum over all values in the sequence is computed through:

$$\mathrm{SA}(X) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right)V.$$
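A minimal single-head sketch of this computation, with random matrices standing in for the learned mappings:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a token sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (n, n) attention weights
    return A @ V, A

rng = np.random.default_rng(0)
X = rng.standard_normal((197, 192))               # CLS + 196 patch tokens
Wq, Wk, Wv = (rng.standard_normal((192, 64)) * 0.05 for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```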
MSA is an extension of SA: it splits the queries, keys, and values into $h$ heads, performs the attention function in parallel, and then linearly projects the concatenated outputs.
It is worth noting that one design of ViT that is very different from CNNs is the CLS token. The CLS token interacts with the patch tokens in each encoder and summarizes all the patch tokens for the final embedding. We denote the similarity scores between the CLS token and the patch tokens as the class attention $\mathbf{A}_{cls}$, formulated as:

$$\mathbf{A}_{cls} = \mathrm{softmax}\!\left(\frac{q_{cls}K^{\top}}{\sqrt{d}}\right),$$

where $q_{cls}$ is the query vector of the CLS token.
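In code, the class attention is simply the CLS row of the attention map. A single-head sketch with random stand-in projections:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n, d = 197, 64                     # CLS + 196 patch tokens, head dimension
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
q_cls = Q[0]                       # query vector of the CLS token
# class attention over the whole sequence; keep only the patch-token scores
attn_cls = softmax(q_cls @ K.T / np.sqrt(d))[1:]
```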
In ViT, the computation costs of the MSA and FFN modules are $\mathcal{O}(4nd^2 + 2n^2d)$ and $\mathcal{O}(8nd^2)$, respectively, where $n$ is the number of tokens and $d$ the embedding dimension. For pruning methods Rao et al. (2021); Tang et al. (2021), pruning tokens reduces FLOPs in both the FFN and MSA modules. Our method achieves the same efficiency while preserving the placeholder tokens for training from scratch and for versatile downstream applications, benefiting from the slow-fast token update strategy.
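Under these complexity estimates, the saving from halving the token count can be computed directly. This back-of-the-envelope sketch assumes standard MSA projections and an FFN expansion ratio of 4:

```python
def msa_flops(n, d):
    return 4 * n * d**2 + 2 * n**2 * d   # QKV/output projections + attention map

def ffn_flops(n, d, ratio=4):
    return 2 * ratio * n * d**2          # two linear layers with hidden dim ratio*d

n, d = 197, 192                          # DeiT-T-style token count and width
full = msa_flops(n, d) + ffn_flops(n, d)
half = msa_flops(n // 2, d) + ffn_flops(n // 2, d)
saving = 1 - half / full                 # slightly over 50% thanks to the n^2 term
```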
In this paper, we aim to handle the modeling inefficiency issue for each input instance from the very beginning of the training process of a versatile transformer. As shown in Fig. 3, the pipeline of Evo-ViT mainly contains two parts: the structure-preserving token selection module and the slow-fast token update module. In the structure-preserving token selection module, the informative tokens and the placeholder tokens are determined by the evolved global class attention, so that they can be updated in different manners in the subsequent slow-fast token update module. Namely, the placeholder tokens are summarized into and updated by a representative token, while the long-term dependencies and feature richness of the representative token and the informative tokens are evolved via the MSA and FFN modules.
We first elaborate on the proposed structure-preserving token selection module. Then, we introduce how to update the informative tokens and the placeholder tokens in a slow-fast way. Finally, we introduce training details such as the loss and other training strategies.
Structure preserving token selection
In this paper, we propose to preserve all the tokens and dynamically distinguish informative tokens from placeholder tokens to maintain complete information flow. The reason is that it is non-trivial to prune tokens in the shallow and middle layers of a vision transformer, especially at the beginning of its training. We explain this problem from both inter-layer and intra-layer perspectives. First, shallow and middle layers usually exhibit rapidly growing feature representation capability, so pruning tokens there brings severe information loss. Following Refiner Zhou et al. (2021) and taking DeiT-T as an example, we use CKA similarity Kornblith et al. (2019) to measure the similarity between the intermediate token features output by each encoder and the final CLS token, assuming that the CLS token is strongly correlated with classification. As shown in Fig. 4, the token features of DeiT-T keep evolving rapidly as the model goes deeper, and the final CLS token feature is quite different from the token features in shallow layers. This means that the representations in shallow or middle layers are insufficiently encoded and still diverse, which makes token pruning quite difficult. Second, tokens have low correlation with each other in the shallow layers. Following Tang et al. (2021), we also examine how the average similarity of different patch tokens varies w.r.t. network depth in the DeiT-T model to reveal redundancies. As shown in Fig. 4, the lower similarities with larger variation in the shallow layers also demonstrate the difficulty of distinguishing redundancies in shallow features.
The attention weight is the simplest and most popular means Abnar and Zuidema (2020); Wang et al. (2021b) of interpreting a model’s decisions and gaining insight into the propagation of information among tokens. The class attention weight described in Eqn. 5 reflects the collection and broadcast of information routed through the CLS token. We find that our proposed evolving global class attention can serve as a simple measure to dynamically distinguish informative tokens from placeholder tokens in a vision transformer. In Fig. 4, the distinguished informative tokens have high CKA correlations with the CLS token, while the placeholder tokens have low CKA correlations. As shown in Fig. 2, the global class attention is able to focus on the object tokens, which is consistent with the visualization results of Chefer et al. (2021). In the remainder of this section, we provide a detailed introduction of our structure-preserving token selection method.
As discussed in the Preliminaries section, the class attention is calculated by Eqn. 5. We select the tokens whose class attention scores rank in the top $k$ as the informative tokens. The remaining tokens are recognized as placeholder tokens that contain less information. Note that the placeholder tokens are kept and fast-updated rather than directly dropped.
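The selection itself reduces to a top-k over the class attention scores; a sketch, with `keep_ratio` mirroring the 0.5 used in our experiments:

```python
import numpy as np

def select_tokens(attn_cls, keep_ratio=0.5):
    """Return indices of informative and placeholder patch tokens."""
    k = int(len(attn_cls) * keep_ratio)
    order = np.argsort(-attn_cls)          # descending class attention
    return order[:k], order[k:]            # kept (slow path), placeholders (fast path)

scores = np.array([0.05, 0.40, 0.10, 0.30, 0.05, 0.10])
info_idx, ph_idx = select_tokens(scores)   # three most-attended tokens are kept
```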
For a better capability of capturing the underlying information among tokens in different layers, we propose a global class attention that augments the class attention by evolving it across layers, as shown in Fig. 3. Specifically, residual connections between class attentions are designed to facilitate the flow of attention information with some regularization effect. Mathematically,

$$\mathbf{A}_{glb}^{k} = \alpha \cdot \mathbf{A}_{glb}^{k-1} + (1-\alpha) \cdot \mathbf{A}_{cls}^{k},$$

where $\mathbf{A}_{glb}^{k}$ is the global class attention in the $k$-th layer, $\mathbf{A}_{cls}^{k}$ is the class attention in the $k$-th layer, and $\alpha$ is a trade-off parameter. We use $\mathbf{A}_{glb}^{k}$ for token selection in the $(k+1)$-th layer for stability and efficiency.
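This update is a per-layer residual mixing of class attentions, sketched below with the trade-off 0.5 used in our setup; treating the first selected layer as having no history is an assumption of the sketch:

```python
import numpy as np

def update_global_attn(global_attn, attn_cls, alpha=0.5):
    """Evolve global class attention: mix history with the current layer."""
    if global_attn is None:                  # first selected layer: no history yet
        return attn_cls
    return alpha * global_attn + (1 - alpha) * attn_cls

g = None
for attn in [np.array([0.2, 0.8]), np.array([0.6, 0.4])]:
    g = update_global_attn(g, attn)          # ends as 0.5*[0.2,0.8] + 0.5*[0.6,0.4]
```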
Slow-fast token update
Once the informative tokens and the placeholder tokens are determined by the global class attention, instead of harshly dropping the placeholder tokens as in Tang et al. (2021); Rao et al. (2021), we propose to update the tokens in a slow-fast way. As shown in Fig. 3, informative tokens are carefully evolved via the MSA and FFN modules, while placeholder tokens are coarsely summarized and updated via a representative token. We introduce our slow-fast token update strategy mathematically as follows.
For the patch tokens $X_{patch}$, we first split them into informative tokens $X_{info}$ and placeholder tokens $X_{ph}$ using the token selection strategy introduced above. Secondly, the placeholder tokens are aggregated into a representative token $x_{rep}$, as:

$$x_{rep} = \psi(X_{ph}),$$

where $\psi(\cdot)$ denotes an aggregating function such as weighted pooling or a transposed linear projection. Here we use weighted pooling based on the corresponding global class attention scores in Eqn. 6.
Then, both the informative tokens and the representative token are fed into the MSA and FFN modules, and their residuals are recorded as $R_{msa}$ and $R_{ffn}$ for the skip connections, which can be denoted by:

$$R_{msa} = \mathrm{MSA}(\mathrm{LN}([X_{info}; x_{rep}])), \quad [X_{info}; x_{rep}] \leftarrow [X_{info}; x_{rep}] + R_{msa},$$
$$R_{ffn} = \mathrm{FFN}(\mathrm{LN}([X_{info}; x_{rep}])), \quad [X_{info}; x_{rep}] \leftarrow [X_{info}; x_{rep}] + R_{ffn}.$$

Thus, the informative tokens and the representative token are updated in a slow and elaborate way.
Finally, the placeholder tokens are updated in a fast way by the residuals of $x_{rep}$:

$$X_{ph} \leftarrow X_{ph} + \mathrm{Expand}(r_{rep}),$$

where $r_{rep}$ denotes the accumulated residual of the representative token (i.e., the sum of its MSA and FFN residuals) and $\mathrm{Expand}(\cdot)$ denotes an expanding function, which is a simple copy in our method.
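Putting the pieces together, one slow-fast step can be sketched as below. `encoder` is a stand-in for a full MSA+FFN encoder; weighted pooling and a simple copy realize the aggregating and expanding functions, as in the text:

```python
import numpy as np

def slow_fast_update(tokens, info_idx, ph_idx, g_attn, encoder):
    """Slow path: encode informative tokens plus the representative token.
    Fast path: broadcast the representative token's residual to placeholders."""
    w = g_attn[ph_idx] / g_attn[ph_idx].sum()           # weighted-pooling weights
    rep = (w[:, None] * tokens[ph_idx]).sum(0, keepdims=True)
    slow_in = np.concatenate([tokens[info_idx], rep])
    slow_out = encoder(slow_in)                         # MSA + FFN (slow, elaborate)
    out = tokens.copy()
    out[info_idx] = slow_out[:-1]
    out[ph_idx] += slow_out[-1] - rep[0]                # expand residual by copy (fast)
    return out

tokens = np.arange(12, dtype=float).reshape(4, 3)
g_attn = np.array([0.4, 0.1, 0.3, 0.2])
# toy encoder that adds 1 to every token, so every token should gain +1
new = slow_fast_update(tokens, np.array([0, 2]), np.array([1, 3]),
                       g_attn, encoder=lambda x: x + 1.0)
```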
Layer-to-stage training schedule.
Our proposed token selection mechanism becomes increasingly stable and consistent as the training of a transformer progresses. Fig. 6 shows that the indices of the selected informative tokens in different layers of the same stage of a transformer gradually become similar during training. The transformer tends to augment the representations of meaningful informative tokens. Thus, we propose a layer-to-stage training strategy for further consistency and efficiency. Specifically, we conduct token selection and the slow-fast token update layer by layer for the first 200 epochs. During the remaining 100 epochs, we conduct token selection only at the beginning of each stage; then, according to the determined informative and placeholder tokens, the slow-fast update is performed as usual until the end of the stage. For transformers with a flat structure such as DeiT Touvron et al. (2020), we manually arrange 4 layers as one stage.
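The schedule amounts to choosing, per epoch, the layers at which token selection runs. A sketch under stated assumptions: zero-based layer indices, selection starting at the fifth layer, 4-layer stages, and the 200-epoch switch point from the text:

```python
def selection_layers(epoch, num_layers=12, start=4, stage=4, switch_epoch=200):
    """Layers at which token selection is performed in a given epoch."""
    layers = list(range(start, num_layers))
    if epoch < switch_epoch:
        return layers                                       # select at every layer
    return [l for l in layers if (l - start) % stage == 0]  # once per stage

early = selection_layers(10)     # layer-wise selection during the first 200 epochs
late = selection_layers(250)     # stage-wise selection afterwards
```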
Assisted CLS token loss.
Although many state-of-the-art vision transformers Wang et al. (2021a); Graham et al. (2021) remove the CLS token and use the final average-pooled features for classification, it is not difficult to add a CLS token to these models for our token selection strategy. We empirically find that the ability of the CLS token to distinguish the two kinds of tokens, as illustrated in Fig. 2, is retained in these models even without supervision on the CLS token. For better stability, we calculate classification losses based on the CLS token together with the final average-pooled features during training. Mathematically,

$$\mathcal{L} = \mathcal{L}_{cls}(f(x_{cls}), y) + \mathcal{L}_{cls}(\mathrm{AvgPool}(f(X_{patch})), y),$$

where $x_{cls}$ and $X_{patch}$ denote the CLS token and the patch tokens, respectively, and $y$ is the corresponding ground truth. $f(\cdot)$ denotes the transformer model, and $\mathcal{L}_{cls}(\cdot)$ is the classification metric function, usually realized by the cross-entropy loss. During inference, the final average-pooled features are used for classification and the CLS token is only used for token selection.
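A NumPy sketch of the combined loss, with softmax cross-entropy and two small logit vectors standing in for the CLS-token head and the average-pooled head:

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example (numerically stable)."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def assisted_cls_loss(logits_cls, logits_pool, label):
    """Supervise both the CLS-token head and the pooled-feature head."""
    return cross_entropy(logits_cls, label) + cross_entropy(logits_pool, label)

loss = assisted_cls_loss(np.array([2.0, 0.5, 0.1]),
                         np.array([1.5, 0.2, 0.3]), label=0)
```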
In this section, we demonstrate the superiority of the proposed Evo-ViT method through extensive experiments on the ImageNet-1k Deng et al. (2009) classification dataset. To demonstrate the generalization of our method, we conduct experiments on vision transformers of both flat and deep-narrow structures, i.e., DeiT Touvron et al. (2020) and LeViT Graham et al. (2021). For overall comparisons with state-of-the-art (SOTA) methods Rao et al. (2021); Tang et al. (2021); Chen et al. (2021); Pan et al. (2021), we conduct token selection and the slow-fast token update from the fifth layer of DeiT and the third layer (excluding the convolution layers) of LeViT, respectively. The selection ratio of informative tokens in all selected layers of both DeiT and LeViT is set to 0.5. The global class attention trade-off in Eqn. 6 is set to 0.5 for all layers. For fair comparisons, all models are trained for 300 epochs.
Acceleration comparisons with state-of-the-art pruning methods.
In Table 1, we compare our method with existing token pruning methods Rao et al. (2021); Pan et al. (2021); Tang et al. (2021); Chen et al. (2021). Since token pruning methods cannot recover the 2D structure and are usually designed for flat-structured transformers, we conduct the comparisons based on DeiT Touvron et al. (2020) on the ImageNet dataset. We report top-1 accuracy and throughput for performance evaluation. The throughput is measured on a single NVIDIA V100 GPU with the batch size fixed to 256, which is the same setting as DeiT. The results indicate that our method outperforms previous token pruning methods in both accuracy and efficiency. Our method accelerates inference at runtime by over 60% with a negligible accuracy drop (0.4%) on DeiT-S.
Comparisons with state-of-the-art transformer models.
Thanks to the placeholder tokens, our method preserves the spatial structure that is indispensable for most existing modern vision transformer architectures. Thus, we further apply our method to the state-of-the-art efficient transformer LeViT Graham et al. (2021), which has a deep-narrow architecture. As shown in Table 2, our method can further accelerate a deep-narrow transformer like LeViT in addition to its good performance on DeiT.
DeiT-T:

| Method | Top-1 Acc. (%) | Throughput (img/s) | Speedup (%) |
| --- | --- | --- | --- |
| Baseline Touvron et al. (2020) | 72.2 | 2536 | - |
| PS-ViT Tang et al. (2021) | 72.0 | 3563 | 40.5 |
| DynamicViT Rao et al. (2021) | 71.2 | 3890 | 53.4 |
| SViTE Chen et al. (2021) | 70.1 | 2836 | 11.8 |

DeiT-S:

| Method | Top-1 Acc. (%) | Throughput (img/s) | Speedup (%) |
| --- | --- | --- | --- |
| Baseline Touvron et al. (2020) | 79.8 | 940 | - |
| PS-ViT Tang et al. (2021) | 79.4 | 1308 | 43.6 |
| SViTE Chen et al. (2021) | 79.2 | 1215 | 29.3 |
| DynamicViT Rao et al. (2021) | 79.3 | 1479 | 57.3 |
| IA-RED² Pan et al. (2021) | 79.1 | 1360 | 44.7 |
Effectiveness of each module.
To evaluate the effectiveness of each sub-method, we add the improvements step by step in Tab. 3, on both the flat-structure DeiT and the deep-narrow-structure LeViT. The improvements include:
- Naive selection: directly prune the uninformative tokens.
- Placeholder token: preserve the uninformative tokens but do not fast-update them.
- Global attention: utilize the proposed evolved global class attention instead of the vanilla class attention for token selection.
- Fast updating: augment the placeholder tokens with fast updating.
- Layer to stage: apply the proposed layer-to-stage training strategy to further accelerate inference.
The results on DeiT show that our placeholder token strategy further improves selection performance due to its capacity to preserve complete information flow. The global attention strategy enhances the consistency of token selection across layers and achieves better performance. The fast updating strategy has less effect on DeiT than on LeViT. We believe that the performance of DeiT becomes saturated given placeholder tokens and global attention, while LeViT still has room for improvement. LeViT exploits spatial pooling for token reduction, which makes unstructured token reduction in each stage more difficult. By using the fast updating strategy, it is possible to collect some extra cues from the placeholder tokens for accuracy and to augment the feature representations. We also evaluate the layer-to-stage training strategy. The results indicate that it maintains accuracy while further accelerating inference.
Hyper-parameter analysis.
We further investigate the hyper-parameters of our method on DeiT-T, namely the keeping ratio, starting layer index, global attention trade-off, and starting epoch. We initialize these hyper-parameters as described in the Setup. During the ablation analysis, only the parameter under study is changed while the others remain fixed.
The keeping ratio denotes how many tokens are kept for the slow update in each layer. For simplicity, we set the same keeping ratio for all layers and investigate the trade-off between accuracy and inference throughput in Fig. 5. The results show that the accuracy saturates when the keeping ratio is larger than 0.5. Another interesting finding is that the accelerated model with a keeping ratio of 0.9 outperforms the full baseline by 0.2 (72.2 to 72.4), which is consistent with the conclusion in Chen et al. (2021) that properly dropping several uninformative tokens can serve as regularization for vision transformers.
| Method | DeiT-T Top-1 Acc. (%) | DeiT-T Throughput (img/s) | LeViT Top-1 Acc. (%) | LeViT Throughput (img/s) |
| --- | --- | --- | --- | --- |
| + naive selection | 70.8 | 3824 | - | - |
| + placeholder token | 71.6 | 3802 | 72.1 | 9892 |
| + global attention | 72.0 | 3730 | 72.5 | 9452 |
| + fast updating | 72.0 | 3610 | 73.2 | 9360 |
| + layer to stage | 72.0 | 3978 | 73.2 | 10008 |
The starting layer index denotes the layer at which token selection and slow-fast updating begin. Fig. 5 indicates that the accuracy becomes stable when we start from the fifth layer. We find that the accuracy drops greatly as the starting layer becomes shallower, especially for the first three layers. We believe the reason is that the features in these layers still have large variation and are not stable, as shown in Fig. 4.
The global attention trade-off in Eqn. 6 controls the dependence on previous-layer information when conducting token selection in each layer. A larger trade-off means a stronger dependence on previous information. Fig. 5 illustrates that it is best to consider the previous and current information equally.
The starting epoch denotes the epoch at which our token selection and evolution strategy begins. As shown in Fig. 5, the accuracy drops sharply when we start within the last 100 epochs. We believe that dynamic token selection requires enough training epochs to learn refined features. For accuracy and training efficiency, we start our token selection and evolution strategy from the very beginning of training.
Different token selection strategies.
| Method | Top-1 Acc. (%) | Throughput (img/s) |
| --- | --- | --- |
| global class attention | 70.8 | 3750 |
We further compare the global class attention token selection strategy with some common subsampling methods in Tab. 4 to evaluate the effectiveness of our token selection metric. For fair comparisons, we directly drop the unselected tokens instead of keeping them as placeholder tokens. We align the throughput by setting different subsampling locations in the network for each method and compare their accuracy. Tab. 4 shows that our global class attention metric outperforms the common subsampling methods in both accuracy and throughput.
We further visualize the token selection in Fig. 6 to demonstrate the performance of our method during both the training and inference stages. The left three columns show results on different layers of a well-trained DeiT-T model. The results show that our token selection method mainly focuses on objects instead of backgrounds, which means that our method can effectively discriminate the informative tokens from the placeholder tokens. The selection results tend to be consistent across layers, which supports the feasibility of our layer-to-stage training strategy. Another interesting finding is that some tokens missed in the shallow layers are retrieved in the deep layers thanks to our structure-preserving strategy. Taking the baseball images in the fourth row as an example, tokens of the bat are gradually picked up as the layers go deeper. We also investigate how the token selection evolves during training in the right three columns. The results demonstrate that some informative tokens, such as the fish tail, are assigned as placeholder tokens in the early epochs. With more training epochs, the redundancy recognition ability of our method gradually improves and finally becomes stable for discriminative token selection.
In this work, we investigate the efficiency of vision transformers by developing a self-motivated slow-fast token evolution (Evo-ViT) method, which can be applied from the very beginning of the training process of a vision transformer. For each instance, the informative tokens and placeholder tokens are determined by the evolved global class attention of the transformer. By preserving the placeholder tokens and updating them coarsely based on a representative token, both the complete information flow and the 2D spatial structure are preserved, which brings training stability and generalization to transformers of both flat and deep-narrow structures. Meanwhile, the informative tokens and the representative token are carefully evolved via the MSA and FFN modules of a vanilla transformer encoder. Experiments on DeiT indicate that the proposed Evo-ViT method accelerates the model by 40%-60% while maintaining comparable classification performance. Interesting future directions include extending our method to downstream tasks such as detection and segmentation.
- Quantifying attention flow in transformers. arXiv preprint arXiv:2005.00928.
- Longformer: the long-document transformer. arXiv preprint arXiv:2004.05150.
- End-to-end object detection with transformers. In European Conference on Computer Vision, pp. 213–229.
- Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
- Chasing sparsity in vision transformers: an end-to-end exploration. arXiv preprint arXiv:2106.04533.
- ConViT: improving vision transformers with soft convolutional inductive biases. arXiv preprint arXiv:2103.10697.
- ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
- An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
- SSTVOS: sparse spatiotemporal transformers for video object segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5912–5921.
- The lottery ticket hypothesis: finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635.
- LeViT: a vision transformer in ConvNet's clothing for faster inference. arXiv preprint arXiv:2104.01136.
- A survey on visual transformer. arXiv preprint arXiv:2012.12556.
- Rethinking spatial dimensions of vision transformers. arXiv preprint arXiv:2103.16302.
- Shuffle Transformer: rethinking spatial shuffle for vision transformer. arXiv preprint arXiv:2106.03650.
- Token labeling: training an 85.5% top-1 accuracy vision transformer with 56M parameters on ImageNet. arXiv preprint arXiv:2104.10858.
- Transformers in vision: a survey. arXiv preprint arXiv:2101.01169.
- Similarity of neural network representations revisited. In International Conference on Machine Learning, pp. 3519–3529.
- Swin Transformer: hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030.
- Are sixteen heads really better than one? arXiv preprint arXiv:1905.10650.
- IA-RED²: interpretability-aware redundancy reduction for vision transformers. arXiv preprint arXiv:2106.12620.
- DynamicViT: efficient vision transformers with dynamic token sparsification. arXiv preprint arXiv:2106.02034.
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
- Patch slimming for efficient vision transformers. arXiv preprint arXiv:2106.02852.
- Synthesizer: rethinking self-attention in transformer models. arXiv preprint arXiv:2005.00743.
- Efficient transformers: a survey. arXiv preprint arXiv:2009.06732.
- Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877.
- Linformer: self-attention with linear complexity. arXiv preprint arXiv:2006.04768.
- Pyramid Vision Transformer: a versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2102.12122.
- Evolving attention with residual convolutions. arXiv preprint arXiv:2102.12895.
- CvT: introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808.
- Co-scale conv-attentional image transformers. arXiv preprint arXiv:2104.06399.
- Tokens-to-Token ViT: training vision transformers from scratch on ImageNet. arXiv preprint arXiv:2101.11986.
- Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6881–6890.
- Refiner: refining self-attention for vision transformers. arXiv preprint arXiv:2106.03714.
- Deformable DETR: deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159.