CoaT: Co-Scale Conv-Attentional Image Transformers
In this paper, we present Co-scale conv-attentional image Transformers (CoaT), a Transformer-based image classifier equipped with co-scale and conv-attentional mechanisms. First, the co-scale mechanism maintains the integrity of Transformers' encoder branches at individual scales, while allowing representations learned at different scales to effectively communicate with each other; we design a series of serial and parallel blocks to realize the co-scale attention mechanism. Second, we devise a conv-attentional mechanism by realizing a relative position embedding formulation in the factorized attention module with an efficient convolution-like implementation. CoaT empowers image Transformers with enriched multi-scale and contextual modeling capabilities. On ImageNet, relatively small CoaT models attain superior classification results compared with similar-sized convolutional neural networks and image/vision Transformers. The effectiveness of CoaT's backbone is also illustrated on object detection and instance segmentation, demonstrating its applicability to downstream computer vision tasks.
A notable recent development in artificial intelligence is the creation of attention mechanisms and Transformers, which have made a profound impact in a range of fields including natural language processing [6, 23], document analysis, speech recognition, and computer vision [8, 2]. In the past, state-of-the-art image classifiers have been built primarily on convolutional neural networks (CNNs) [16, 15, 30, 29, 10, 42] that operate on layers of filtering processes. Recent developments [34, 8], however, show encouraging results for Transformer-based image classifiers.
In essence, both the convolution and attention operations address the fundamental representation problem for structured data (e.g. images and text) by modeling the local contents, as well as the contexts. The receptive fields in CNNs are gradually expanded through a series of convolution operations. The attention mechanism [44, 36] is, however, different from the convolution operations: (1) the receptive field at each location or token in self-attention readily covers the entire input space, since each token is “matched” with all tokens including itself; (2) the self-attention operation for each pair of tokens computes a dot product between the “query” (the token in consideration) and the “key” (the token being matched with) to weight the “value” (of the token being matched with). Moreover, although the convolution and the self-attention operations both perform a weighted sum, their weights are computed differently: in CNNs, the weights are learned during training but fixed (or gated) during testing; in the self-attention mechanism, the weights are dynamically computed based on the similarity or affinity between every pair of tokens. As a consequence, the self-similarity operation in the self-attention mechanism provides modeling means that are potentially more adaptive and general than convolution operations. In addition, the introduction of position encodings and embeddings provides Transformers with additional flexibility to model spatial configurations beyond fixed input structures.
Of course, the advantages of the attention mechanism do not come for free, since the self-attention operation computes an affinity/similarity that is more computationally demanding than the linear filtering in convolution. The early development of Transformers focused mainly on natural language processing tasks [36, 6, 23], since text is “shorter” than an image and easier to tokenize. In computer vision, self-attention has been adopted to provide added modeling capability for various applications [39, 43, 50]. With the underlying framework increasingly developed [8, 34], Transformers have started to bear fruit in computer vision [2, 8] by demonstrating enriched modeling capabilities.
In the seminal DEtection TRansformer (DETR) algorithm, Transformers are adopted to perform object detection and panoptic segmentation, but DETR still uses CNN backbones to extract the basic image features. Efforts have recently been made to build image classifiers from scratch, entirely based on Transformers [8, 34, 38]. While Transformer-based image classifiers have reported encouraging results, performance and design gaps to well-developed CNN models still exist. For example, in [8, 34], an input image is divided into a single grid of fixed patch size. In this paper, we develop Co-scale conv-attentional image Transformers (CoaT) by introducing two mechanisms of practical significance to Transformer-based image classifiers. The contributions of our work are summarized as follows:
We introduce a co-scale mechanism to image Transformers by maintaining encoder branches at separate scales while engaging cross-layer attention. Two types of co-scale blocks are developed, namely serial and parallel blocks, realizing fine-to-coarse, coarse-to-fine, and cross-scale attention for image modeling.
We design a conv-attention module that realizes relative position embeddings in the factorized attention module using a convolution-like operation, achieving significantly enhanced computational efficiency compared with vanilla self-attention layers in Transformers.
Our resulting Co-scale conv-attentional image Transformers (CoaT) learn effective representations under a modularized architecture. On ImageNet, CoaT achieves state-of-the-art classification results compared with competitive convolutional neural networks (e.g. EfficientNet), while outperforming competing Transformer-based image classifiers by a large margin [8, 34, 38].
Our work is inspired by the recent efforts [8, 34] to realize Transformer-based image classifiers. ViT demonstrates the feasibility of building Transformer-based image classifiers from scratch, but its performance on ImageNet relies on additional training data; DeiT attains results comparable to convolution-based classifiers by using an effective training strategy together with model distillation, removing ViT's additional data requirement. Both ViT and DeiT are, however, based on a single image grid of fixed patch size.
The development of our co-scale conv-attentional Transformers (CoaT) is motivated by two observations: (1) multi-scale modeling typically brings enhanced capability to representation learning [10, 25, 37]; (2) the intrinsic connection between relative position encoding and convolution makes it possible to carry out efficient self-attention with conv-like operations. As a consequence, the superior performance of the CoaT classifier shown in the experiments comes from two of our new designs in Transformers: (1) a co-scale mechanism that allows cross-layer attention; (2) a conv-attention module that realizes an efficient self-attention operation. Next, we highlight the differences between the two proposed modules and the standard operations and concepts.
Co-Scale vs. Multi-Scale. Multi-scale approaches have a long history in computer vision [40, 21]. Convolutional neural networks [16, 15, 10] naturally implement a fine-to-coarse strategy. U-Net enforces an extra coarse-to-fine route in addition to the standard fine-to-coarse path; HRNet provides further enhanced modeling capability by keeping simultaneous fine and coarse scales throughout the convolution layers. In a development parallel to ours, layers of different scales are placed in tandem for image Transformers, but that design merely carries out a fine-to-coarse strategy. The co-scale mechanism proposed here differs from the existing methods in how the responses are computed and interact with each other: CoaT consists of a series of highly modularized serial and parallel blocks to enable fine-to-coarse, coarse-to-fine, and cross-scale attention on tokenized representations. The joint attention mechanism across different scales in our co-scale module provides enhanced modeling power beyond the standard linear fusion in existing multi-scale approaches [10, 25, 37].
Conv-Attention vs. Attention. Pure attention-based models [24, 13, 50, 8, 34] have been introduced to the vision domain. [24, 13, 50] replace convolutions in ResNet-like architectures with self-attention modules for better local and non-local relation modeling. In contrast, [8, 34] directly adapt the Transformer for image recognition. Recently, there have been works [1, 4] enhancing the attention mechanism by introducing convolution. LambdaNets introduces an efficient self-attention alternative for global context modeling and employs a 3-D convolution to realize the relative position embeddings in local context modeling. CPVT designs 2-D depthwise convolutions as the conditional positional encoding after self-attention. In our conv-attention: (1) we adopt an efficient factorized attention following LambdaNets; (2) we design a depthwise convolution-based relative position encoding; and (3) we extend it to an alternative convolutional position encoding, related to CPVT. A detailed discussion of our network design and its relation to LambdaNets and CPVT can be found in Sections 4.1 and 4.2.
Transformers take as input a sequence of vector representations (i.e. tokens) $x_1, \dots, x_N$, or equivalently $X \in \mathbb{R}^{N \times C}$. The self-attention mechanism in Transformers projects each $x_i$ into corresponding query, key, and value vectors, using learned linear transformations $W^Q$, $W^K$, and $W^V$. Thus, the projection of the whole sequence generates the representations $Q, K, V \in \mathbb{R}^{N \times C}$. The scaled dot-product attention from the original Transformers is:

$$\text{Att}(X) = \text{softmax}\!\left(\frac{QK^\top}{\sqrt{C}}\right)V \quad (1)$$
The softmax outputs can be seen as an attention map from queries to keys. Note that we hide the batch size and the number of attention heads of the actual multi-head self-attention for simplified derivation. Self-attention outputs are then passed through a linear layer and a simple feed-forward network, with residual connections and layer normalization applied before and after the feed-forward network.
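To make the notation concrete, below is a minimal PyTorch sketch of Equation 1 and its surrounding encoder block; the module names are illustrative (not the authors' released code), and the batch and multi-head dimensions are omitted, as in the derivation above.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Scaled dot-product self-attention of Equation 1 (single head, no batch)."""
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.qkv = nn.Linear(dim, dim * 3)   # learned W^Q, W^K, W^V in one projection
        self.proj = nn.Linear(dim, dim)      # output linear layer

    def forward(self, x):                    # x: (N, C) token sequence
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) / self.dim ** 0.5  # (N, N) softmax logits
        return self.proj(logits.softmax(dim=-1) @ v)        # attention map times values

class EncoderBlock(nn.Module):
    """Attention then feed-forward network, with residuals and layer normalization."""
    def __init__(self, dim, mlp_ratio=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = SelfAttention(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):
        x = x + self.attn(self.norm1(x))     # residual around self-attention
        return x + self.ffn(self.norm2(x))   # residual around feed-forward network
```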
In vision Transformers [8, 34], the input sequence of vectors is formulated as the concatenation of a class token CLS and the flattened feature vectors as image tokens from the feature maps $F \in \mathbb{R}^{H \times W \times C}$, for a total length of $N = HW + 1$. Due to the high resolution of natural images in pixels (i.e. large $HW$), we cannot afford to apply self-attention at high resolution, because the softmax logits in Equation 1 have $\mathcal{O}(N^2)$ space complexity and $\mathcal{O}(N^2 C)$ time complexity for the whole sequence. To tackle this computation issue, [8, 34] tokenize the image by patches instead of pixels to reduce the length of the sequence. However, the coarse splitting (e.g. 16×16 patches) limits the capability of the model to represent details within each patch, which is essential in many vision tasks. To address this dilemma between computation and representation capability, we propose a co-scale mechanism that enables interaction between different scales to produce a rich representation, with the help of an efficient conv-attentional mechanism that lowers the computation complexity for high-resolution feature maps.
In Equation 1, the materialization of the softmax logits and attention maps leads to the $\mathcal{O}(N^2)$ space complexity and $\mathcal{O}(N^2 C)$ time complexity. Inspired by recent works [3, 28, 1] on the linearization of self-attention, we approximate the softmax attention map by factorizing it with two functions $\phi(\cdot)$ and $\psi(\cdot)$ and computing the second matrix multiplication (keys and values) first:

$$\text{FactorAtt}(X) = \phi(Q)\left(\psi(K)^\top V\right) \quad (2)$$
The factorization leads to $\mathcal{O}(NC' + C'C)$ space complexity and $\mathcal{O}(NC'C)$ time complexity for projected dimension $C'$, where both are linear in the sequence length $N$. Performer uses random projections in $\phi$ and $\psi$ for a provable approximation, but at the cost of a relatively large $C'$. Efficient-Attention applies the softmax function for both $\phi$ and $\psi$, which is efficient but causes a significant performance drop on vision tasks in our experiments. Here, we develop our factorized attention mechanism following LambdaNets, with $\phi$ as the identity function and $\psi$ as the softmax:

$$\text{FactorAtt}(X) = \frac{Q}{\sqrt{C}}\left(\text{softmax}(K)^\top V\right) \quad (3)$$
where the softmax is applied across the tokens in the sequence in an element-wise manner and the projected channels $C' = C$. Different from LambdaNets, we also add the scaling factor $1/\sqrt{C}$ back due to its normalizing effect, bringing better performance. This factorized attention takes $\mathcal{O}(NC + C^2)$ space complexity and $\mathcal{O}(NC^2)$ time complexity. It is noteworthy that the proposed factorized attention is not a direct approximation of the scaled dot-product attention, but it can still be regarded as a generalized attention mechanism modeling the feature interactions using query, key, and value vectors.
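A minimal sketch of the factorized attention in Equation 3, assuming the single-head case with $C' = C$ (function and variable names are illustrative):

```python
import torch

def factorized_attention(q, k, v):
    """q, k, v: (B, N, C). Returns (B, N, C) without materializing an N x N map."""
    c = q.shape[-1]
    context = k.softmax(dim=1).transpose(1, 2) @ v  # (B, C, C): keys-values first
    return (q / c ** 0.5) @ context                 # scaling factor 1/sqrt(C) restored

q = k = v = torch.randn(2, 196, 64)                 # 14x14 image tokens, 64 channels
out = factorized_attention(q, k, v)                 # cost is linear in sequence length
```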
Our factorized attention module mitigates the computational burden of the original scaled dot-product attention. However, because we compute $\text{softmax}(K)^\top V$ first, it can be seen as a global data-dependent linear transformation applied to every feature vector in the query map $Q$. This implies that if we have two identical query vectors $q_i = q_j$ at different positions $i$ and $j$, their corresponding self-attention outputs will be the same:

$$\text{FactorAtt}(X)_i = \text{FactorAtt}(X)_j \quad \text{if } q_i = q_j \quad (4)$$
Without the position encoding, the Transformer is only composed of linear layers and self-attention modules. Thus, the output of a token is dependent on the corresponding input without awareness of any difference in its locally nearby features. This property is unfavorable for vision tasks such as semantic segmentation (e.g. the same blue patches in the sky and the sea are segmented as the same category).
To enable vision tasks, ViT and DeiT [8, 34] insert absolute position embeddings into the input, which may have limitations in modeling the relative relations between local tokens. Instead, following the relative position encoding scheme, we can integrate learnable relative position vectors $p_{j-i}$ within a window of size $M$ to obtain a relative attention map $EV$ in the attention formulation, if the tokens are regarded as a 1-D sequence:

$$\text{RelFactorAtt}(X) = \frac{Q}{\sqrt{C}}\left(\text{softmax}(K)^\top V\right) + EV \quad (5)$$

where the encoding matrix $E \in \mathbb{R}^{N \times N}$ has elements:

$$E_{ij} = \mathbb{1}\!\left(|i - j| \le \lfloor M/2 \rfloor\right)\, q_i\, p_{j-i}^\top \quad (6)$$

in which $\mathbb{1}(\cdot)$ is an indicator function. Each element $E_{ij}$ represents the relation from query $q_i$ to the value $v_j$ within window $M$, and $(EV)_i$ aggregates all related value vectors with respect to query $q_i$. Unfortunately, the term $EV$ still requires $\mathcal{O}(N^2)$ space complexity and $\mathcal{O}(N^2 C)$ time complexity. In CoaT, we propose to simplify the term $EV$ to $\hat{E}V$ by considering each channel in the query, key and value vectors as internal heads. Thus, for each internal head (i.e. channel) $l$, we have:

$$(\hat{E}V)_i^{(l)} = \sum_{j} \mathbb{1}\!\left(|i - j| \le \lfloor M/2 \rfloor\right)\, q_i^{(l)}\, p_{j-i}^{(l)}\, v_j^{(l)} \quad (7)$$

In practice, we can use a 1-D depthwise convolution (with kernel weights $P$ formed by the relative position vectors) to compute $\hat{E}V$:

$$\hat{E}V = Q \circ \text{DepthwiseConv1D}(P, V) \quad (8)$$

where $\circ$ is the Hadamard product. It is noteworthy that in vision Transformers, we have two types of tokens, the class (CLS) token and image tokens. Thus, we use a 2-D depthwise convolution (with window size $M \times M$) and apply it only to the reshaped image tokens $Q^{\text{img}}, V^{\text{img}}$ (i.e. reshaped from $Q, V$ respectively):

$$\hat{E}V^{\text{img}} = Q^{\text{img}} \circ \text{DepthwiseConv2D}(P, V^{\text{img}}) \quad (9)$$
Based on our derivation, the depthwise convolution can be seen as a special case of relative position encoding.
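The following PyTorch sketch illustrates Equation 9; the class name and arguments are illustrative, and a single kernel size stands in for the per-head kernel sizes of 3, 5, and 7 used in our models (see below).

```python
import torch
import torch.nn as nn

class ConvRelPosEnc(nn.Module):
    """Convolutional relative position encoding: Q^img ∘ DepthwiseConv2D(P, V^img)."""
    def __init__(self, channels, window=3):
        super().__init__()
        # groups=channels makes the convolution depthwise: one kernel P^(l)
        # per channel, matching the "internal heads" of Equation 7
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=window,
                                   padding=window // 2, groups=channels)

    def forward(self, q_img, v_img, h, w):
        """q_img, v_img: (B, H*W, C) image tokens (CLS token excluded)."""
        b, n, c = v_img.shape
        v_map = v_img.transpose(1, 2).reshape(b, c, h, w)  # tokens -> 2-D feature map
        conv_v = self.depthwise(v_map).flatten(2).transpose(1, 2)
        return q_img * conv_v                              # Hadamard product with Q

crpe = ConvRelPosEnc(channels=64, window=3)
ev_hat = crpe(torch.randn(2, 196, 64), torch.randn(2, 196, 64), h=14, w=14)
```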
Convolutional Relative Position Encoding vs. Other Relative Position Encodings. The commonly used relative position encoding works in the standard scaled dot-product attention setting, since the encoding matrix $E$ is combined with the softmax logits in the attention maps, which are not materialized in our factorized attention. Related to our work, LambdaNets attempts to use a 3-D convolution to compute $EV$ directly, but this costs $\mathcal{O}(NCC')$ space complexity and $\mathcal{O}(NMCC')$ time complexity, which leads to heavy computation when the channel size is large. In contrast, our factorized attention computes $\hat{E}V$, which only takes $\mathcal{O}(NC)$ space complexity and $\mathcal{O}(NMC)$ time complexity, achieving better efficiency than LambdaNets.
We then extend the idea of convolutional relative position encoding to a general convolutional position encoding case. The convolutional relative position encoding models local position-based relationships between queries and values. Similar to the absolute position encoding used in most image Transformers [8, 34], we would like to insert the position relationship into the input image features directly to enrich the effect of the relative position encoding. In each conv-attentional module, we apply a depthwise convolution to the input features and add the resulting position-aware features back into the input features, following the standard absolute position encoding scheme (see Figure 2, lower part), which resembles the realization of conditional position encoding in CPVT.
CoaT and CoaT-Lite share the convolutional position encoding weights and the convolutional relative position encoding weights between the serial and parallel modules within the same scale. We set the convolution kernel size to 3 for the convolutional position encoding, and use kernel sizes of 3, 5, and 7 for image features from different attention heads in the convolutional relative position encoding.
The work of CPVT explores the use of convolution as a conditional position encoding by inserting it after the feed-forward network under a single scale. Our work focuses on applying convolution as a relative position encoding and a general position encoding with factorized attention in a co-scale setting.
The final conv-attentional module is shown in Figure 2: we first apply the convolutional position encoding to the image tokens from the input. Then, we feed the result into the conv-attention, which comprises the factorized attention and the convolutional relative position encoding. The resulting map is used by the subsequent feed-forward networks.
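Putting these pieces together, one conv-attentional step might be sketched as follows; this is a self-contained, single-kernel simplification (no multi-head splitting, feed-forward network, or normalization), not the official implementation.

```python
import torch
import torch.nn as nn

class ConvAttModule(nn.Module):
    """Sketch of one conv-attentional step: convolutional position encoding,
    then factorized attention plus convolutional relative position encoding."""
    def __init__(self, dim, window=3):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        make_dw = lambda: nn.Conv2d(dim, dim, window, padding=window // 2, groups=dim)
        self.cpe, self.crpe = make_dw(), make_dw()       # depthwise convolutions

    @staticmethod
    def _to_map(t, h, w):                                # (B, N, C) -> (B, C, H, W)
        return t.transpose(1, 2).reshape(t.shape[0], -1, h, w)

    @staticmethod
    def _to_seq(t):                                      # (B, C, H, W) -> (B, N, C)
        return t.flatten(2).transpose(1, 2)

    def forward(self, x, h, w):                          # x: (B, 1 + H*W, C), CLS first
        cls_tok, img = x[:, :1], x[:, 1:]
        img = img + self._to_seq(self.cpe(self._to_map(img, h, w)))  # conv pos. enc.
        x = torch.cat([cls_tok, img], dim=1)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        ctx = k.softmax(dim=1).transpose(1, 2) @ v       # factorized attention (Eq. 3)
        out = (q / q.shape[-1] ** 0.5) @ ctx
        rel = q[:, 1:] * self._to_seq(self.crpe(self._to_map(v[:, 1:], h, w)))  # Eq. 9
        return torch.cat([out[:, :1], out[:, 1:] + rel], dim=1)  # to the FFN next
```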
We show an illustration of the proposed convolutional relative position encoding strategy in Figure 3. Each query $q_i$ in the image tokens attends to all of its nearby values within the window $M$ from the value image tokens $V^{\text{img}}$. The product of the query, each obtained value, and the corresponding relative position encoding in $P$ is then summed to produce the output $(\hat{E}V)_i$. To reduce the computation, we instead treat the relative position encoding map as a convolutional kernel and convolve it with the value image tokens first. Then, we multiply the query with the result of the convolution to generate the output. Note that Figure 3 shows a simple case where the query, position encoding, and value have a single channel (corresponding to one internal head in Equation 7). For the multi-channel version (corresponding to Equations 8 and 9), the multiplication and the convolution in Figure 3 are replaced by a Hadamard product and a depthwise convolution operation.
| Model | #Params | #GFLOPs | Top-1 Acc. |
|---|---|---|---|
| CoaT-Lite Tiny (Ours) | 5.7M | 1.6 | 76.6% |
| CoaT Tiny (Ours) | 5.5M | 4.4 | 78.2% |
| CoaT-Lite Mini (Ours) | 11M | 2.0 | 78.9% |
| CoaT Mini (Ours) | 10M | 6.8 | 80.8% |
| CoaT-Lite Small (Ours) | 20M | 4.0 | 81.9% |
| Backbone | #Params (M) | AP^b | AP^b_50 | AP^b_75 | AP^m | AP^m_50 | AP^m_75 | AP^b | AP^b_50 | AP^b_75 | AP^m | AP^m_50 | AP^m_75 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CoaT-Lite Mini (Ours) | 30.7 | 39.2 | 61.5 | 42.1 | 36.0 | 58.0 | 38.5 | 41.6 | 63.0 | 45.1 | 37.6 | 59.7 | 40.2 |
| CoaT Mini (Ours) | 30.2 | 43.0 | 64.7 | 46.8 | 38.7 | 61.4 | 41.6 | 45.2 | 65.9 | 49.6 | 40.3 | 63.0 | 43.1 |
| CoaT-Lite Small (Ours) | 39.5 | 43.6 | 65.3 | 47.4 | 39.2 | 62.0 | 41.8 | 44.7 | 66.0 | 48.8 | 40.1 | 62.4 | 43.2 |

Object detection and instance segmentation results based on Mask R-CNN on COCO val2017. Two Mask R-CNN with FPN settings are compared: the first six metric columns use the 1× schedule (90K steps) and the last six use the 3× schedule (270K steps). We compare CoaT-Lite and CoaT results with ResNet and PVT; PVT results are taken from its paper and official repository.
| Backbone | AP | AP_50 | AP_75 | AP_S | AP_M | AP_L |
|---|---|---|---|---|---|---|
| DD ResNet-50 | 44.5 | - | - | 27.6 | 47.6 | 59.6 |
| DD ResNet-50 (reproduced) | 44.0 | 62.9 | 48.0 | 26.0 | 47.4 | 58.4 |
| DD CoaT-Lite Small (Ours) | 46.9 | 66.3 | 51.0 | 28.4 | 50.3 | 62.5 |

Object detection results with Deformable DETR (DD, multi-scale) on COCO val2017.
The proposed co-scale mechanism is designed to introduce cross-scale attention to image transformers. Here, we describe two types of co-scale blocks in the CoaT architecture, namely serial and parallel blocks.
A serial block (shown in Figure 5) models image representations in a reduced resolution. In a typical serial block, we first down-sample input feature maps by a certain ratio using a patch embedding layer (2D convolution layer), and flatten the reduced feature maps into a sequence of image tokens. We then concatenate image tokens with an additional CLS token, a specialized vector to perform image classification, and apply multiple conv-attentional modules as described in Section 4 to learn internal relationships among image tokens and the CLS token. Finally, we separate the CLS token from the image tokens and reshape the image tokens to 2D feature maps for the next serial block.
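As a sketch, a serial block under the description above could be implemented as follows, reusing the illustrative ConvAttModule from the conv-attention sketch; the real blocks also interleave feed-forward networks and normalization, omitted here.

```python
import torch
import torch.nn as nn

class SerialBlock(nn.Module):
    def __init__(self, in_ch, dim, down_ratio=2, depth=2):
        super().__init__()
        # patch embedding layer: a strided 2-D convolution that down-samples
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=down_ratio,
                                     stride=down_ratio)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.layers = nn.ModuleList(ConvAttModule(dim) for _ in range(depth))

    def forward(self, feat):                         # feat: (B, C_in, H, W)
        x = self.patch_embed(feat)                   # (B, dim, H/r, W/r)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # flatten to image tokens
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), tokens], dim=1)
        for layer in self.layers:                    # conv-attentional modules
            tokens = layer(tokens, h, w)
        cls_tok, img = tokens[:, 0], tokens[:, 1:]   # separate CLS from image tokens
        return cls_tok, img.transpose(1, 2).reshape(b, c, h, w)  # maps for next block
```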
We realize cross-scale attention between parallel blocks in each parallel group (shown in Figure 6). In a typical parallel group, we have sequences of input features (image tokens and the CLS token) from serial blocks at different scales. To realize fine-to-coarse, coarse-to-fine, and cross-scale attention in the parallel group, we develop two strategies: (1) direct cross-layer attention; (2) attention with feature interpolation. In this paper, we adopt attention with feature interpolation for its better empirical performance. The effectiveness of both strategies is shown in Section 6.4.
Direct cross-layer attention. In direct cross-layer attention, we form query, key, and value vectors from the input features of each scale. For attention within the same layer, we use the conv-attention (Figure 2) with the query, key, and value vectors from the current scale. For attention across different layers, we down-sample or up-sample the key and value vectors to match the resolution of the other scales. We then perform cross-attention, which extends the conv-attention with the query from the current scale and the key and value from another scale. Finally, we sum the outputs of the conv-attention and the cross-attention and apply a shared feed-forward layer. With direct cross-layer attention, cross-layer information is fused in a cross-attention fashion.
Attention with feature interpolation. Instead of performing cross-layer attention directly, we also present attention with feature interpolation. First, the input image features from different scales are processed by independent conv-attention modules. Then, we down-sample or up-sample the image features of each scale to match the other scales' dimensions using bilinear interpolation, or keep them unchanged for their own scale. The features belonging to the same scale are summed in the parallel group, and they are further passed into a shared feed-forward layer. In this way, the conv-attentional module in the next step can learn cross-layer information based on the feature interpolation in the current step.
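A sketch of one parallel-group step with feature interpolation follows; the per-scale modules are arbitrary callables standing in for the conv-attention, and CLS tokens and the shared feed-forward layer are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def parallel_group_step(feats, attn_modules):
    """feats: list of (B, C, H_s, W_s) image-token maps, one per scale."""
    feats = [m(f) for m, f in zip(attn_modules, feats)]  # independent conv-attention
    fused = []
    for i, target in enumerate(feats):
        acc = target                                     # keep the scale's own features
        for j, src in enumerate(feats):
            if j != i:                                   # bilinearly resize other scales
                acc = acc + F.interpolate(src, size=target.shape[-2:],
                                          mode='bilinear', align_corners=False)
        fused.append(acc)                                # summed features per scale
    return fused                                         # shared feed-forward follows

maps = [torch.randn(2, 64, s, s) for s in (28, 14, 7)]   # three scales
out = parallel_group_step(maps, [torch.nn.Identity()] * 3)
```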
CoaT-Lite, Figure 4 (left), processes input images with a series of serial blocks following a fine-to-coarse pyramid structure. Given an input image of size $H \times W$, each serial block down-samples the image features into a lower resolution, resulting in a sequence of four resolutions: $H/4 \times W/4$, $H/8 \times W/8$, $H/16 \times W/16$, and $H/32 \times W/32$. In CoaT-Lite, we obtain the CLS token in the last serial block, and perform image classification via a linear projection layer based on the CLS token.
Our CoaT model, shown in Figure 4 (right), consists of both serial and parallel blocks. Once we obtain multi-scale feature maps from the serial blocks, we pass them into the parallel group with three separate parallel blocks. In CoaT, we concatenate the independent CLS tokens from each feature scale, and reduce the channel dimension to perform image classification with the same procedure as CoaT-Lite.
In this paper, we explore CoaT and CoaT-Lite at three different model sizes, namely Tiny, Mini, and Small. Architecture details are shown in Table 1. For example, the Tiny models are those designed under a 5M-parameter budget constraint. Specifically, these Tiny models have four serial blocks, each with two conv-attentional modules. In the CoaT-Lite Tiny architecture, the hidden dimensions of the attention layers increase for later blocks. CoaT Tiny sets the hidden dimensions of the attention layers in the parallel group to be equal, and performs the co-scale attention within the parallel group for six steps. The Mini and Small models follow the same architecture design but with increased embedding dimensions and more conv-attentional modules within the blocks.
We perform image classification on the standard ILSVRC-2012 ImageNet dataset. The standard ImageNet benchmark contains 1.3 million images in the training set and 50K images in the validation set, covering 1000 object classes. The image cropping size is set to 224×224. For fair comparison, we perform data augmentation such as MixUp, CutMix, random erasing, repeated augmentation, and label smoothing, following identical procedures in DeiT, with the exception that stochastic depth is not used.
All experimental results of our models in Table 2 are reported at 300 epochs, consistent with previous methods. We train all models with a global batch size of 2048 with NVIDIA Automatic Mixed Precision (AMP) enabled. We adopt the AdamW optimizer with cosine learning rate decay, five warm-up epochs, and a weight decay of 0.05. The base learning rate is scaled linearly with the global batch size, following DeiT.
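A sketch of the stated optimization recipe in PyTorch (the warm-up composition and the base learning rate value are assumptions for illustration, not taken from the paper):

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import SequentialLR, LinearLR, CosineAnnealingLR

model = torch.nn.Linear(384, 1000)            # stand-in for a CoaT model
epochs, warmup_epochs = 300, 5
optimizer = AdamW(model.parameters(), lr=5e-4, weight_decay=0.05)  # base lr assumed
scheduler = SequentialLR(                     # linear warm-up, then cosine decay
    optimizer,
    schedulers=[LinearLR(optimizer, start_factor=1e-3, total_iters=warmup_epochs),
                CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs)],
    milestones=[warmup_epochs])
```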
We conduct object detection and instance segmentation experiments on the Common Objects in Context (COCO 2017) dataset. The COCO 2017 benchmark contains 118K training images and 5K validation images. We evaluate the generalization ability of CoaT in object detection and instance segmentation with the Mask R-CNN framework. We follow the identical data processing settings in Mask R-CNN and enable the feature pyramid network (FPN) to utilize multi-scale features. In addition, we perform object detection based on Deformable DETR following its data processing settings, using random horizontal flips, resizing, and cropping as augmentation techniques.
For Mask R-CNN optimization, we train the model with the ImageNet-pretrained backbone on 8 GPUs via SGD with momentum and a learning rate of 0.02. The training period contains 90K steps in the 1× setting and 270K steps in the 3× setting, following Detectron2. For Deformable DETR optimization, we train the model with the pretrained backbone for 50 epochs, using an AdamW optimizer with the default Deformable DETR hyperparameters. We reduce the learning rate by a factor of 10 at epoch 40.
Table 2 shows the top-1 accuracy of our models on the ImageNet validation set in comparison with state-of-the-art methods. Except for EfficientNets, all reported models are evaluated with 224×224 image crops. We separate model architectures into three categories: convolutional networks (ConvNets), attention-based models (non-Transformer), and Transformers. Under roughly 5M, 10M, and 20M parameter budget constraints, CoaT and CoaT-Lite surpass all reported Transformer-based architectures (see Table 2). In particular, our CoaT models bring a large performance gain over the baseline DeiT, which shows that our co-scale mechanism is essential for improving the performance of Transformer-based architectures.
Our CoaT Tiny model achieves a 21.8% error rate, a 6.0% decrease from our DeiT-Tiny baseline. CoaT Tiny also outperforms the strongly competitive EfficientNet-B0 by 1.1%. CoaT Mini surpasses PVT-Tiny and EfficientNet-B2 by 5.7% and 0.7%, respectively, with similar model sizes. CoaT-Lite also achieves strong results across the three model sizes while being competitively fast. Notably, our CoaT-Lite Small model (20M parameters) outperforms all reported Transformer-based architectures of similar or significantly larger model sizes.
We show training curves for CoaT and CoaT-Lite in Figure 7. CoaT converges significantly faster than competing image Transformers while achieving better generalization ability.
Table 3 reports CoaT object detection and instance segmentation results under the Mask R-CNN framework on the COCO val2017 dataset. Our CoaT and CoaT-Lite models show clear performance advantages over the ResNet and PVT backbones under both the 1× and 3× settings. Our CoaT Mini obtains a significant performance improvement over CoaT-Lite Mini.
We additionally perform object detection with the Deformable DETR (DD) framework in Table 4, comparing our models with the standard ResNet-50 backbone on the COCO dataset. With CoaT-Lite Small as the backbone, we achieve a 2.9% improvement in average precision (AP) over the reproduced results of Deformable DETR with ResNet-50.
We study the effectiveness of the combination of convolutional position encodings in our conv-attentional module in Table 5. Our CoaT-Lite without any convolutional position encoding performs poorly (69.0% top-1 accuracy), indicating that position encoding is essential for image Transformers. We observe substantial improvements for CoaT-Lite variants with either the convolutional position encoding alone (Conv-Pos: Figure 2, bottom) or the convolutional relative position encoding alone (Conv-Rel-Pos: Figure 2, top-right), which achieve 76.0% and 73.5% top-1 accuracy, respectively. We find that CoaT-Lite with the combination of Conv-Pos and Conv-Rel-Pos learns an even better classifier (76.6% top-1 accuracy), showing that the two position encoding schemes are complementary rather than conflicting.
For both CoaT-Lite and CoaT models, we report variants with and without the convolutional relative position encoding (Conv-Rel-Pos) in Table 6. We find consistent improvements when equipping both CoaT-Lite and CoaT models with Conv-Rel-Pos. Moreover, Conv-Rel-Pos improves performance at only a modest increase in computational overhead.
| Size | Model | #Params | #GFLOPs | Top-1 Acc. |
|---|---|---|---|---|
| Tiny | CoaT-Lite w/o relative | 5.6M | 1.6 | 76.0% |
| Tiny | CoaT w/o relative | 5.5M | 4.3 | 77.3% |
| Mini | CoaT-Lite w/o relative | 11M | 1.9 | 78.2% |
| Mini | CoaT w/o relative | 10M | 6.7 | 80.4% |
| Small | CoaT-Lite w/o relative | 19M | 3.9 | 81.4% |
In Table 7, we present results for the two co-scale variants in CoaT, direct cross-layer attention and attention with feature interpolation, and report CoaT without co-scale as a baseline. Compared to CoaT without the co-scale mechanism, both co-scale variants show significant performance improvements. Attention with feature interpolation offers a clear advantage over direct cross-layer attention, with lower computational complexity and higher accuracy.
| Size | Model | #Params | #GFLOPs | Top-1 Acc. |
|---|---|---|---|---|
| Tiny | CoaT w/o co-scale | 5.5M | 4.4 | 76.2% |
| Tiny | CoaT w/ co-scale (direct cross-layer attention) | 5.5M | 4.8 | 77.8% |
| Tiny | CoaT w/ co-scale (attention with feature interpolation) | 5.5M | 4.4 | 78.2% |
We show feature and attention visualizations of our proposed CoaT model and DeiT in Figure 8. For the sampled feature maps shown on the right side of the figure, we directly sample the first six feature maps after different kinds of attention blocks. The CLS attention maps are visualized by attending the CLS token (query) to all other spatial positions (keys) in the feature map. Note that although our factorized attention mechanism performs the matrix multiplication between keys and values during training, we are still able to materialize a CLS attention map that resembles the one from scaled dot-product attention.
Without a coarse-to-fine route, the sampled features in DeiT cannot capture the low-level structural features that are essential for downstream tasks. The feature samples from DeiT also show low feature richness, resulting in poor classification performance. In contrast, our CoaT visualization shows high-diversity multi-scale feature maps. From the visualization of the serial blocks, we see that both the high-level abstraction and the low-level parts of a dog are captured. Contexts play an important role in object recognition and semantic labeling. The attention visualization from the parallel blocks shows that multi-scale contexts are further mixed to enhance feature richness.
In this paper, we have presented a Transformer-based image classifier, Co-scale conv-attentional image Transformers (CoaT), in which cross-scale attention and efficient convolution-like attention operations are developed. CoaT's small models attain strong classification results on ImageNet, and their applicability to downstream computer vision tasks has been demonstrated on object detection and instance segmentation.
This work is supported by NSF Award IIS-1717431. Tyler Chang is partially supported by the UCSD HDSI graduate fellowship.