Associating Multi-Scale Receptive Fields for Fine-grained Recognition

05/19/2020 ∙ by Zihan Ye, et al.

Extracting and fusing part features has become key to fine-grained image recognition. Recently, the non-local (NL) module has shown excellent improvement in image recognition. However, it lacks a mechanism to model the interactions between multi-scale part features, which is vital for fine-grained recognition. In this paper, we propose a novel cross-layer non-local (CNL) module to associate multi-scale receptive fields via two operations. First, CNL computes correlations between the features of a query layer and all response layers. Second, all response features are weighted according to these correlations and added to the query features. Owing to these cross-layer interactions, our model builds spatial dependencies among multi-level layers and learns more discriminative features. In addition, the aggregation cost can be reduced by setting a low-resolution deep layer as the query layer. Experiments show that our model achieves or surpasses state-of-the-art results on three benchmark datasets for fine-grained classification. Our code can be found at github.com/FouriYe/CNL-ICIP2020.


1 Introduction

Figure 1: Receptive fields of corresponding neurons in different layers. Obviously, when a receptive field is over-sized, it contains more background region than an appropriately sized one.

Fine-grained image classification focuses on categorizing very similar subcategories, e.g., Husky vs. Alaskan Malamute among dogs. It has long been considered a challenging task because it requires both localizing discriminative regions and learning discriminative features from those regions. Early localization-based methods often adopt part detectors trained on part annotations and then extract part-specific features for fine-grained classification [1, 2, 3, 4]. However, these strongly supervised methods rely heavily on manual part annotations, which are too expensive to be widely applied in practice. Therefore, the attention mechanism [5, 6, 7, 8] has received increasing interest in recent research, as it can be implemented as a weakly supervised framework that needs no part annotations. [5] proposes a recurrent attention convolutional neural network that progressively localizes discriminative areas and extracts features from coarse to fine scales. [6] proposes a soft attention-based model that generates semantic region features.

Recently, the non-local (NL) module [9] was proposed to model long-range dependencies within one layer, using a self-attention mechanism to form an attention map. For each position, the NL module first computes the pairwise relations between the current position and all positions, and then aggregates the features of all positions by a weighted sum. The aggregated features are added to the feature of each position to form the output. [10, 11] show that the NL module brings excellent improvement in image recognition. However, for fine-grained classification, multi-scale features are vital because object parts have various sizes and shapes in images [12, 13, 14]. One NL module models the spatial dependencies within only one convolutional layer, in which neurons have fixed-size receptive fields. The mismatch between the sizes of receptive fields and object parts might undermine feature extraction. For example, as shown in Fig. 1 (a), an over-sized receptive field contains more background region and consequently brings more noise. Noise from meaningless background has been shown to be harmful for classification [15, 16].

To address this issue, we propose a novel Cross-layer Non-Local (CNL) module. We first decompose the traditional non-local module into two basic components, a query layer and a response layer. Then, based on the observation that neurons in different layers have receptive fields of various sizes, as shown in Fig. 1 (b), we associate one query layer with several response layers. Consequently, the query layer can learn more discriminative multi-scale features from distant response layers. The explicitly learned correlations among all related layers improve the representation power of the network.

To the best of our knowledge, we are the first to extend the non-local operation to cross-layer feature fusion for fine-grained classification. We show, in theory, that the cross-layer operation eases the computational complexity of associating receptive fields. We evaluate our CNL method on three fine-grained classification datasets, i.e., CUB-200-2011 [17], Stanford Dogs [18] and Stanford Cars [19]. Experimental results show that: (1) as a plug-and-play module, CNL needs less cost than the NL module and other methods, yet surpasses or achieves state-of-the-art performance; (2) CNL can adaptively focus on various parts at multiple scales.

2 Approach

In this section, we first review the original non-local (NL) module. Then, we introduce a general formulation of the NL module and show that the original NL module is a special case of this formulation. After that, we propose our Cross-layer Non-Local (CNL) module for multi-scale receptive-field association.

2.1 Review of Non-local Module

Figure 2: Architecture of the NL module and the proposed CNL module. ⊗ denotes matrix multiplication and ⊕ denotes element-wise addition. The attention maps are labeled by their dimensions. The NL module associates single-scale features within only one layer. Our proposed CNL module associates the query layer with multi-level response layers to model the dependencies of multi-scale features.

For a feature map in the $l$-th convolutional layer, $H^l$, $W^l$ and $C^l$ denote its height, width and number of channels, respectively, and we denote the feature map as $\mathbf{X}^l \in \mathbb{R}^{H^l W^l \times C^l}$ (see notation¹). Each row vector of the feature map can be regarded as a feature vector extracted from the corresponding receptive field. To capture long-range dependencies across the whole feature map $\mathbf{X}^l$, the original non-local operation first uses two learnable transformations, $\theta$ and $\phi$, to project $\mathbf{X}^l$ into an embedding space. Next, the attention map is computed by a pairwise function $f$ in the embedding space; the value of every element in the attention map indicates the affinity of the corresponding feature vectors. Finally, the features at all positions are projected by another learnable transformation $g$. In addition, to reduce computation cost, $\theta$, $\phi$ and $g$ shrink the channels of the input feature, and a $1 \times 1$ convolution $W_z$ restores channel consistency between the input feature $\mathbf{X}^l$ and the processed feature $\mathbf{Y}^l$. The final result is the weighted sum of the embedding features at all positions:

$$\mathbf{Y}^l = f\big(\theta(\mathbf{X}^l),\, \phi(\mathbf{X}^l)\big)\, g(\mathbf{X}^l) \tag{1}$$

There are multiple choices for $f$ suggested in [9]; the dot product is generally chosen to implement $f$, and $\theta$, $\phi$ and $g$ are generally implemented as $1 \times 1$ convolutions.

¹ Bold capital letters denote matrices; all non-bold letters represent scalars. Superscripts of features denote the indices of the corresponding layers.
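To make the data flow concrete, below is a minimal PyTorch sketch of an NL block as described above. The class and attribute names, the choice of halving the channels, and the softmax normalization of the dot-product affinity are our assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn


class NonLocalBlock(nn.Module):
    """Minimal sketch of the NL module [9]: theta/phi/g shrink channels to
    C/2, a softmax-normalized dot product forms the attention map, and w_z
    restores the channels so the result can be added residually."""

    def __init__(self, channels):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.w_z = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)       # (B, HW, C/2)
        k = self.phi(x).flatten(2)                         # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)           # (B, HW, C/2)
        attn = torch.softmax(q @ k, dim=-1)                # (B, HW, HW) attention map
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.w_z(y)                             # residual connection
```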

2.2 Cross-layer Non-local Module

The NL module is a kind of self-attention module [20]. We can denote the input feature by two different symbols in two data flows, according to their functions. One flow is fed only to the $\theta$ convolution to compute the attention map; we denote the feature in this flow as the query feature $\mathbf{X}^{q}$. The other flow is fed not only to $\phi$ for the attention map but also to $g$ for the response results; we denote this feature as the response feature $\mathbf{X}^{r}$. In fact, we can rewrite Eq. (1) in a more general two-variable form:

$$\mathbf{Y} = f\big(\theta(\mathbf{X}^{q}),\, \phi(\mathbf{X}^{r})\big)\, g(\mathbf{X}^{r}) \tag{2}$$

Obviously, the formulation of the original NL module is a special case of this two-variable form, in which the query layer is equal to the response layer, i.e., $\mathbf{X}^{q} = \mathbf{X}^{r} = \mathbf{X}^{l}$.

Deep layers are sensitive to large parts owing to their large receptive fields; shallow layers are the opposite. Based on this fact, we directly compute the affinity between deep features and shallow features. Specifically, we take one deep layer as the query layer and associate it with several shallow response layers. This association enhances our model in two main ways. First, the decision layer can learn discriminative multi-scale features from shallow layers, compensating for deep-layer features that may be corrupted by meaningless background noise. Second, our model can explicitly compute the dependencies between multi-scale image regions.

Now we give the formal definition of our Cross-layer Non-Local (CNL) module. Given $n$ response layers, we denote their indices from shallow to deep as $r_1, \ldots, r_n$. Our CNL operation is expressed as:

$$\mathbf{Y}^{q} = f\big(\theta(\mathbf{X}^{q}),\, \phi(\mathbf{X}^{r})\big)\, g(\mathbf{X}^{r}) \tag{3}$$

where $\mathbf{X}^{r} = \big[\mathbf{X}^{r_1}; \ldots; \mathbf{X}^{r_n}\big]$ concatenates the response features of all $n$ layers along the spatial dimension.
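A hedged PyTorch sketch of Eq. (3) follows: one deep query layer attends over several shallower response layers whose projected keys and values are concatenated along the spatial axis. The per-layer phi/g projections, the shared embedding width, and all names are our assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn


class CrossLayerNonLocal(nn.Module):
    """Sketch of CNL under our reading of Eq. (3): the query layer attends
    over the spatial concatenation of several response layers."""

    def __init__(self, query_channels, response_channels, inter=256):
        super().__init__()
        self.theta = nn.Conv2d(query_channels, inter, kernel_size=1)
        self.phis = nn.ModuleList(nn.Conv2d(c, inter, 1) for c in response_channels)
        self.gs = nn.ModuleList(nn.Conv2d(c, inter, 1) for c in response_channels)
        self.w_z = nn.Conv2d(inter, query_channels, kernel_size=1)

    def forward(self, x_q, x_rs):          # x_q: (B, Cq, Hq, Wq); x_rs: list of maps
        b, _, hq, wq = x_q.shape
        q = self.theta(x_q).flatten(2).transpose(1, 2)            # (B, HqWq, inter)
        # Concatenate projected response features along the spatial axis.
        k = torch.cat([phi(x).flatten(2) for phi, x in zip(self.phis, x_rs)], dim=2)
        v = torch.cat([g(x).flatten(2) for g, x in zip(self.gs, x_rs)], dim=2)
        attn = torch.softmax(q @ k, dim=-1)                       # (B, HqWq, sum_i HiWi)
        y = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(b, -1, hq, wq)
        return x_q + self.w_z(y)                                  # residual on the query

# Illustrative usage on ResNet-like features (channel sizes are assumptions):
# cnl = CrossLayerNonLocal(2048, [512, 1024]); out = cnl(feat5, [feat3, feat4])
```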

The NL module has a well-known quadratic space complexity, which mainly comes from the stored attention map of size $H^l W^l \times H^l W^l$. Obviously, the cost of the NL module is immense when it is inserted into a shallow layer, as needed for low-level vision tasks. As an extreme example, the first convolutional layer of ResNet-50 [19, 9] outputs a $112 \times 112$ feature map for a $224 \times 224$ image; inserting an NL module after that layer requires storing a $12544 \times 12544$ attention map, which is unaffordable. Our CNL module avoids this problem: we associate the shallow features directly in the deep feature space. In other words, we set the query layer to the last convolutional layer of ResNet-50, whose feature map has a $7 \times 7$ resolution. In that case, the attention map shrinks to $49 \times 12544$, a reduction of about 99.6%. Section 3.2 reports controlled experiments between the CNL and NL modules in practical cases.
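The arithmetic behind this claim can be checked directly; a minimal sketch, assuming a 224×224 input to a standard ResNet-50 (conv1 output 112×112, last-stage output 7×7):

```python
# Back-of-the-envelope check of the attention-map sizes discussed above.
hw_shallow = 112 * 112    # 12544 positions in the first conv layer
hw_query = 7 * 7          # 49 positions in the last conv layer

nl_entries = hw_shallow ** 2           # NL at the shallow layer: 12544 x 12544
cnl_entries = hw_query * hw_shallow    # CNL: 49 x 12544

print(f"NL: {nl_entries:,} entries (~{nl_entries * 4 / 2**30:.2f} GiB in fp32)")
print(f"CNL: {cnl_entries:,} entries, {1 - cnl_entries / nl_entries:.1%} smaller")
```

Note that the ratio, $49/12544 = (2/32)^2 \approx 0.39\%$, depends only on the stride between the two layers, not on the input resolution.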

3 Experiments

Dataset #Category #Training #Testing
CUB-200-2011 [17] 200 5,994 5,794
Stanford Dogs [18] 120 12,000 8,580
Stanford Cars [19] 196 8,144 8,041
Table 1: The statistics of fine-grained datasets used in this paper.
Method top1 top5 #param (×10⁷) FLOPs (×10¹⁰)
ResNet50 84.05 96.00 2.39 1.64
+5NL 85.10 96.18 3.13 2.48
+5CNL(ours) 85.64 96.84 2.71 2.03
ResNet101 85.05 96.70 4.29 3.15
+5NL 85.53 96.65 5.03 3.96
+5CNL(ours) 86.73 96.75 4.61 3.52
Table 2: Ablation results. Top-1 and top-5 accuracy (%) on CUB-200-2011.
Figure 3: Visualization of attention maps for different query positions (red blocks) in CUB fine-grained classification. Column 1: the input image. Columns 2 to 6: attention maps from shallow response layers to deep response layers.

3.1 Datasets and training details

We conduct experiments on three challenging fine-grained image classification datasets, i.e., CUB-200-2011 [17], Stanford Dogs [18] and Stanford Cars [19]. The detailed statistics, with category numbers and data splits, are summarized in Table 1. In our experiments, the input images are resized to 448×448 for both training and testing. We train on each dataset for 110 epochs with the gradual warmup strategy [22] applied in the first ten epochs. The batch size is set to 64 and the base learning rate to 0.01, with decays of 0.9 at three scheduled epochs. Our method is implemented in PyTorch and runs on one RTX 8000 GPU. We adopt the ResNet series [19] (ResNet-50 and ResNet-101) as our candidate baselines, initializing the weights with models pretrained on ImageNet [21]. To keep the same setting as [9], we initialize the weight and bias of the BatchNorm (BN) layer in both NL and CNL blocks to zero [22], as sketched below.
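A hedged sketch of this initialization: the output projection w_z is followed by a BatchNorm whose weight and bias start at zero, so a freshly inserted NL/CNL block initially behaves as an identity mapping [9, 22]. The `w_z` naming and channel sizes follow our earlier sketches, not the authors' code.

```python
import torch.nn as nn

# Output projection of an NL/CNL block followed by a zero-initialized BN.
w_z = nn.Sequential(nn.Conv2d(256, 1024, kernel_size=1), nn.BatchNorm2d(1024))
nn.init.zeros_(w_z[1].weight)   # BN gamma = 0 -> block output starts at zero
nn.init.zeros_(w_z[1].bias)     # BN beta = 0
```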

(a) CUB-200-2011
Method top1
Part-RCNN [23] 81.6
PN-CNN [24] 85.4
DVAN [12] 79.0
B-CNN [25] 84.1
PDFR [4] 84.5
RACNN [5] 85.3
RAM [26] 86.0
DeepLAC [27] 80.3
PA-CNN [28] 82.8
SPDA-CNN [29] 85.1
NAC [3] 81.0
FCAN [30] 84.7
MACNN [31] 86.5
MAMC [6] 86.5
Ours (ResNet101+5CNL) 86.7

(b) Stanford Dogs
Method top1
PDFR [4] 72.0
DVAN [12] 81.5
RACNN [5] 87.3
NAC [3] 68.6
FCAN [30] 84.2
MAMC [6] 85.2
Ours (ResNet101+5CNL) 85.2

(c) Stanford Cars
Method top1
DVAN [12] 87.1
B-CNN [25] 91.3
RACNN [5] 92.5
PA-CNN [28] 92.8
FCAN [30] 89.1
MAMC [6] 93.0
Ours (ResNet101+5CNL) 93.1
Table 3: Comparison results (top-1 accuracy, %) on the CUB-200-2011, Stanford Dogs and Stanford Cars datasets. Methods are grouped by whether they use extra annotation (bounding box or part) in training and whether training can be done in one stage.

To investigate the receptive-field association ability, we insert 5 modules into different layers. Specifically, following the 5-block setting of [9], we insert 2 NL modules into res3 and 3 NL modules into res4 at regular intervals. For the CNL module, we keep the same layers as response layers, but set the last convolutional layer of the network as the query layer, as sketched below.
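A sketch of this insertion scheme on a torchvision ResNet-50, reusing the NonLocalBlock sketch from Sec. 2.1. The exact block indices ("every other residual block") follow the 5-block setting of [9] under our reading, and res3/res4 correspond to torchvision's layer2/layer3; the CNL wiring, which routes multi-stage response features to one deep query layer, is omitted here.

```python
import torch.nn as nn
from torchvision.models import resnet50


def insert_after(stage, indices, make_block):
    """Insert an attention block after the residual blocks at `indices`."""
    blocks = list(stage.children())
    for i in sorted(indices, reverse=True):   # back-to-front keeps indices valid
        blocks.insert(i + 1, make_block(blocks[i].conv3.out_channels))
    return nn.Sequential(*blocks)


net = resnet50(pretrained=True)
# 2 NL modules in res3 (torchvision layer2) and 3 in res4 (layer3),
# placed after every other residual block.
net.layer2 = insert_after(net.layer2, [0, 2], NonLocalBlock)
net.layer3 = insert_after(net.layer3, [0, 2, 4], NonLocalBlock)
```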

3.2 Ablations

We first conduct ablation experiments on CUB-200-2011. Table 2 shows the quantitative results of networks in different configurations. For ResNet-50, the NL modules offer a 1.05% top-1 accuracy improvement over the baseline (85.10% vs. 84.05%). With the CNL module, the model boosts accuracy by a further 0.54% over the NL model. Similar results can be seen in the ablations for ResNet-101. Benefiting from association in the deep feature space, the CNL module has less computation cost than the NL module: ResNet-101 with CNL uses 0.42×10⁷ fewer parameters than the version using NL modules (4.61 vs. 5.03).

3.3 Comparison with State-of-the-Art

We compare several state-of-the-art methods with ours in Table 3 on these three datasets. For an intuitive comparison, we group the methods according to whether they use extra annotation and whether they use multiple training stages. Our CNL can be trained end-to-end in one stage and does not need any expensive part annotations.

In general, our method achieves the best score within all its groups. On CUB and Stanford Cars, our method beats the second-best method, MAMC [6], by 0.2% and 0.1% top-1 accuracy, respectively. It is worth noting that our method needs only 11.7% additional FLOPs, whereas MAMC needs 23.4% additional computation according to its paper. On Stanford Dogs, our method achieves the second-best performance. Although RACNN achieves an excellent result on that dataset, it requires multiple training stages and is difficult to train end-to-end. Our CNL is easy to implement, needs little extra cost, and brings significant improvement.

3.4 Visualization with Attention Maps

To understand the effects of CNL, we visualize its attention maps in Fig. 3. The brighter a pixel is, the more attention CNL pays to that area. Two observations can be made: (1) our network focuses on meaningful semantic parts, e.g., the sky, edges, and the eyes and body of birds; (2) our network adaptively focuses on different parts in different response layers. For example, in the second row, the query position is on the eye of the bird. In the second attention map, the eye position receives less attention than the body positions; however, in the fourth attention map, the eye position receives the highest attention, which further validates the effectiveness of our method.
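For reference, a small sketch of how such a heatmap can be produced, assuming the CNL forward pass is modified to also return the attention map `attn` of shape (B, HqWq, Σᵢ HᵢWᵢ) from our earlier sketch; the function name and arguments are illustrative.

```python
import torch.nn.functional as F


def attention_heatmap(attn, query_idx, h_r, w_r, offset=0, out_size=(448, 448)):
    """Select the attention row for one query position, cut out the slice
    belonging to one response layer of size (h_r, w_r), and upsample it
    to image resolution for overlay."""
    row = attn[0, query_idx, offset:offset + h_r * w_r]            # (HrWr,)
    heat = row.reshape(1, 1, h_r, w_r)
    heat = F.interpolate(heat, size=out_size, mode="bilinear", align_corners=False)
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # normalize to [0, 1]
    return heat[0, 0]                                              # (H, W) heatmap
```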

4 Conclusion

In this paper, we have introduced a general formulation of the Non-Local (NL) module and proposed a novel Cross-layer Non-Local (CNL) module for fine-grained image recognition. Our model learns more discriminative features and adaptively focuses on multi-scale image parts by associating cross-layer features. Additionally, our CNL module eases the heavy association cost of the original NL operation. Our model needs less computational cost than other modules, yet produces competitive results and surpasses state-of-the-art results.

References

  • [1] Zhe Xu, Shaoli Huang, Ya Zhang, and Dacheng Tao, “Augmenting strong supervision using web data for fine-grained categorization,” in CVPR, 2015.
  • [2] Tianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng, and Zheng Zhang, “The application of two-level attention models in deep convolutional neural network for fine-grained image classification,” in CVPR, 2015.
  • [3] Marcel Simon and Erik Rodner, “Neural activation constellations: Unsupervised part model discovery with convolutional networks,” in CVPR, 2015.
  • [4] Xiaopeng Zhang, Hongkai Xiong, Wengang Zhou, Weiyao Lin, and Qi Tian, “Picking deep filter responses for fine-grained image recognition,” in CVPR, 2016.
  • [5] Jianlong Fu, Heliang Zheng, and Tao Mei, “Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition,” in CVPR, 2017.
  • [6] Ming Sun, Yuchen Yuan, Feng Zhou, and Errui Ding, “Multi-attention multi-class constraint for fine-grained image recognition,” in ECCV, 2018.
  • [7] Zihan Ye, Fan Lyu, Linyan Li, Yu Sun, Qiming Fu, and Fuyuan Hu, “Unsupervised object transfiguration with attention,” Cognitive Computation, 2019.
  • [8] Zihan Ye, Fan Lyu, Jinchang Ren, Yu Sun, Qiming Fu, and Fuyuan Hu, “Dau-gan: Unsupervised object transfiguration via deep attention unit,” pp. 120–129, 2018.
  • [9] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He, “Non-local neural networks,” in CVPR, 2018.
  • [10] Kaiyu Yue, Ming Sun, Yuchen Yuan, Feng Zhou, Errui Ding, and Fuxin Xu, “Compact generalized non-local network,” in NeurIPS, 2018.
  • [11] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, and Han Hu, “Gcnet: Non-local networks meet squeeze-excitation networks and beyond,” in CVPR Workshops, 2019.
  • [12] Bo Zhao, Xiao Wu, Jiashi Feng, Qiang Peng, and Shuicheng Yan, “Diversified visual attention networks for fine-grained object classification,” TMM, 2017.
  • [13] Wei Luo, Xitong Yang, Xianjie Mo, Yuheng Lu, Larry S Davis, Jun Li, Jian Yang, and Ser-Nam Lim, “Cross-x learning for fine-grained visual categorization,” in ICCV, 2019.
  • [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” TPAMI, 2015.
  • [15] Shanshan Zhang, Jian Yang, and Bernt Schiele, “Occluded pedestrian detection through guided attention in cnns,” in CVPR, 2018.
  • [16] Yong Luo, Tongliang Liu, Dacheng Tao, and Chao Xu, “Multiview matrix completion for multilabel image classification,” TIP, 2015.
  • [17] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie, “The caltech-ucsd birds-200-2011 dataset,” 2011.
  • [18] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Fei-Fei Li, “Novel dataset for fine-grained image categorization: Stanford dogs,” in CVPR Workshops, 2011.
  • [19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in CVPR, 2016.
  • [20] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin, “Attention is all you need,” in NeurIPS, 2017.
  • [21] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in CVPR, 2009.
  • [22] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He, “Accurate, large minibatch sgd: Training imagenet in 1 hour,” arXiv:1706.02677, 2017.
  • [23] Ning Zhang, Jeff Donahue, Ross Girshick, and Trevor Darrell, “Part-based r-cnns for fine-grained category detection,” in ECCV, 2014.
  • [24] Steve Branson, Grant Van Horn, Serge Belongie, and Pietro Perona, “Bird species categorization using pose normalized deep convolutional nets,” arXiv:1406.2952, 2014.
  • [25] Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji, “Bilinear cnn models for fine-grained visual recognition,” in ICCV, 2015.
  • [26] Zhichao Li, Yi Yang, Xiao Liu, Feng Zhou, Shilei Wen, and Wei Xu, “Dynamic computational time for visual attention,” in ICCV Workshops, 2017.
  • [27] Di Lin, Xiaoyong Shen, Cewu Lu, and Jiaya Jia, “Deep lac: Deep localization, alignment and classification for fine-grained recognition,” in CVPR, 2015.
  • [28] Jonathan Krause, Hailin Jin, Jianchao Yang, and Li Fei-Fei, “Fine-grained recognition without part annotations,” in CVPR, 2015.
  • [29] Han Zhang, Tao Xu, Mohamed Elhoseiny, Xiaolei Huang, Shaoting Zhang, Ahmed Elgammal, and Dimitris Metaxas, “Spda-cnn: Unifying semantic part detection and abstraction for fine-grained recognition,” in CVPR, 2016.
  • [30] Xiao Liu, Tian Xia, Jiang Wang, Yi Yang, Feng Zhou, and Yuanqing Lin, “Fully convolutional attention networks for fine-grained recognition,” arXiv:1603.06765, 2016.
  • [31] Heliang Zheng, Jianlong Fu, Tao Mei, and Jiebo Luo, “Learning multi-attention convolutional neural network for fine-grained image recognition,” in ICCV, 2017.