
Three-branch and Multi-scale learning for Fine-grained Image Recognition (TBMSL-Net)

by   Fan Zhang, et al.

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been one of the most authoritative academic competitions in the field of Computer Vision (CV) in recent years, but directly migrating the annual champion models to fine-grained visual categorization (FGVC) tasks does not achieve good results. The small inter-class variations and large intra-class variations caused by the fine-grained nature make it a challenging problem. Our proposed method can effectively localize the object and useful part regions, without the need for bounding-box or part annotations, through an attention object location module (AOLM) and an attention part proposal module (APPM). The obtained object images contain both the whole structure and finer details, the part images cover many different scales and carry more fine-grained features, and the raw images contain the complete object. These three kinds of training images are supervised by our three-branch network structure. The resulting model has strong classification ability and good generalization and robustness for object images of different scales. Our approach is trained end-to-end, and comprehensive experiments demonstrate that it achieves state-of-the-art results on the CUB-200-2011, Stanford Cars, and FGVC-Aircraft datasets.



1 Introduction

Figure 1:

An overview of our proposed TBMSL-Net. The red branch is the raw branch, the orange branch is the object branch, and the blue branch is the part branch. CNN (Convolutional Neural Network) and FC (Fully Connected) layers of the same color share parameters. All three branches use cross-entropy loss as the classification loss.

Is the dog a Husky or an Alaskan Malamute? People often argue over animals that share similar general characteristics. The FGVC direction of CV research focuses on exactly such issues; FGVC is also called sub-category recognition. In recent years it has become a very popular research topic in CV, pattern recognition, and other fields. Its purpose is to make a more detailed subclass division within a coarse-grained category (e.g., bird species).

In the past few years, the benchmark accuracy on open datasets has been steadily improved by deep-learning-based fine-grained classification methods, which can be grouped as follows: 1) end-to-end feature encoding; 2) localization-classification subnetworks. The first kind of method directly learns a more discriminative feature representation by developing powerful deep models for fine-grained recognition. The most representative among them is [8], which proposes bilinear models: a recognition architecture consisting of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor. It achieves clear performance improvements on a wide range of visual tasks. However, bilinear feature dimensions are high, usually on the order of hundreds of thousands to millions, which limits further generalization; [9] and [10] are improvements in this direction. In the second kind of method, a localization subnetwork locates key part regions that are beneficial to classification, through supervised or weakly supervised means, and a classification subnetwork then uses the fine-grained region information captured by the localization subnetwork to further enhance its learning capability. This design significantly boosts the final recognition accuracy. [11], [12], [13], etc. are fully supervised methods requiring annotations beyond image-level labels. [13] trained a region proposal network, with the help of part annotations, to propose multiple informative image parts, and then fused all the proposed part-region features for fine-grained recognition. However, maintaining such dense part annotations is labor-intensive, which limits both the scalability and the practicality of real-world fine-grained applications. Therefore, [14], [15], [16], and [17] use only image-level annotations to avoid these problems.
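To make the dimensionality point concrete, here is a minimal NumPy sketch of the bilinear pooling idea in [8]; the function name and the signed-square-root/l2 normalization choices are our illustration, not code from the paper.

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling sketch: fa has shape (Ca, L) and fb shape (Cb, L),
    where L is the number of spatial locations. The outer products at each
    location are sum-pooled into one (Ca*Cb)-dimensional descriptor."""
    desc = (fa @ fb.T).flatten()                  # sum of per-location outer products
    desc = np.sign(desc) * np.sqrt(np.abs(desc))  # signed square root
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc            # l2 normalization
```

With two 512-channel extractors the descriptor already has 512 × 512 = 262,144 dimensions, which is why bilinear features reach the hundreds of thousands to millions of dimensions mentioned above.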

An overview of the proposed TBMSL-Net is shown in Fig. 1. Our method has three branches in the training phase. Through the raw branch, the model mainly learns the overall characteristics of the object, and AOLM obtains the object's bounding-box information from the feature maps of the raw image output by this branch. As the input to the object branch, the object image is very helpful for classification, because at its scale it contains the structural features of the target as well as fine-grained features. APPM then finds the locations of several part regions that are highly discriminative yet have little redundancy with each other, according to the feature maps of the object image. The part branch sends these finer-scale part images, cropped from the object image, into the network for training, allowing the network to learn fine-grained features of an object's different part regions at different scales. Unlike [15], the parameters of the CNN and FC layers in our three branches are shared. Therefore, through the joint learning of the three branches, the trained model has good classification ability for different scales and parts of an object. In the testing phase, unlike [16] and [15], we do not need to compute feature vectors for multiple part-region images and concatenate them before classification. After repeated experiments, we found that the best input for classification is the object image obtained through AOLM. Our method only needs the model to predict on the object image, which reduces computation while achieving good accuracy.

Our main contributions can be summarized as follows: 1. Our three-branch network framework can be trained end-to-end and efficiently learns the object's discriminative regions for recognition. 2. Our AOLM does not increase the number of network parameters, so we do not need to train a proposal network as in [15]; accurate object location is achieved using only category labels under weak supervision. 3. We present an APPM method that, without part annotations, selects multiple ordered discriminative part images, so that the model can efficiently learn the different scales of the target and the fine-grained features of its parts. 4. State-of-the-art performance is reported on three standard benchmark datasets, where our method stably outperforms existing methods.

2 Our Approach

Attention Object Location Module (AOLM). This method was inspired by SCDA [18], and we then improve its localization performance as much as possible through a series of measures. First, we describe the process of generating object location coordinates from the CNN feature maps, as Fig. 2 illustrates. We use F ∈ R^{C×H×W} to represent the feature maps, with C channels and spatial size H×W, output by the last convolutional layer for an input image X. As shown in Equ 1,

A = Σ_{c=1}^{C} F_c

the activation map A can be obtained by aggregating the feature maps F along the channel dimension. It visualizes where the deep neural network focuses for recognition and locates the object regions. ā, the mean value of A, is used as the threshold to determine whether the element at a given position (x, y) of A is part of the object, where (x, y) is a particular position in the H×W activation map. As represented in Equ 2,

M̃(x, y) = 1 if A(x, y) > ā, and 0 otherwise.

Thus, we initially obtain a coarse mask M̃ from the last convolutional layer of ResNet-50 [4]. According to the experimental results, we find that the object usually lies in the largest connected component of M̃, so the smallest bounding box containing that connected area is used as our object location result. SCDA [18] achieved good localization accuracy using only a pre-trained VGG16 [3], but with a pre-trained ResNet-50 [4] the accuracy does not reach a similar level and drops significantly. So we train ResNet-50 [4] on the training set to improve object location accuracy. Then, inspired by [19] and SCDA [18], whose methods all benefit from an ensemble of multiple layers, we also compute, according to Equ 1, the activation map A' of the output of the convolutional block one block before the last. From A' we obtain a mask M̃' according to Equ 2, and finally a more accurate mask M is obtained by taking the intersection of M̃ and M̃'. After that, M is resized to the size of the input image and overlaid onto the input image X. Visualized images (shown in Fig. 4) and experimental results prove the effectiveness of these measures in improving object location accuracy. Our weakly supervised object location method achieves higher location accuracy than ACOL [20], ADL [21], etc. without adding trainable parameters; we demonstrate its performance in the next section.
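The AOLM steps above (Equ 1 aggregation, Equ 2 mean thresholding, largest connected component) can be sketched as follows; this is our own minimal re-implementation for illustration, not the authors' code, and it uses a simple BFS in place of a library connected-components routine.

```python
import numpy as np
from collections import deque

def aolm_bbox(F):
    """AOLM sketch. F: feature maps of shape (C, H, W) from a conv layer.
    Returns (y0, x0, y1, x1): the tightest box around the largest
    connected component of the mean-thresholded activation map."""
    A = F.sum(axis=0)            # Equ 1: aggregate channels into activation map A
    mask = A > A.mean()          # Equ 2: threshold at the mean activation
    H, W = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = []
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:         # BFS over a 4-connected component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    if not best:                 # degenerate case: no pixel above the mean
        return (0, 0, H, W)
    ys = [p[0] for p in best]
    xs = [p[1] for p in best]
    return (min(ys), min(xs), max(ys) + 1, max(xs) + 1)
```

In the full method the same routine runs on two layers' activation maps and the masks are intersected before extracting the box.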

Figure 2: Pipeline of the AOLM.

Attention Part Proposal Module (APPM). By observing the activation map A, we find that the higher the activation value (shown in Fig. 3), the more often the area contains a discriminative part, such as the head region in the example. We use the sliding-window idea from object detection to find informative windows as part images and, like Overfeat [22], implement the traditional sliding-window approach with a fully convolutional network, so that only the raw image needs to pass through the CNN once, reducing computation. For each window, the mean activation value ā_w is obtained by aggregating, in the channel dimension, the feature map of the corresponding window. As shown in Equ 3,

ā_w = (1 / (H_w · W_w)) Σ_{x=1}^{H_w} Σ_{y=1}^{W_w} A_w(x, y)

where H_w and W_w are the height and width of the feature map corresponding to the window, and A_w is the window's region of the activation map. We sort all windows by their ā_w values: the larger ā_w is, the more informative the part region is. However, we cannot simply select the first few windows, because they are often adjacent to the highest-scoring window and contain the same part, while we hope to cover as many different parts as possible with a fixed number of selected windows. To reduce region redundancy, we adopt non-maximum suppression (NMS) to select a fixed number of windows as part images for each window scale and ratio. By visualizing the output of this module in Fig. 4, it can be seen that the method proposes ordered part regions of differing importance.
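The scoring-plus-NMS procedure can be sketched as below; the window size, part count, and IoU threshold here are illustrative values of ours, not the paper's settings.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (y0, x0, y1, x1)."""
    iy0, ix0 = max(a[0], b[0]), max(a[1], b[1])
    iy1, ix1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, iy1 - iy0) * max(0, ix1 - ix0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def appm_propose(A, win=(2, 2), n_parts=2, iou_thresh=0.25):
    """APPM sketch: score every sliding window of A by its mean
    activation (Equ 3), then keep the top n_parts windows via NMS."""
    H, W = A.shape
    h, w = win
    wins = [(A[y:y+h, x:x+w].mean(), (y, x, y + h, x + w))
            for y in range(H - h + 1) for x in range(W - w + 1)]
    wins.sort(key=lambda s: -s[0])          # most informative first
    kept = []
    for score, box in wins:                 # non-maximum suppression
        if all(iou(box, k) <= iou_thresh for k in kept):
            kept.append(box)
        if len(kept) == n_parts:
            break
    return kept
```

Running NMS per scale, as the paper does, amounts to calling this routine once for each window shape.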

Figure 3: Simple pipeline of the APPM.

Architecture of TBMSL-Net. To make the model learn fully and efficiently from the images obtained through AOLM and APPM, during the training phase we construct a three-branch network structure consisting of a raw branch, an object branch, and a part branch, as shown in Fig. 1. The three branches share a CNN for feature extraction and an FC layer for classification, and all three use cross-entropy loss as the classification loss. As shown in Equ 4, 5, and 6, respectively:

L_raw = −log(P_r(c)),  L_object = −log(P_o(c)),  L_parts = −Σ_{n=1}^{N} log(P_p^{(n)}(c))

where c is the ground-truth label of the input image; P_r and P_o are the category probabilities output by the last softmax layer of the raw branch and the object branch, respectively; P_p^{(n)} is the output of the part branch's softmax layer for the nth part image; and N is the number of part images. The total loss is defined as:

L_total = α · L_raw + β · L_object + γ · L_parts

where α, β, and γ are hyper-parameters. The total loss is the sum of the losses of the three branches, which work together to optimize the model during backpropagation. It enables the final converged model to make classification predictions based on the overall structural characteristics of the object or the fine-grained characteristics of a part. The model has good adaptability to object scale, which improves robustness when AOLM's localization is inaccurate. During the testing phase, we remove the part branch to save a large amount of computation, so our method does not take too long to predict in practical applications. Owing to this reasonable and efficient framework, our method achieves the best performance so far.
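The combined loss can be written directly as code; this is a plain-Python sketch of the three branch losses and their weighted sum, with function names of our choosing.

```python
import math

def cross_entropy(probs, label):
    """-log p(label) for one softmax probability vector."""
    return -math.log(probs[label])

def total_loss(p_raw, p_obj, p_parts, label, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the raw-branch and object-branch cross-entropy
    losses plus the summed cross-entropy over the N part images.
    alpha, beta, gamma are the branch-weighting hyper-parameters."""
    l_raw = cross_entropy(p_raw, label)
    l_obj = cross_entropy(p_obj, label)
    l_parts = sum(cross_entropy(p, label) for p in p_parts)
    return alpha * l_raw + beta * l_obj + gamma * l_parts
```

In a PyTorch implementation each term would be an `nn.CrossEntropyLoss` on the corresponding branch's logits, summed before the backward pass.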

3 Experiments

Datasets. We comprehensively evaluate the performance of our algorithm on the bird dataset CUB-200-2011 (CUB), the vehicle dataset Stanford Cars (CAR), and the aircraft dataset FGVC-Aircraft (AIR). These three datasets are widely used as benchmarks for fine-grained classification (shown in Table 1). We use only the image classification labels provided by these datasets.

Datasets Class Train Test
CUB-200-2011 200 5994 5794
Stanford Cars 196 8144 8041
FGVC-Aircraft 100 6667 3333
Table 1: Introduction of the three datasets used in this paper.

Implementation Details. In all our experiments, we first resize images to 448×448 as the input for the raw branch; the object image is also scaled to 448×448, but all part images are resized to 224×224. α, β, and γ are all set to 1. We construct windows of three broad categories of scales (with two, two, and four window shapes, respectively), and the number of part images per raw image, N, is the sum of the part images selected at each scale. ResNet-50 [4] pre-trained on ImageNet is used as the backbone of our network. During training and testing, we use no annotations other than image-level labels. Our optimizer is SGD with momentum 0.9 and weight decay 0.0001, and a mini-batch size of 6 on a Tesla V100 GPU. The initial learning rate is 0.001 and is multiplied by 0.1 after 60 epochs. We use PyTorch as our code base.

Performance Comparison. We compare against the baseline methods mentioned above on three commonly used fine-grained classification datasets. The experimental results are shown in Table 2; our method achieves the best accuracy currently available on all three datasets.

Methods CUB AIR CAR
Bilinear-CNN[8] 84.1 84.1 91.3
KP[10] 86.2 86.9 92.4
RA-CNN[15] 85.3 - 92.5
MA-CNN[14] 86.5 89.9 92.8
OSME+MAMC[23] 86.5 - 93.0
PC[24] 86.9 89.2 92.9
DFL-CNN[25] 87.4 92.0 93.8
HSnet[13] 87.5 - -
NTS-Net[16] 87.5 91.4 93.9
DCL[26] 87.8 92.2 94.5
TASN[17] 87.9 - 93.8
Ours 89.6 94.5 94.7
Table 2: Comparison of accuracy (%) on the three common datasets.

Ablation Studies. The ablation study is performed on the CUB dataset. Without any of our proposed modules, ResNet-50 [4] obtains an accuracy of 84.5% at an input resolution of 448×448. To verify the rationality of our three-branch training structure, we remove the object branch and the part branch in turn. After removing the object branch, the best accuracy among the remaining branches comes from the raw branch, at 85.0%, a drop of 4.6%. After removing the part branch, the best accuracy among the remaining branches comes from the object branch, at 87.3%, a significant drop of 2.3%. These experiments show that all three branches contribute significantly to the final accuracy.

Object Localization Performance. Under the Percentage of Correctly Localized Parts (PCP) metric, a localization counts as correct when the predicted bounding box has more than 50% IoU with the ground truth. On the CUB dataset, our best object localization result under the PCP metric is 85.1%. AOLM clearly exceeds the recent weakly supervised object location methods ACOL (46.0%) and ADL (62.3%). As shown in Table 3, the ensemble of multiple layers significantly improves object location accuracy. Through the experiment, we found that directly using the pre-trained model gives 65.0% location accuracy, rising to 85.1% after one epoch of training; as training progresses, the CNN pays more and more attention to the most discriminative regions to improve classification accuracy, so localization accuracy decays to 71.1%. However, because the model adapts well to object scale, it still achieves excellent classification results.

Epoch  Single layer  Ensemble of two layers
0      59.9%         65.0%
1      82.2%         85.1%
final  68.0%         71.1%
Table 3: Object localization accuracy on CUB-200-2011.
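The PCP metric above reduces to an IoU test per image; a minimal sketch (our helper names, boxes as corner coordinates) looks like this:

```python
def box_iou(a, b):
    """IoU of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def pcp(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of images whose predicted box overlaps the
    ground-truth box with IoU above the threshold."""
    hits = sum(box_iou(p, g) > thresh for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(gt_boxes)
```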

Visualization of Object and Part Regions. To visually analyze the regions of interest of AOLM and APPM, we draw the object bounding boxes and part regions produced by AOLM and APPM in Fig. 4. In the first column, we use red and green rectangles to denote the ground truth and the bounding box predicted by AOLM in the raw image. It can be clearly seen that the localized area usually covers an almost complete object, which is very helpful for classification. In columns two through four, we use red, orange, yellow, and green rectangles to denote the top informative regions at different scales proposed by APPM. The proposed areas do contain more fine-grained information, and their ordering within the same scale is reasonable, which helps the model's robustness to scale. We can see that the most discriminative region of a bird is first the head and then the body, which is similar to human cognition.

Figure 4: Visualization of object and part regions.

4 Conclusion

In this paper, we propose a powerful method for fine-grained classification that needs no bounding-box or part annotations. The three-branch structure makes full use of the images obtained by AOLM and APPM to train a classification model with excellent performance. Our algorithm is end-to-end trainable and achieves state-of-the-art results on the CUB-200-2011, FGVC-Aircraft, and Stanford Cars datasets. Future work includes setting the number and size of windows adaptively to further improve classification accuracy.


  • [1] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” IJCV, vol. 115, no. 3, pp. 211–252, 2015.
  • [2] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012, pp. 1097–1105.
  • [3] Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in CVPR, 2016, pp. 770–778.
  • [5] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The Caltech-UCSD Birds-200-2011 Dataset,” Tech. Rep., 2011.
  • [6] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei, “3d object representations for fine-grained categorization,” in 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
  • [7] S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi, “Fine-grained visual classification of aircraft,” Tech. Rep., 2013.
  • [8] Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji, “Bilinear cnn models for fine-grained visual recognition,” in CVPR, 2015, pp. 1449–1457.
  • [9] Yang Gao, Oscar Beijbom, Ning Zhang, and Trevor Darrell, “Compact bilinear pooling,” in CVPR, 2016, pp. 317–326.
  • [10] Yin Cui, Feng Zhou, Jiang Wang, Xiao Liu, Yuanqing Lin, and Serge Belongie, “Kernel pooling for convolutional neural networks,” in CVPR, 2017, pp. 2921–2930.
  • [11] Ning Zhang, Jeff Donahue, Ross Girshick, and Trevor Darrell, “Part-based r-cnns for fine-grained category detection,” in ECCV. Springer, 2014, pp. 834–849.
  • [12] Steve Branson, Grant Van Horn, Serge Belongie, and Pietro Perona, “Bird species categorization using pose normalized deep convolutional nets,” arXiv preprint arXiv:1406.2952, 2014.
  • [13] Michael Lam, Behrooz Mahasseni, and Sinisa Todorovic, “Fine-grained recognition as hsnet search for informative image parts,” in CVPR, 2017, pp. 2520–2529.
  • [14] Heliang Zheng, Jianlong Fu, Tao Mei, and Jiebo Luo, “Learning multi-attention convolutional neural network for fine-grained image recognition,” in ICCV, 2017, pp. 5209–5217.
  • [15] Jianlong Fu, Heliang Zheng, and Tao Mei, “Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition,” in CVPR, 2017, pp. 4438–4446.
  • [16] Ze Yang, Tiange Luo, Dong Wang, Zhiqiang Hu, Jun Gao, and Liwei Wang, “Learning to navigate for fine-grained classification,” in ECCV, 2018, pp. 420–435.
  • [17] Heliang Zheng, Jianlong Fu, Zheng-Jun Zha, and Jiebo Luo, “Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition,” in CVPR, 2019, pp. 5012–5021.
  • [18] Xiu-Shen Wei, Jian-Hao Luo, Jianxin Wu, and Zhi-Hua Zhou, “Selective convolutional descriptor aggregation for fine-grained image retrieval,” IEEE T IMAGE PROCESS, vol. 26, no. 6, pp. 2868–2881, 2017.
  • [19] Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015, pp. 3431–3440.
  • [20] Xiaolin Zhang, Yunchao Wei, Jiashi Feng, Yi Yang, and Thomas S Huang, “Adversarial complementary learning for weakly supervised object localization,” in CVPR, 2018, pp. 1325–1334.
  • [21] Junsuk Choe and Hyunjung Shim, “Attention-based dropout layer for weakly supervised object localization,” in CVPR, 2019, pp. 2219–2228.
  • [22] Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun, “Overfeat: Integrated recognition, localization and detection using convolutional networks,” arXiv preprint arXiv:1312.6229, 2013.
  • [23] Ming Sun, Yuchen Yuan, Feng Zhou, and Errui Ding, “Multi-attention multi-class constraint for fine-grained image recognition,” in ECCV, 2018, pp. 805–821.
  • [24] Abhimanyu Dubey, Otkrist Gupta, Pei Guo, Ramesh Raskar, Ryan Farrell, and Nikhil Naik, “Pairwise confusion for fine-grained visual classification,” in ECCV, 2018, pp. 70–86.
  • [25] Yaming Wang, Vlad I Morariu, and Larry S Davis, “Learning a discriminative filter bank within a cnn for fine-grained recognition,” in CVPR, 2018, pp. 4148–4157.
  • [26] Sue Chen, Yalong Bai, Wei Zhang, and Tao Mei, “Destruction and construction learning for fine-grained image recognition,” in CVPR, 2019, pp. 5157–5166.