Weakly Supervised Fine-Grained Image Categorization

04/20/2015 ∙ by Yu Zhang, et al. ∙ Nanjing University ∙ University of Illinois at Urbana-Champaign ∙ Nanyang Technological University

In this paper, we categorize fine-grained images without using any object / part annotation in either the training or the testing stage, a step towards making fine-grained categorization suitable for real deployment. Fine-grained image categorization aims to classify objects with subtle distinctions. Most existing works heavily rely on object / part detectors to build the correspondence between object parts, using object or object part annotations of training images. The need for expensive object annotations prevents the wide usage of these methods. Instead, we propose to select useful parts from multi-scale part proposals of objects and use them to compute a global image representation for categorization. This is specially designed for the annotation-free fine-grained categorization task, because useful parts have been shown to play an important role in existing annotation-dependent works, while accurate part detectors can hardly be acquired without annotations. With the proposed image representation, we can further detect and visualize the key (most discriminative) parts of objects of different classes. In the experiments, the proposed annotation-free method achieves better accuracy than state-of-the-art annotation-free methods and most existing annotation-dependent methods on two challenging datasets, which shows that it is not always necessary to use accurate object / part annotations in fine-grained image categorization.




1 Introduction

Fine-grained image categorization has become popular during the past few years. Different from traditional image recognition such as scene or object recognition, fine-grained categorization deals with images with subtle distinctions; it usually involves classifying subclasses of objects belonging to the same class, such as birds [26, 1], dogs [16], planes [25], and plants [19, 23]. Therefore, it requires methods that are more discriminative than those for traditional image classification.

One important common feature of existing fine-grained methods is that they explicitly use annotations of the object or even object parts to depict an object as precisely as possible. Most of them heavily rely on object / part detectors to find the part correspondence among objects.

For example, in [8, 33], the poselet [2] is used to detect object parts. Each object is then represented with a bag of poselets, so that suitable matchings among poselets (parts) can be found between two objects. Instead of using poselets, [34] used deformable part models (DPM) [9] for object part detection. The DPM is learned from the annotated object parts in training images and is then applied to testing objects to detect their parts. Some works [10, 13] transfer the part annotations from objects in training images to those with similar shapes in testing images, instead of applying object / part detectors. Instead of seeking precise part localization, [10] also provided an unsupervised object alignment technique, which roughly aligns objects and divides them into corresponding parts along certain directions; it achieves better results than the label transfer method. Recently, [32] proposed to use object and part detectors with powerful CNN feature representations [6], which achieves state-of-the-art results on the Caltech-UCSD Birds (CUB) 200-2011 [26] dataset. The geometric relation between an object and its parts is considered in [32]. [35] also shows that part-based models with CNN features are able to capture subtle distinctions among objects. Some other works [5, 27] recognize fine-grained images with a human in the loop.

To achieve accurate part detection, most existing works require annotated bounding boxes for objects in both the training and testing stages. As pointed out in [32], such a requirement is not realistic for practical usage. Thus, a few works [32, 6] have looked into a more realistic setup, i.e., utilizing bounding boxes in the training stage but not in the testing stage. However, even with such a setup, wide deployment of these methods remains difficult, since the accurate object annotations needed in the training stage are usually expensive to acquire, especially for large-scale image classification problems. Freeing us from the dependence on detailed manual annotations is thus an interesting research problem in fine-grained image categorization. [28] has shown promising results without using manual annotations, attempting to detect accurate objects and parts with complex deep learning models for fine-grained recognition.

In this paper, we aim to categorize fine-grained images with only category labels, without any bounding box annotation in either the training or the testing stage, and without degrading categorization accuracy. This is a big step towards making fine-grained image categorization suitable for wide deployment. In existing annotation-dependent works, representative parts such as the head and body of birds [32] have been shown to play the key role in capturing the subtle differences among fine-grained images. This differs from general image recognition, which usually uses a holistic image representation. In this paper, we select the most important parts from multiple part proposals in each image in the annotation-free scenario. The part proposals are sub-regions of object proposals in each image, as shown in Fig. 1. The part selection process is important in an annotation-free fine-grained image categorization system for at least two reasons. First, many (if not most) part proposals are noisy and not useful for categorization. Second, accurate part detectors can hardly be acquired without access to detailed object and part annotations, such as ground-truth object and part locations. Existing accurate part detectors (e.g., [32]) are annotation-dependent, unlike our annotation-free setup.

We propose to select many useful parts from the multi-scale part proposals of objects in each image and to compute a global image representation, which is then used to learn a linear classifier for image categorization. In this image representation, we believe that selecting many useful parts is better than selecting one exact part, because it is very difficult to determine the exact object / part location in an image in our annotation-free scenario. Multiple useful parts can compensate for each other and provide more useful information for characterizing the object. The proposed representation achieves better accuracy than the annotation-free work [28] and even most existing annotation-dependent methods on two challenging benchmark datasets, Caltech-UCSD Birds 200-2011 [26] and StanfordDogs [16]. Its success suggests that it is not always necessary to learn expensive object / part detectors for fine-grained image categorization.

Fig. 1 gives an overview of how we generate a global representation for each image through selected parts. The framework in Fig. 1 consists of three major steps: part proposal generation, useful part selection, and multi-scale image representation.

Figure 1: System overview. This figure is best viewed in color.
  • In the first step, we extract object proposals, which are image patches that may contain an object. Then, multi-scale part proposals are extracted from the object proposals in each image. An efficient multi-max pooling (MMP) strategy is proposed to generate features for multi-scale part proposals by leveraging the internal structure of the CNN on object proposals. Among the large number of part proposals, most come from background clutter and are harmful to categorization.

  • Thus, in the second step, we select useful part proposals from each image by exploring the useful information in part clusters (all part proposals are clustered). For each part cluster, we compute an importance score indicating how important this cluster is for our fine-grained task. The part proposals assigned to useful clusters (i.e., those with the largest importance scores) are selected as useful parts.

  • Finally, the selected part proposals in each image are encoded into a global image representation. To highlight the subtle distinctions among fine-grained objects, we encode the selected parts on different scales separately, which we name SCale Pyramid Matching (ScPM). It is more discriminative than encoding all parts of an image together.

With the proposed annotation-free fine-grained image representation, we can detect the key (most discriminative) parts of objects for different classes. The results coincide well with rules used by human experts (e.g., the yellow-throated vireo and the black-capped vireo differ in that the yellow-throated vireo has a yellow throat while the black-capped vireo has a black cap, cf. Fig. 2).

Figure 2: Black-capped Vireo and Yellow-throated Vireo. They have the most distinctive parts among multiple part proposals: the black cap and the yellow throat, respectively, which are marked with red boxes. On the right, we show the key parts detected from the two species using the proposed representation. More examples of detected discriminative parts can be found in Fig. 5. This figure is best viewed in color.

2 Fine-grained image representation without using object / part annotations

The three modules of the proposed method (part proposal generation, part selection, and multi-scale image representation) are detailed in Sections 2.1–2.3, respectively.

2.1 Part proposal generation

Regional information has been shown to improve image classification with hand-crafted methods like spatial pyramid matching [17] and receptive fields [15]. When a CNN model is applied to an image, features of local regions can be acquired automatically from its internal structure. Assume the output of a layer in the CNN is $n \times n \times d$ dimensional, i.e., the output of $d$ filters over $n \times n$ spatial cells. Each spatial cell is computed from a receptive field in the input image, and the receptive fields of different cells can highly overlap with each other. The size of one receptive field can be computed layer by layer in the CNN: in a convolution (pooling) layer with filter (pooling) size $f$ and step size $s$, $m \times m$ cells in the output of the layer correspond to $(s(m-1)+f) \times (s(m-1)+f)$ cells in its input. For example, one cell in the conv5 (the 5th convolutional) layer of the CNN model (imagenet-vgg-m) [4] corresponds to a receptive field (assumed to reside completely inside the image) in the input image (cf. Fig. 3).

Figure 3: Receptive fields computed using the CNN model (imagenet-vgg-m) [4]. One cell in the conv5 layer corresponds to a receptive field in the input image. Only the spatial sizes of the image and filters are shown. $f$ is the filter (pooling) size and 'st' is the step size.
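The layer-by-layer receptive-field computation described above can be sketched in a few lines. This is a minimal sketch; the layer list in the usage example is hypothetical, not the exact imagenet-vgg-m configuration.

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of one output cell of the deepest
    layer, given (filter_size, stride) pairs ordered from the shallowest
    layer to the deepest. Applies r <- s*(r-1)+f backwards through the net."""
    r = 1  # one cell in the deepest layer's output
    for f, s in reversed(layers):  # walk back toward the input
        r = s * (r - 1) + f
    return r

# hypothetical stack: 3x3 conv (stride 1) followed by 2x2 pooling (stride 2)
print(receptive_field([(3, 1), (2, 2)]))  # -> 4
```

Each pooled output cell here covers two conv cells, each of which sees a 3-pixel-wide window offset by one pixel, hence 4 input pixels per side.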

The spatial cells in one CNN layer correspond to receptive fields of a fixed size, which are not comprehensive enough to characterize objects of different scales in images. Some efforts have been made to solve this problem: [20] applied multiple convolutional filters of different sizes to the conv5 layer of a CNN to generate multi-scale part mappings for object detection, and [30] applied the CNN model to images resized to different scales and pooled the final outputs using the Fisher vector method. These methods, however, are more time-consuming than the original CNN computation.

We generate features of multi-scale receptive fields for an image by leveraging the internal outputs of the CNN with little additional computational cost (cf. Fig. 4). Considering the outputs of one layer in a CNN, we can pool the activation vectors of adjacent cells over windows of different sizes, which correspond to receptive fields of different sizes in the input image. Max-pooling is used here.

Given the $n \times n$ cells in one layer of the CNN, we use max-pooling to combine information from $m \times m$ adjacent cells, where $m$ ranges from 1 (a single cell) to $n$ (all the cells). As $m$ takes different values, the pooled cells cover receptive fields of different sizes in the input image, thus providing more comprehensive information. We name the proposed part proposal generation strategy multi-max pooling (MMP) and apply it to the conv5 layer, because the conv5 layer captures more meaningful object / part information than the shallower layers in a CNN [31]. When a CNN model is applied to an object bounding box in an image, the receptive fields acquired from MMP can be seen as part candidates for the object, which provides a comprehensive understanding of fine-grained objects.

Figure 4: Generating multi-scale part proposals. The input is an object proposal. By applying CNN on it, spatial cells of different sizes on the conv5 layer in CNN correspond to parts of different scales. This figure is best viewed in color.
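The MMP strategy above can be sketched as follows. This is a simplified numpy sketch over a single conv5 activation map; the real system operates on the CNN's internal outputs for every object proposal.

```python
import numpy as np

def multi_max_pool(fmap):
    """Multi-max pooling (MMP) sketch: max-pool every m x m block of
    adjacent conv5 cells, for m = 1 (single cell) up to n (all cells).
    fmap: (n, n, d) activation map of one object proposal.
    Returns a list of (scale m, d-dim part feature) pairs."""
    n, _, d = fmap.shape
    parts = []
    for m in range(1, n + 1):              # pooling window size
        for i in range(n - m + 1):
            for j in range(n - m + 1):
                window = fmap[i:i + m, j:j + m, :]
                parts.append((m, window.max(axis=(0, 1))))
    return parts
```

For an n = 13 conv5 map this yields sum over m of (14 - m)^2 = 819 multi-scale part proposals per object proposal, at the cost of only max operations over the already-computed activations.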

Part proposals are important for fine-grained image categorization because they provide fine-grained information about objects. Object proposals generated by objectness methods like selective search [24] are not fine-grained enough to characterize the internal structure of fine-grained objects. [32] only used such object proposals to detect parts with the help of object / part annotations, which leads to worse performance than our annotation-free method using part proposals (cf. Table 2).

MMP is an efficient way to generate multi-scale part proposals to characterize fine-grained objects. It can easily be applied to millions or even billions of object proposals in a dataset. Unlike [28], where the outputs of conv4 in the CNN are used as parts, MMP provides dense coverage of different scales, from part level to object level, for each object proposal. The large number of part proposals gives us more opportunities to mine subtle useful information about objects.

2.2 Part selection

We propose to select useful parts from global image representations. To get the image representation, we first generate object proposals from each image. Since no object / part annotations are provided, we can only use unsupervised object detection methods. Considering efficiency, selective search [24] is used in our framework; it has also been used in [11, 32] to generate initial object / part candidates for object detectors. After generating multiple object proposals, we apply the CNN model to each object proposal and use the proposed MMP to get part proposals.

Among the object proposals, most are from background clutter, which is harmful for image recognition. For example, in the CUB200-2011 [26] dataset, under the intersection-over-union criterion, only a small fraction of object proposals cover the foreground object. The part proposals from those unsuccessful object proposals contribute little to the classification, or are even noisy and harmful. Thus, we need to select the useful part proposals (those covering the foreground object) without using ground-truth annotations for our image representation.

We select useful parts by mining the useful information in part clusters. We first cluster all part proposals into several groups. Then, we compute the importance of each cluster for image classification. The part proposals assigned to the useful clusters (those with the largest importance values) are selected as the useful parts.

We compute the cluster importance with the aid of the Fisher vector (FV) [22]. (VLAD could also be used in our framework; it was used in [12] to encode CNN features of multiple spatial regions for general image classification. We choose FV because it is more discriminative than VLAD [14].) We first encode all the part proposals in each image into an FV. Then, for each dimension of the FVs of all training images, we compute its importance using its mutual information (MI) with the class labels [36]. Finally, the importance of a cluster is the sum of the MI values of all FV dimensions belonging to it. We only keep the FV dimensions from the most important clusters for image categorization.
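The cluster-importance computation can be sketched as follows. This is a simplified sketch: each FV dimension is binarized by its sign and a plug-in MI estimate is used, whereas the real system follows the MI computation of [36]; the helper names are hypothetical.

```python
import numpy as np

def mi_with_labels(b, y):
    """Plug-in mutual information (in nats) between a binary feature b
    and discrete class labels y."""
    mi, n = 0.0, len(y)
    for bv in (0, 1):
        for c in np.unique(y):
            p_joint = np.mean((b == bv) & (y == c))
            if p_joint > 0:
                mi += p_joint * np.log(p_joint / (np.mean(b == bv) * np.mean(y == c)))
    return mi

def select_clusters(F, y, dims_per_cluster, keep_fraction):
    """F: (num_images, D) FV matrix with D = num_clusters * dims_per_cluster.
    Sum per-dimension MI within each cluster's block of dimensions and
    return the indices of the most important clusters."""
    binarized = (F > 0).astype(int)
    mi = np.array([mi_with_labels(binarized[:, j], y) for j in range(F.shape[1])])
    cluster_mi = mi.reshape(-1, dims_per_cluster).sum(axis=1)
    k = max(1, int(round(keep_fraction * len(cluster_mi))))
    return np.argsort(cluster_mi)[::-1][:k]
```

Dimensions whose values co-vary with the class labels accumulate high MI, so clusters dominated by background parts (whose dimensions are uninformative) receive low importance and are dropped.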

To the best of our knowledge, this is the first time explicit part proposal generation and selection have been proposed for fine-grained image categorization in an annotation-free setup. As will be shown in Sec. 4, this novel strategy greatly improves categorization accuracy, even though object and part annotations are not used at all. Part selection can automatically discover the parts that are important for categorization using only image labels. It is more efficient and practical than trying to learn explicit part detectors without ground-truth object / part annotations. [28] also worked on fine-grained categorization without object / part annotations, but at a much higher computational cost than ours: [28] used two CNN models to detect interesting objects and further learn accurate part detectors from them. In contrast, we only need to select important parts from all part proposals, which are generated by applying one CNN model to object proposals. More importantly, our method shows that without explicitly detecting the fine-grained objects / parts, the proposed image representation achieves better discrimination than [28] (cf. Table 2).

2.3 Multi-scale image representation

We select important part clusters for parts on different scales separately. Aggregating part proposals from all scales into a single image representation cannot highlight the subtle distinctions in fine-grained images. Thus, we propose to encode the part proposals in an image on different scales separately, which we name SCale Pyramid Matching (ScPM). For part proposals on different scales, we compute separate FVs. In practice, the number of scales can be very large (13 in the CNN model used in this paper), which may lead to severe memory problems. Since part proposals on neighboring scales are similar in size, we divide all the scales into $G$ groups $S_1, \dots, S_G$. For an image $I$, the part proposals belonging to the $g$-th scale group $S_g$ are used to compute one FV as follows:

$$\mathcal{G}^{g}_{\mu,k} = \frac{1}{N_g\sqrt{w^{g}_{k}}} \sum_{i:\,s_i \in S_g} \gamma_i(k)\,\frac{x_i - \mu^{g}_{k}}{\sigma^{g}_{k}}\,, \qquad \mathcal{G}^{g}_{\sigma,k} = \frac{1}{N_g\sqrt{2w^{g}_{k}}} \sum_{i:\,s_i \in S_g} \gamma_i(k)\left[\frac{(x_i - \mu^{g}_{k})^2}{(\sigma^{g}_{k})^2} - 1\right],$$

where $w^{g}_{k}$, $\mu^{g}_{k}$, and $\sigma^{g}_{k}$ are respectively the mixture weight, mean vector, and standard deviation vector of the $k$-th selected diagonal Gaussian in the $g$-th scale group $S_g$; $\{x_i\}$ are the selected part proposals in an image, $s_i$ is the scale index of the $i$-th part, $N_g$ is the number of parts whose scale falls in $S_g$, and $\gamma_i(k)$ is the soft-assignment weight of the $i$-th part to the $k$-th Gaussian in the $g$-th scale group. Following [22], two parts corresponding to the mean and the standard deviation of each Gaussian are used in the FV. Each of the $G$ FVs $\Phi^{g}(I)$ is power- and $\ell_2$-normalized independently, and they are then concatenated to represent the whole image:

$$\Phi(I) = \left[\Phi^{1}(I), \dots, \Phi^{G}(I)\right].$$

(The source code will be published.)
ScPM is different from the Multi-scale Pyramid Pooling (MPP) method in [30]. On one hand, MPP encodes local features from images resized to different scales into separate FVs and aggregates all the FVs into one to represent an image. Such aggregation may not highlight the subtle differences of object parts at different scales, which are especially important for fine-grained objects in complex backgrounds. On the other hand, ScPM automatically selects different numbers of important part clusters on different scales; the FV representations from different scales therefore do not have the same length and cannot be aggregated as in MPP.
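The per-group normalization and concatenation in ScPM can be sketched as follows. This is a minimal sketch that assumes the per-scale-group FVs have already been computed; their lengths may differ because each group keeps a different number of selected clusters.

```python
import numpy as np

def scpm_encode(group_fvs):
    """Power- and l2-normalize each scale group's Fisher vector
    independently, then concatenate into the final image representation.
    group_fvs: list of 1-D numpy arrays, one per scale group."""
    out = []
    for fv in group_fvs:
        fv = np.sign(fv) * np.sqrt(np.abs(fv))  # power normalization
        norm = np.linalg.norm(fv)
        out.append(fv / norm if norm > 0 else fv)  # l2 normalization
    return np.concatenate(out)
```

Normalizing each group before concatenation keeps any single scale from dominating the representation, which is what lets the subtle per-scale distinctions survive in the final vector.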

3 Understanding subtle visual differences with the help of key part detection

We want to detect and show the key (most discriminative) parts in fine-grained images of different classes, to give a more insightful understanding of the critical properties of objects, which may help us in feature design for fine-grained images. Note that we only have image labels for training images. In order to find the key parts of a class, we need to propagate the training image labels to parts of objects. Label propagation is also used in [30] on their feature representation to compute the object confidence map in general image recognition.

Suppose we want to interpret how the yellow-throated vireo is different from the black-capped vireo (illustrated in Fig. 2). We consider a binary classification problem where the yellow-throated vireo and the black-capped vireo are the positive and negative classes, respectively. We compute a score for each part to denote its importance in this binary classification. A part with the largest score is essential to the yellow-throated vireo, and a part with the smallest score (i.e., the most negative score) is key to the black-capped vireo.

We learn a max-margin binary classifier in each selected part cluster to compute the part score. This classifier is used to propagate the image labels to parts. In the training phase, for each selected part cluster, we aggregate all the part features in one image assigned to this cluster (similar to VLAD). The aggregated features of the training images are normalized and then used to train a classifier with the image labels. In the testing phase, given a part, its score is computed as the dot-product between the classifier of the part cluster it falls into (only parts in the selected part clusters are considered) and its feature (the CNN activation vector). In both training and testing, the part features are centered (i.e., the cluster center of each part cluster is subtracted).
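The testing-phase scoring step can be sketched as follows. The per-cluster classifier weights and cluster centers are assumed to have been trained on the aggregated, centered part features as described above; the function name is hypothetical.

```python
import numpy as np

def key_part_score(part_feat, cluster_id, classifiers, centers):
    """Score of one part for the binary (positive vs. negative class)
    problem: dot-product between the part cluster's linear classifier and
    the centered part feature. A large positive score marks a key part of
    the positive class; a large negative score, of the negative class."""
    if cluster_id not in classifiers:  # part fell into an unselected cluster
        return None
    x = part_feat - centers[cluster_id]  # center by the cluster mean
    return float(np.dot(classifiers[cluster_id], x))
```

Ranking all testing parts by this score and taking the extremes yields the key-part visualizations shown in Fig. 5.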

4 Experiments

In this section, we evaluate the proposed annotation-free method on fine-grained categorization. The selective search method [24] with default parameters is used to generate object proposals for each image. The pre-trained CNN model [4] from ImageNet is used to extract features from each object proposal, as in [11], which has been shown to achieve state-of-the-art results. The CNN is fine-tuned with the training images and their labels; we do not fine-tune it on object proposals, because many of them come from background clutter, which may deteriorate the CNN's performance. We use the 'imagenet-vgg-m' model [4] on object proposals, given that its efficiency and accuracy are both satisfactory.

The part proposals in each scale group are assigned to 128 clusters. Each part feature is reduced to 128 dimensions by PCA. All 13 part scales in the CNN model are divided into 8 scale groups: the first 4 scales form the first 4 groups, the subsequent 6 scales form 3 groups with 2 scales each, and the last 3 scales form the last group. This arrangement makes the number of parts in each group roughly balanced. The dimension of the final image representation using FV is thus 8 × 2 × 128 × 128 = 262,144, from which different fractions of useful part clusters will be selected and evaluated.
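The scale grouping and the resulting dimension can be checked with a few lines. Note that the 262,144 total is derived from the stated parameters (8 groups, 128 Gaussians, 128-dimensional PCA features, mean and standard-deviation gradients), not a figure quoted verbatim from the paper.

```python
# 13 scales -> 8 groups: first 4 singletons, then 3 pairs, then one triple
groups = [[1], [2], [3], [4], [5, 6], [7, 8], [9, 10], [11, 12, 13]]
K = 128  # part clusters (Gaussians) per scale group
d = 128  # PCA-reduced part feature dimension
# the FV keeps mean and standard-deviation gradients per Gaussian: 2*K*d per group
dim = len(groups) * 2 * K * d
print(dim)  # -> 262144
```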

We evaluate the proposed method on two benchmark fine-grained datasets:

  • CUB200-2011 [26]: The Caltech-UCSD Birds 200-2011 dataset contains 200 different bird classes. It includes 5994 training images and 5794 testing images in total.

  • StanfordDogs [16]: This dataset contains 120 different types of dogs and includes 20,580 images in total.

For both datasets, we only use the class labels of images in the training stage.

We choose LIBLINEAR [7] to learn a linear SVM classifier for classification. All the experiments are run on a computer with Intel i7-3930K CPU and 64G main memory.

4.1 Influences of different modules

We investigate the effect of different modules in the proposed image representation on the CUB 200-2011 dataset in Table 1.

Accuracy (%)
conv5+MMP+ScPM 71.04
conv5+MMP 68.78
conv5 58.15
Table 1: Evaluation of different modules in the proposed image representation.

First, we consider the effect of MMP in the proposed image representation. We compare the part proposals generated using the outputs of conv5 and conv5+MMP. All part proposals are encoded into one FV per image (not using ScPM). Multi-scale part proposals (conv5+MMP) greatly improve the recognition accuracy over single-scale part proposals (conv5), by 10.63% (58.15% → 68.78%, cf. Table 1). This is because MMP provides very dense coverage of object parts at different scales.

Second, we evaluate the influence of ScPM in the proposed image representation. Using the multi-scale part proposals generated by MMP, ScPM achieves better accuracy (2.26% higher) than encoding all part proposals together. This shows that it is beneficial to encode objects at different scales separately.

So far, MMP+ScPM already shows better accuracy than the state-of-the-art annotation-free fine-grained categorization method [28], by 1.34% (71.04% vs. 69.70%, cf. Table 2). Next, we further improve the accuracy with part selection on this representation.

4.2 Part selection

We show the classification accuracy using part selection on the proposed image representation (MMP+ScPM) for CUB 200-2011 in Table 2.

Without annotations in either training or testing
Methods Selection fraction Acc. (%)
Proposed 100% (All) 71.04
75.0% (3/4) 71.67
50.0% (1/2) 73.34
25.0% (1/4) 75.02
12.5% (1/8) 73.82
Two-level attention [28] 69.70
Use annotations in training, not in testing
DPD+DeCAF [6] 44.94
Part based R-CNN (without parts) [32] 52.38
Part based R-CNN-ft (without parts) [32] 62.75
CL-45C (without parts) [18] 73.50
Part based R-CNN-ft (with parts) [32] 73.89
Pose Normalized CNN [3] 75.70
Table 2: Classification accuracy on Caltech-UCSD Birds 200-2011.

Part selection greatly improves the accuracy. The table shows the accuracy when different fractions of part clusters are selected in the image representation. When a quarter of the most important part clusters (fraction 25%) is used, a peak is reached, which is better than no part selection (fraction 100%) by 3.98%. Even when fewer part clusters are selected (fraction 12.5%), the accuracy is still better than that without part selection, by 2.78%. This shows that part selection can efficiently resist the noise introduced by part proposals from background clutter.

Our best accuracy (75.02%) outperforms the state-of-the-art annotation-free method [28] by 5.32%, and is also better than most existing annotation-dependent works. [28] claims that better results can be acquired if a more powerful CNN is used; for fair comparison, we cite their results using the standard CNN structure (containing 5 convolutional layers). We only show the accuracy of annotation-dependent methods that use object / part annotations in the training stage, as this setting uses the fewest annotations and is closest to our annotation-free setup. Most of these methods try to learn expensive part detectors to obtain accurate matchings for recognition. Our method shows, however, that these are not always necessary, especially in annotation-free fine-grained categorization.

Part selection is more important in fine-grained categorization than (feature) selection in general image categorization. With part selection, the accuracy is 3.98% higher than with the original image representation. In [36], feature selection is used to compress the FV for general image recognition tasks such as object recognition; the improvement over the original FV after selection is much smaller (around 1%, and often a decrease), far less than the improvement in Table 2. This fact clearly shows the distinction between the two applications: in annotation-free fine-grained tasks, selecting the proper object parts is critical, while in general image recognition the global image representation without selection is already good.

We show the categorization accuracy for Stanford Dogs in Table 3. The proposed method (either with or without part selection) shows much better accuracy than existing annotation-dependent works. Part selection again plays an important role in the proposed image representation, leading to a 2.69% improvement (77.23% → 79.92%) over the original representation. Stanford Dogs is a subset of ImageNet. It is also evaluated in [28], which obtains worse results than ours.

Without annotations in either training or testing
Methods Selection fraction Acc. (%)
Proposed 100% (All) 77.23
75.0% (3/4) 78.28
50.0% (1/2) 79.36
25.0% (1/4) 79.92
12.5% (1/8) 78.18
Two-level attention [28] 71.90
Use annotations in both training and testing
Edge templates [29] 38.00
Unsupervised alignments [10] 50.10
MTL [21] 39.30
Table 3: Classification accuracy on StanfordDogs.

Overall, these results show that: 1) part selection is important in annotation-free fine-grained categorization; 2) it is not always necessary to learn expensive object / part detectors in fine-grained categorization.

4.3 Key part visualization

We detect and visualize the key parts for pairs of classes using the proposed image / part representation in Fig. 5. For each pair, we show one sample image and the 20 detected key parts with the highest (lowest) scores for the positive (negative) class. The bird names are given in the captions, which also indicate how humans distinguish the two birds.

The detected parts capture the key parts of these species, coinciding well with the human-defined rules. We also find that the proposed method can capture some tiny distinctions that we cannot easily discriminate by eye. For example, in the first pair, the key parts of the red-bellied woodpecker and the red-headed woodpecker are both red, and their locations are very close. From the detected parts, we can see that the red of the red-headed woodpecker is darker and the feathers of the red-bellied woodpecker are finer.

(a) Red-bellied Woodpecker vs. Red-headed Woodpecker
(b) Red-winged Blackbird vs. Yellow-headed Blackbird
(c) Blue Jay vs. Green Jay
Figure 5: Key (most discriminative) parts visualization for pairwise classes. Key parts are detected from testing images using the classifier learned from training images. Top 20 key parts are shown for each class. The important parts found by the proposed method coincide well with the rules human experts use to distinguish these birds. This figure is best viewed in color.

From the detected parts, we can see the necessity of selecting many useful parts in the proposed image representation. A single (best) part may lose useful information in characterizing an object. Multiple (good) parts can compensate for each other in different aspects such as location, view, and scale. This also explains why the proposed representation works better than [28], which only uses the best detected part for categorization.

5 Conclusions

In this paper, we propose to categorize fine-grained images without using any object / part annotation in either the training or the testing stage. Our basic idea is to select multiple useful parts from multi-scale part proposals and use them to compute a global image representation for categorization. This is specially designed for fine-grained categorization in the annotation-free scenario, because parts have been shown to play an important role in existing annotation-dependent works, while accurate part detectors can hardly be acquired. In particular, we propose an efficient multi-max pooling strategy to generate multi-scale part proposals using the internal outputs of the CNN on object proposals in each image. Then, we select useful parts from the part clusters that are important for categorization. Finally, we encode the selected parts on different scales separately in a global image representation. With the proposed image / part representation, we can also detect the key parts of objects of different classes; the visualization results are intuitive and coincide well with rules used by human experts.

In the experiments, on two challenging datasets (the CUB 200-2011 and StanfordDogs datasets), the proposed annotation-free method achieves classification accuracies of 75.02% and 79.92% respectively, better than the results of the state-of-the-art annotation-free work and most existing annotation-dependent methods. Future work includes utilizing the object information mined from the global image representation to help localize objects and further improve classification.


  • [1] T. Berg, J. Liu, S. W. Lee, M. L. Alexander, D. W. Jacobs, and P. N. Belhumeur. Birdsnap: Large-scale fine-grained visual categorization of birds. In CVPR, pages 2019 – 2026, 2014.
  • [2] L. Bourdev, S. Maji, T. Brox, and J. Malik. Detecting people using mutually consistent poselet activations. In ECCV, volume LNCS 6316, pages 168–181, 2010.
  • [3] S. Branson, G. V. Horn, S. Belongie, and P. Perona. Bird species categorization using pose normalized deep convolutional nets. In BMVC, 2014.
  • [4] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
  • [5] J. Deng, J. Krause, and L. Fei-Fei. Fine-grained crowdsourcing for fine-grained recognition. In CVPR, pages 580–587, 2013.
  • [6] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, pages 647–655, 2014.
  • [7] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, Aug 2008.
  • [8] R. Farrell, O. Oza, N. Zhang, V. I. Morariu, T. Darrell, and L. S. Davis. Birdlets: Subordinate categorization using volumetric primitives and pose-normalized appearance. In ICCV, pages 161–168, 2011.
  • [9] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. TPAMI, 32:1627–1645, 2010.
  • [10] E. Gavves, B. Fernando, C. Snoek, A. Smeulders, and T. Tuytelaars. Fine-grained categorization by alignments. In ICCV, pages 1713–1720, 2013.
  • [11] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pages 580–587, 2014.
  • [12] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activations features. In ECCV, volume LNCS 8695, pages 392–407, 2014.
  • [13] C. Goring, E. Rodner, A. Freytag, and J. Denzler. Nonparametric part transfer for fine-grained recognition. In CVPR, pages 2489–2496, 2014.
  • [14] H. Jegou, F. Perronnin, M. Douze, J. Sanchez, P. Perez, and C. Schmid. Aggregating local image descriptors into compact codes. TPAMI, 34(9):1704–1716, 2012.
  • [15] Y. Jia, C. Huang, and T. Darrell. Beyond spatial pyramids: Receptive field learning for pooled image features. In CVPR, pages 3370–3377, 2012.
  • [16] A. Khosla, N. Jayadevaprakash, B. Yao, and L. Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, CVPR, 2011.
  • [17] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
  • [18] L. Liu, C. Shen, and A. van den Hengel. The treasure beneath convolutional layers: Cross-convolutional-layer pooling for image classification. In CVPR, 2015.
  • [19] M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, pages 722–729, 2008.
  • [20] W. Ouyang, P. Luo, X. Zeng, S. Qiu, Y. Tian, H. Li, S. Yang, Z. Wang, Y. Xiong, C. Qian, Z. Zhu, R. Wang, C.-C. Loy, X. Wang, and X. Tang. DeepID-Net: Multi-stage and deformable deep convolutional neural networks for object detection. In CVPR, 2015.
  • [21] J. Pu, Y.-G. Jiang, J. Wang, and X. Xue. Which looks like which: Exploring inter-class relationships in fine-grained visual categorization. In ECCV, volume LNCS 8691, pages 425–440, 2014.
  • [22] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek. Image classification with the Fisher vector: Theory and practice. IJCV, 105(3):222–245, 2013.
  • [23] A. R. Sfar, N. Boujemaa, and D. Geman. Vantage feature frames for fine-grained categorization. In CVPR, pages 835–842, 2013.
  • [24] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 104:154–171, 2013.
  • [25] A. Vedaldi, S. Mahendran, S. Tsogkas, S. Maji, R. Girshick, J. Kannala, E. Rahtu, I. Kokkinos, M. B. Blaschko, D. Weiss, B. Taskar, K. Simonyan, N. Saphra, and S. Mohamed. Understanding objects in detail with fine-grained attributes. In CVPR, pages 3622–3629, 2014.
  • [26] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
  • [27] C. Wah, G. V. Horn, S. Branson, S. Maji, P. Perona, and S. Belongie. Similarity comparisons for interactive fine-grained categorization. In CVPR, pages 859–866, 2014.
  • [28] T. Xiao, Y. Xu, K. Yang, J. Zhang, Y. Peng, and Z. Zhang. The application of two-level attention models in deep convolutional neural network for fine-grained image classification. In CVPR, 2015.
  • [29] S. Yang, L. Bo, J. Wang, and L. Shapiro. Unsupervised template learning for fine-grained object recognition. In NIPS 26, pages 3131–3139, 2012.
  • [30] D. Yoo, S. Park, J.-Y. Lee, and I. S. Kweon. Fisher kernel for deep neural activations. arXiv:1412.1628v2, 2014.
  • [31] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, volume LNCS 8689, pages 818–833, 2014.
  • [32] N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based R-CNNs for fine-grained category detection. In ECCV, volume LNCS 8689, pages 834–849, 2014.
  • [33] N. Zhang, R. Farrell, and T. Darrell. Pose pooling kernels for sub-category recognition. In CVPR, pages 3665–3672, 2012.
  • [34] N. Zhang, R. Farrell, F. Iandola, and T. Darrell. Deformable part descriptors for fine-grained recognition and attribute prediction. In CVPR, pages 729–736, 2013.
  • [35] N. Zhang, M. Paluri, M. Ranzato, T. Darrell, and L. Bourdev. PANDA: Pose aligned networks for deep attribute modeling. In CVPR, pages 1637–1644, 2014.
  • [36] Y. Zhang, J. Wu, and J. Cai. Compact representation for image classification: To choose or to compress? In CVPR, pages 907–914, 2014.