Compare More Nuanced: Pairwise Alignment Bilinear Network For Few-shot Fine-grained Learning

04/07/2019 ∙ by Huaxi Huang, et al.

The recognition ability of human beings is developed in a progressive way. Usually, children learn to discriminate various objects from coarse to fine-grained with limited supervision. Inspired by this learning process, we propose a simple yet effective model for the Few-Shot Fine-Grained (FSFG) recognition, which tries to tackle the challenging fine-grained recognition task using meta-learning. The proposed method, named Pairwise Alignment Bilinear Network (PABN), is an end-to-end deep neural network. Unlike traditional deep bilinear networks for fine-grained classification, which adopt the self-bilinear pooling to capture the subtle features of images, the proposed model uses a novel pairwise bilinear pooling to compare the nuanced differences between base images and query images for learning a deep distance metric. In order to match base image features with query image features, we design feature alignment losses before the proposed pairwise bilinear pooling. Experiment results on four fine-grained classification datasets and one generic few-shot dataset demonstrate that the proposed model outperforms both the state-ofthe-art few-shot fine-grained and general few-shot methods.




1 Introduction

Fine-grained image classification aims at distinguishing different sub-categories belonging to the same entry-level category [1, 2, 3, 4]. The task is particularly challenging due to the low inter-category variation yet high intra-category variance caused by differences in object posture, illumination conditions, distance from the camera, etc. Compared to part-based fine-grained methods [5, 6], global-feature-based approaches [7, 8, 9] achieve the state-of-the-art recognition performance, and among them self-bilinear models are the most widely used [7, 8, 9]. The majority of fine-grained recognition approaches must be fed a large amount of training data before a decent classifier is obtained [5, 6, 7, 8, 9, 10]. However, labelling fine-grained data requires strong domain knowledge, e.g., only ornithologists can accurately identify different birds, which makes annotation significantly more expensive than in generic object recognition tasks. Moreover, in some fine-grained datasets [11, 12], the number of well-labelled training samples is limited, e.g., it is hard to collect large-scale samples of endangered species. Therefore, how to tackle fine-grained image recognition with less training data is still an open problem.

Figure 1: An example of general one-shot learning (Left) and fine-grained one-shot learning (Right). For general one-shot learning, it is easy to learn the concepts of objects with only a single image. However, it is difficult to distinguish the sub-classes of specific categories with one sample.

Figure 2: The framework of PABN under the one-shot fine-grained image recognition setting. PABN has three parts: the Encoder, the Fine-grained Feature Extractor, and the Comparator. The Encoder extracts coarse features from raw images; the Fine-grained Feature Extractor further captures subtle features; the Comparator produces the final classification results.

Machine few-shot learning was first proposed by Fei-Fei et al. [13] based on Bayesian theory. Recently, owing to the excellent performance of deep neural networks, few-shot learning [14, 15, 16, 17] has revived and achieved significant improvements over previous methods. In the human cognition process, preschoolers can easily tell the difference between ‘Dog’ and ‘Horse’ after seeing a few samples; however, they may confuse ‘Husky Dogs’ and ‘Alaskan Dogs’ given only limited samples. This can be attributed to children’s less developed ability to process information compared to adults, and it suggests that general few-shot methods cannot cope well with the fine-grained recognition task. To this end, in this paper we focus on dealing with FSFG classification in a ‘developed’ way.

The Few-Shot Fine-Grained (FSFG) recognition task was recently introduced by Wei et al. [18], who employ two sub-networks to tackle it jointly. The first is a self-bilinear encoder network, which applies the matrix outer product to convolved features to capture subtle image features; the second is a mapping network that learns the decision boundaries of the input data. Using a meta-learning strategy on an auxiliary dataset, their model can classify samples in the testing dataset with only a few labeled samples, i.e., few shots.

Compared to generic image classification, they use self-bilinear pooling to extract more informative image representations. However, for unseen or novel categories, the data distribution may differ from the training data, so the trained self-bilinear feature extractor may fail to distinguish these classes. It would be better to learn the relation or discrimination between different categories while extracting subtle features at the same time. To solve this problem, in this paper we propose a pairwise bilinear pooling operation between base and query images to extract fine-grained features, while their relations are explored by a non-linear comparator.

Most recently, Pahde et al. [19] proposed a cross-modal FSFG method that embeds textual annotations and image features into a common latent space. They also introduce a discriminative text-conditional GAN for sample generation, which selects representative samples from the auxiliary data. However, obtaining rich annotations for fine-grained samples is both computationally expensive and time-consuming, which we try to avoid. Yao et al. [20] propose a one-shot fine-grained retrieval method that employs Convolutional and Normalization Networks. Different from their method, our model focuses on the image recognition task rather than retrieval.

There is also a series of related meta-learning based few-shot recognition methods [14, 15, 16], among which the Relation Network [16] achieves the state-of-the-art performance by combining a non-linear feature encoder with a relation comparator. However, the feature combination in the Relation Network only concatenates the base and query feature maps along the depth dimension, and cannot capture the nuanced features needed for fine-grained classification.

To overcome the shortcomings of the underdeveloped feature combination in [16] and the naive self-bilinear pooling in [18], we propose a novel end-to-end FSFG framework that captures the fine-grained relations among different classes. This nuanced comparison ability makes our models inherently more intelligent than ones that simply model the data distribution. The whole framework is shown in Figure 2. More specifically, base images and a query image are fed into PABN simultaneously in a paired manner, and the encoder network generates embedded pair features. A pairwise bilinear pooling operation then extracts the subtle features from these pairs. For each pair, the proposed feature alignment losses are adopted to guarantee that the positions of the base image features match those of the query image. Finally, the pairwise bilinear features pass through the non-linear comparator, which classifies the query image into its corresponding category. In summary, the main contributions of this work are as follows:

  • We propose a novel FSFG model, which mimics the advanced learning process of human beings, together with a new pairwise bilinear pooling operation that captures the subtle differences between base and query images.

  • In order to acquire accurate pairwise bilinear features, we adopt alignment losses to regularize the embedded features.

  • The proposed method achieves state-of-the-art performance compared to both general few-shot learning and FSFG methods.

2 Methodology

2.1 Problem Definition

For the FSFG task, a fine-grained target dataset T contains two parts: a labeled subset T_l and an unlabeled subset T_u:

    T = T_l ∪ T_u,  T_l = {(x_i, y_i)},  T_u = {x̃_j},   (1)

The model needs to classify the unlabeled data x̃_j from T_u (x̃_j represents a raw image) according to the few labeled data (x_i, y_i) from T_l (where x_i denotes an image and y_i is the label of this image). If the labeled subset of the target dataset contains K labeled images for each of C different categories, this problem is called the C-way-K-shot problem.

In order to obtain a model that can identify the unlabeled images from the target dataset T, few-shot learning usually employs a fully annotated dataset with a similar property or data distribution to T as the auxiliary dataset A:

    A = S ∪ Q,  S = {(x^s_i, y^s_i)},  Q = {(x^q_j, y^q_j)},  S ∩ Q = ∅,   (2)

where x^s_i and x^q_j represent images, and y^s_i and y^q_j represent image labels. In each round of training, the auxiliary dataset is randomly separated into two parts: the support set S and the query set Q. By sampling S to contain K samples for each of C categories, we simulate the composition of the target dataset in each iteration. A is then used to learn a meta-learner that can transfer knowledge from A to the target data T. Once the meta-learner is trained, it can be fine-tuned using the labeled target subset T_l. Finally, the meta-learner classifies the samples from the unlabeled subset T_u into their corresponding categories. This training setting, which mimics the few-shot setting of the target problem, is widely used in meta-learner training [14, 16, 18].
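The episodic sampling described above can be sketched as follows. The helper name, the dictionary-based dataset layout, and the default counts are illustrative assumptions, not details from the paper:

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one C-way-K-shot episode from an auxiliary dataset.

    `dataset` is assumed to map each class label to a list of images.
    Returns (support, query) lists of (image, episode_label) pairs.
    """
    classes = random.sample(sorted(dataset.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(dataset[cls], k_shot + n_query)
        # The first k_shot images form the support set, the rest the query set.
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query
```

Repeating this sampling every iteration exposes the meta-learner to a new C-way-K-shot task each time, mimicking the target few-shot setting.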

2.2 Framework

The whole framework of PABN is shown in Figure 2. Different from traditional few-shot embedding structures [14, 15, 16], we add the fine-grained feature extractor shown in the dotted-line box, which is our main contribution. In addition, we modify the non-linear comparator [16] and apply it to our fine-grained task. The fine-grained feature extractor consists of two parts: an alignment loss regularization and a pairwise bilinear pooling layer. The former aims to match features at the same positions in the embedded image features; for example, the features of a bird’s head in a base image should match the head features of the query bird. The latter, the pairwise bilinear pooling layer, is designed to extract second-order comparative features from pairs of base images (e.g., samples from the support set) and query images (e.g., samples from the query set).

The pairwise bilinear pooling layer is the core component of the PABN model: it captures the nuanced comparative features of image pairs and thereby determines the relations between base and query images, which is crucial for the classifier. However, if the pair of images is not well matched, the pairwise bilinear pooled features cannot yield the maximum classification performance gain. Thus we propose two feature alignment losses to guarantee the registration between pairs of images. In the next section, we first introduce the pairwise bilinear pooling layer, and then present the feature alignment regularization with its two alignment losses.

2.3 Pairwise Bilinear Pooling Layer

The original Bilinear CNN for image recognition can be defined as a quadruple B = (f_A, f_B, P, C), where f_A and f_B are two encoders, P is the self-bilinear pooling, and C represents a classifier. An input image I has height H, width W, and three color channels. Through an encoder, the input image is transformed into a tensor X ∈ R^{h×w×c}, which has c feature channels, while h and w indicate the height and width of the embedded feature map. Given the two specific encoders f_A and f_B, f_A(l, I) and f_B(l, I) denote the 1×c feature vectors at a specific location l in the feature maps X_A and X_B, with l ∈ {1, …, h×w}. The self-bilinear pooled feature is then the vector obtained by flattening

    P(I) = Σ_l f_A(l, I)^T f_B(l, I),   (3)

where each term is a c×c outer-product matrix. C is a fully-connected layer trained with the cross-entropy loss between the self-bilinear feature and the image label.

The self-bilinear pooling operates on pairs of embedded features from the same image. In contrast, in our pairwise bilinear pooling, given a base image I_b (e.g., from the support set), a query image I_q (e.g., from the query set), and a shared encoder f, the pairwise bilinear pooling is defined as

    P_pair(I_b, I_q) = Σ_l f(l, I_b)^T f(l, I_q).   (4)

After obtaining this pairwise bilinear vector, a sigmoid activation is used to generate the relation scores of the compared pairs, which are then passed to the final comparator.

Note that our pairwise bilinear pooling uses only one shared embedding function f. Different from self-bilinear pooling, which operates on the same input image, pairwise bilinear pooling applies the matrix outer product to two disparate samples. The training loss of our bilinear comparator is the mean square error (MSE) loss, which regresses the relation score to the label similarity of the images, as discussed in [16]. In this way, we capture fine-grained second-order comparative features in a pairwise manner.
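As a concrete illustration, the pairwise bilinear pooling described above can be sketched in a few lines of NumPy. The function name and the single-example (h, w, c) shapes are our assumptions; a real implementation would operate on batched framework tensors:

```python
import numpy as np

def pairwise_bilinear_pool(x_base, x_query):
    """Pairwise bilinear pooling between two embedded feature maps.

    x_base, x_query: arrays of shape (h, w, c) produced by the shared
    encoder. At every spatial location the outer product of the two
    c-dimensional feature vectors is taken, and the per-location results
    are summed, yielding a (c*c,)-dimensional second-order comparative
    feature vector.
    """
    h, w, c = x_base.shape
    fb = x_base.reshape(h * w, c)   # one row per spatial location
    fq = x_query.reshape(h * w, c)
    # The sum of per-location outer products equals the single matrix
    # product fb^T @ fq, which is how bilinear pooling is computed in practice.
    return (fb.T @ fq).reshape(-1)
```

Self-bilinear pooling is recovered by calling the same function with the two arguments equal, i.e., pooling an image's feature map against itself.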

2.4 Feature Alignment Loss

In Equation 3, self-bilinear pooling operates on the same image, which means that at any location of the embedded feature map the pooled features are naturally aligned. However, our proposed pairwise bilinear pooling operates on different samples, so the encoded features may not always be matched. In order to overcome this problem, we design two feature alignment losses. The first loss is a rough approximation that minimizes the Euclidean distance between all elements of the two embedded image descriptors:

    L_1 = || f(I_b) − f(I_q) ||²,   (5)

The second loss L_2 is a more concise feature alignment loss, where we first sum the raw features along the channel (third) dimension and then measure the MSE of the summed features, as Equation 6 indicates:

    L_2 = || Σ_c f(I_b) − Σ_c f(I_q) ||²,   (6)

where Σ_c denotes summation along the channel dimension of the embedded feature map.

By training with the proposed alignment losses, we encourage the network to automatically learn the matching features to generate a better pairwise bilinear feature.
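A minimal sketch of the two alignment losses, under the assumption that both reduce with a mean over elements (the exact reduction is not recoverable from the text):

```python
import numpy as np

def alignment_loss_1(x_base, x_query):
    """First alignment loss: squared Euclidean distance between all
    elements of the two embedded feature maps (shape (h, w, c)),
    averaged over elements."""
    return np.mean((x_base - x_query) ** 2)

def alignment_loss_2(x_base, x_query):
    """Second alignment loss: collapse each feature map along the
    channel dimension first, then take the MSE of the summed (h, w)
    activation maps."""
    return np.mean((x_base.sum(axis=-1) - x_query.sum(axis=-1)) ** 2)
```

The second loss constrains only the per-location total activation, which is a looser (and, as the paper argues, more concise) matching criterion than the element-wise first loss.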

3 Experiment

In this section, we evaluate the proposed PABN on four widely used fine-grained datasets and one generic few-shot dataset. First, we give a brief introduction to these datasets; then we describe the experimental setup in detail; finally, we analyze the experimental results of the proposed models and compare them with other few-shot learning approaches.

3.1 Dataset

In our experiments, we utilize five datasets to investigate the proposed models:

  • CUB Birds [1] contains 200 categories of birds and totally 11,788 images.

  • DOGS [2] contains 120 categories of dogs and totally 20,580 images.

  • CARS [3] contains 196 categories of cars and totally 16,185 images.

  • NABirds [4] contains 555 categories of North American birds and totally 48,562 images.

  • MiniImageNet [14] consists of 100 categories with 60,000 images in total; each class has 600 examples.

Dataset    CUB Birds  DOGS  CARS  NABirds
C_all         200      120   196     555
C_aux         150       90   147     416
C_target       50       30    49     139
Table 1: The class split for the four fine-grained datasets. C_all is the original number of categories in the dataset, C_aux is the number of categories in the separated auxiliary dataset, and C_target is the number of categories in the target dataset.

As described in Section 2.1, we randomly divide these datasets into two disjoint sub-datasets, the auxiliary dataset and the target dataset, as shown in Table 1. For the CUB Birds, DOGS, and CARS datasets, we follow Wei’s [18] separation. For MiniImageNet, we follow the separation of [16], which adopts 64, 16, and 20 classes as the training, validation, and testing sets, respectively. Note that the validation set is only used for monitoring generalization performance.

3.2 Experimental Setup

Methods            CUB Birds               CARS                    DOGS                    NABirds
                   1-shot      5-shot      1-shot      5-shot      1-shot      5-shot      1-shot      5-shot
PCM [18]           42.10±1.96  62.48±1.21  29.63±2.38  52.28±1.46  28.78±2.33  46.92±2.00  -           -
Relation-Net [16]  63.77±1.37  74.92±0.69  56.28±0.45  68.39±0.21  51.95±0.46  64.91±0.24  65.17±0.47  78.35±0.21
PABN (no align.)   65.99±1.35  76.90±0.21  55.65±0.42  67.29±0.23  54.77±0.44  65.92±0.23  67.23±0.42  79.25±0.20
PABN (loss L1)     65.04±0.44  76.46±0.22  55.89±0.42  68.53±0.23  54.06±0.45  65.93±0.24  66.62±0.44  79.31±0.22
PABN (loss L2)     66.71±0.43  76.81±0.21  56.80±0.45  68.78±0.22  55.47±0.46  66.65±0.23  67.02±0.43  79.02±0.21
Table 2: Few-shot classification accuracy (%) comparison on four fine-grained datasets. ‘-’ denotes not reported. All results are with confidence intervals where reported. PABN (no align.) uses no alignment loss; PABN (loss L1) and PABN (loss L2) use the first and second alignment losses, respectively.

In each round of training and testing, for one-shot image recognition the number of base samples in each class equals 1, so we use the embedded features of these base samples as the class features. For few-shot image recognition, we extract the class features by summing all the embedded features in each category. We compare three variations of the proposed PABN model: PABN (no align.), which does not use an alignment loss on the embedded pair features, and PABN (loss L1) and PABN (loss L2), which adopt the first and the second alignment loss, respectively, in the alignment layer. After pairwise bilinear pooling, we apply a normalization operation to the pairwise bilinear features as [7] did.
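The normalization borrowed from [7] is the signed square-root followed by l2 normalization of the bilinear vector; a minimal sketch (the eps guard against division by zero is our addition):

```python
import numpy as np

def normalize_bilinear(z, eps=1e-12):
    """Signed square-root followed by l2 normalization, the scheme used
    for bilinear features in Lin et al. [7], applied here to a pairwise
    bilinear vector z."""
    z = np.sign(z) * np.sqrt(np.abs(z))   # signed sqrt dampens large values
    return z / (np.linalg.norm(z) + eps)  # project onto the unit sphere
```

The signed square-root compresses the dynamic range of the outer-product features, and the l2 step makes relation scores comparable across pairs.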

For all our PABN models and the Relation Network, we use the 5-way-1-shot and 5-way-5-shot settings. Both settings use 15 query images per class, which means there are 80 images and 100 images per mini-batch for 5-way-1-shot and 5-way-5-shot, respectively. All input images from all datasets are resized to the same resolution. All experiments use the Adam optimizer with an initial learning rate of 0.001, and all models are trained end-to-end from scratch. We initialize all networks randomly without involving additional datasets.

Methods            MiniImageNet 5-way
                   1-shot         5-shot
Relation-Net [16]  50.44±0.82%    65.32±0.70%
PABN (no align.)   51.87±0.45%    64.95±0.71%
PABN (loss L1)     50.55±0.44%    64.80±0.75%
PABN (loss L2)     50.94±0.43%    65.37±0.68%
Table 3: Experimental results on the MiniImageNet dataset, with confidence intervals.

3.3 Results and Analysis

To the best of our knowledge, few methods have been proposed for few-shot fine-grained image recognition [18, 19, 20]. [19] uses a larger auxiliary dataset than our method, and [20] only applies to the image retrieval task, so it would be unfair to compare with these methods directly. We therefore compare our PABN with the Piecewise Classifier Mapping (PCM) [18], which is the first FSFG method. Moreover, we also compare our methods with the state-of-the-art generic few-shot learning method, the Relation Network [16]. The original Relation Network paper does not report results on the four fine-grained datasets under the few-shot setting, so we use its open-source code to conduct FSFG image recognition on these datasets.

We show the experimental results of the five compared models in Table 2. As we can see, the proposed PABN models achieve significant improvements on both 1-shot and 5-shot recognition tasks on the four fine-grained datasets compared to the state-of-the-art FSFG method and the state-of-the-art generic few-shot method, which indicates the effectiveness of the proposed framework. In addition, the PABN models and the Relation Network obtain around 10 to 20 percent higher recognition accuracy than PCM, which demonstrates that a learned non-linear comparator outperforms a plain linear classifier.

Specifically, without feature alignment, PABN achieves higher average accuracies than the Relation Network on CUB Birds, DOGS, and NABirds, but is slightly lower in accuracy on CARS. Nevertheless, by adding the alignment layer with the two alignment losses, the aligned PABN variants obtain higher classification accuracies on CARS for 1-shot and 5-shot, respectively, which indicates that well-matched pairwise bilinear features produce better recognition performance for FSFG tasks. It can also be observed that the variant with the second alignment loss achieves the best or second-best classification performance on almost all datasets under the different experimental settings. This indicates that a more precise feature alignment results in better pairwise bilinear pooling.

For a further analysis of our models, we conduct an additional experiment on the MiniImageNet [14] dataset, a standard generic few-shot learning benchmark. From Table 3, it can be observed that our PABN models achieve higher performance than the Relation Network. In detail, in the 1-shot recognition setting, all PABN models outperform the Relation Network with higher accuracies and lower standard deviations. As for the 5-shot setting, the PABN variant with the second alignment loss achieves the state-of-the-art performance, while the other models are slightly lower in accuracy than the Relation Network. That same variant thus achieves the best classification performance in 5-shot learning and the second-best in 1-shot learning, which again demonstrates that a concise matching of compared features can further improve performance.

4 Conclusion

In this paper, we propose a novel few-shot fine-grained image recognition method inspired by the advanced information processing ability of human beings. The main contribution is the pairwise bilinear pooling, which extracts second-order comparative features for pairs of base and query images. Moreover, to obtain more precise comparative features, we propose two feature alignment losses that match the embedded base image features with the query image features. Through comprehensive experiments on five widely used datasets, we verify the effectiveness of the proposed method. In future work, we would like to design a more accurate feature matching model, as discussed in Section 3.3.


  • [1] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The Caltech-UCSD Birds-200-2011 Dataset,” Tech. Rep. CNS-TR-2011-001, California Institute of Technology, 2011.
  • [2] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei, “Novel dataset for fine-grained image categorization,” in First Workshop on Fine-Grained Visual Categorization, CVPR, Colorado Springs, CO, June 2011.
  • [3] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei, “3d object representations for fine-grained categorization,” in 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
  • [4] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber, Jessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Belongie, “Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection,” in CVPR, June 2015.
  • [5] Ning Zhang, Jeff Donahue, Ross Girshick, and Trevor Darrell, “Part-based r-cnns for fine-grained category detection,” in ECCV. Springer, 2014, pp. 834–849.
  • [6] Jianlong Fu, Heliang Zheng, and Tao Mei, “Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition,” in CVPR, July 2017.
  • [7] Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji, “Bilinear cnn models for fine-grained visual recognition,” in ICCV, December 2015.
  • [8] Yin Cui, Feng Zhou, Jiang Wang, Xiao Liu, Yuanqing Lin, and Serge Belongie, “Kernel pooling for convolutional neural networks,” in CVPR, July 2017.
  • [9] Peihua Li, Jiangtao Xie, Qilong Wang, and Zilin Gao, “Towards faster training of global covariance pooling networks by iterative matrix square root normalization,” in CVPR, June 2018.
  • [10] Jonathan Krause, Hailin Jin, Jianchao Yang, and Li Fei-Fei, “Fine-grained recognition without part annotations,” in CVPR, June 2015.
  • [11] Peiqin Zhuang, Yali Wang, and Yu Qiao, “Wildfish: A large benchmark for fish recognition in the wild,” in MM. ACM, 2018, pp. 1301–1309.
  • [12] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie, “The inaturalist species classification and detection dataset,” in CVPR, June 2018.
  • [13] Li Fei-Fei, Rob Fergus, and Pietro Perona, “One-shot learning of object categories,” TPAMI, vol. 28, no. 4, pp. 594–611, 2006.
  • [14] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al., “Matching networks for one shot learning,” in NIPS, 2016, pp. 3630–3638.
  • [15] Jake Snell, Kevin Swersky, and Richard Zemel, “Prototypical networks for few-shot learning,” in NIPS, 2017, pp. 4077–4087.
  • [16] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, and Timothy M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in CVPR, June 2018.
  • [17] Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, and Yi Yang, “Transductive propagation network for few-shot learning,” arXiv preprint arXiv:1805.10002, 2018.
  • [18] Xiu-Shen Wei, Peng Wang, Lingqiao Liu, Chunhua Shen, and Jianxin Wu, “Piecewise classifier mappings: Learning fine-grained learners for novel categories with few examples,” arXiv preprint arXiv:1805.04288, 2018.
  • [19] Frederik Pahde, Patrick Jähnichen, Tassilo Klein, and Moin Nabi, “Cross-modal hallucination for few-shot fine-grained recognition,” arXiv preprint arXiv:1806.05147, 2018.
  • [20] Hantao Yao, Shiliang Zhang, Yongdong Zhang, Jintao Li, and Qi Tian, “One-shot fine-grained instance retrieval,” in MM. ACM, 2017, pp. 342–350.