The rise of digital cameras and smartphones, the standardization of computers and multimedia formats, the ubiquity of data storage devices and the technological maturity of network infrastructure have exponentially increased the volume of visual data available on-line and off-line. With this dramatic growth, the need for effective and computationally efficient content search has become increasingly important. Given a large collection of images and videos, the aim is to retrieve individual images and video shots depicting instances of a user-specified object (query). Image retrieval has a range of important applications, including management of multimedia content, mobile commerce, surveillance and augmented automotive navigation. Performing robust and accurate visual search is challenging due to factors such as changing object viewpoints, scale, partial occlusions, varying backgrounds and imaging conditions. Additionally, today's systems must be highly scalable to accommodate the huge volumes of multimedia data, which can comprise billions of images.
In order to overcome these challenges, a compact and discriminative image representation is required. Typically, this is achieved by aggregating multiple local descriptors from an image into a single high-dimensional global descriptor. The similarity of the visual content in two images is determined using a distance metric (e.g. Hamming or Euclidean distance) between their corresponding global descriptors. Retrieval is then accomplished by ranking a set of images according to the distances of their global descriptors from that of a given query image.
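The distance-and-ranking step described above can be sketched in a few lines; a minimal NumPy illustration (the toy descriptors below are invented for the example):

```python
import numpy as np

def rank_by_distance(query, gallery):
    """Rank gallery images by ascending Euclidean distance
    between their global descriptors and the query descriptor."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return np.argsort(dists), dists

# Toy 2-D global descriptors for three gallery images.
query = np.array([0.0, 0.0])
gallery = np.array([[3.0, 4.0],   # distance 5
                    [1.0, 0.0],   # distance 1
                    [0.0, 2.0]])  # distance 2
order, dists = rank_by_distance(query, gallery)
print(order.tolist())  # → [1, 2, 0]
```

In practice the gallery descriptors are precomputed, so each query costs one matrix-vector distance computation plus a sort.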
This paper addresses the task of extracting a global descriptor by aggregating local deep descriptors. We achieve this using a novel combined CNN and Fisher Vector model that is learnt simultaneously. We also show that our proposed model provides significant improvements in retrieval accuracy compared with related state-of-the-art approaches, across different descriptor dimensionalities and datasets.
1.1 Related Work
One popular method for generating global descriptors for image matching is the Fisher Vector (FV) method, which aggregates local image descriptors (e.g. SIFT) based on the Fisher Kernel framework. A Gaussian Mixture Model (GMM) is used to model the distribution of local image descriptors, and the global descriptor for an image is obtained by computing and concatenating the gradients of the log-likelihood with respect to the model parameters. One advantage of the FV approach is its encoding of higher-order statistics, resulting in a more discriminative representation and hence better performance.
The FV model is learnt using unsupervised clustering, and therefore cannot make use of the matching and non-matching labels that are available in image retrieval tasks. One way of overcoming this shortcoming was proposed by Perronnin et al., where a fully connected neural network (NN) was trained using the FV global descriptors as input. Here, the Fisher Vector model was initially learnt in an unsupervised fashion on extracted SIFT features. The FV model then produced input feature vectors for the fully connected NN, which in turn was learnt in a supervised manner using backpropagation.
However, both the SIFT features and the FV model in the above method are unsupervised. An alternative is to replace the low-level SIFT features with deep convolutional descriptors obtained from convolutional neural networks (CNNs) trained on large-scale datasets such as ImageNet. Recent research has shown that image descriptors computed using deep CNNs achieve state-of-the-art performance for image retrieval and classification tasks. Babenko et al. aggregated deep convolutional descriptors to form global image representations: FV, Temb and SPoC. The SPoC signature is obtained by sum-pooling of the deep features. Razavian et al. computed an image representation by max-pooling aggregation of the last convolutional layer. The retrieval performance was further improved when the RVD-W method was used for aggregation of CNN-based deep descriptors.
All of the above approaches use fixed pre-trained CNNs. However, these CNNs are trained for the purpose of image classification (e.g. the 1000 classes of ImageNet) and may perform sub-optimally in the task of image retrieval. To tackle this, Radenović et al. and Gordo et al. both proposed to use a Siamese CNN with max-pooling for aggregation, fine-tuned on an image retrieval dataset. Two types of loss function were considered for optimisation: 1) the contrastive loss and 2) the triplet loss. Both were able to achieve significant improvements over existing retrieval mAP scores. However, both of these approaches use max-pooling as the aggregation method. The work proposed in this paper improves on this by employing a Fisher Vector model for aggregation instead of max-pooling. We also consider the alternative method of sum-pooling and compare the different aggregation methods on standard benchmarks.
1.2 Contributions and Overview
The main contribution of this paper is a Siamese deep net that aggregates CNN-based local descriptors using the Fisher Vector model. Importantly, we propose to learn the parameters of the CNN and the Fisher Vector simultaneously using stochastic gradient descent on the contrastive loss function. This allows us to adjust the Fisher Vector model to account for changes in the distribution of the underlying CNN features as they are learnt on image retrieval datasets. We also show that our proposed method improves on the retrieval performance of the following state-of-the-art approaches: Siamese CNN with max-pooling and triplet loss with max-pooling. Our approach achieves mAP scores that equal or improve on state-of-the-art results for the Oxford (81.5%) and Paris (82.5%) datasets. Importantly, this is achieved without any segmentation of the images. We also provide a new baseline of the retrieval performance of our method when 1 million distractors are added to the test datasets.
2 Deep Fisher Vector Siamese Network
In this section, we describe the novel DNN that learns a deep Fisher Vector representation by simultaneously learning the Fisher Vector model components along with the underlying convolutional filter weights in a Siamese network. An overview diagram of the proposed deep Siamese Fisher Vector network is shown in Fig. 1.
Traditionally, a Siamese network consists of two parallel branches, where both branches share the same convolutional weights. One branch is fed a query image and the other branch a reference image; both propagate through the network, yielding two global descriptors that can be compared using the Euclidean distance. Our proposed Siamese network is different in that each branch consists of two components: a CNN for producing deep image descriptors, which are then aggregated via a Fisher Vector layer to produce the final global descriptor.
2.1 CNN-based Deep Descriptors
Suppose the input image is given as I. In order to extract the deep convolutional descriptors from the CNN component, the input image is first passed through a number of convolutional layers. Here, we use convolutional layers with the same structure as the VGG-16 network, with the fully connected layers removed.
The CNN is effectively parameterised by the filter weights at each of its convolutional layers. We shall denote the collection of all the CNN filter weights as W_C. Formally, the CNN component can then be described by the function f(I; W_C), where K is the final number of convolutional filters, each producing a convolutional image of size W_f × H_f. We then treat the final layer as producing a set of N = W_f H_f deep convolutional features that are of dimension K.
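Reinterpreting the final convolutional layer as a set of local descriptors amounts to a reshape; a minimal NumPy sketch, where the sizes (K = 512 filters and a 7×7 spatial grid) are illustrative assumptions rather than values fixed by the text:

```python
import numpy as np

# Hypothetical final-layer activations: K feature maps of size Hf x Wf.
K, Hf, Wf = 512, 7, 7
conv_maps = np.random.rand(K, Hf, Wf).astype(np.float32)

# Each spatial position yields one K-dimensional local descriptor,
# giving N = Wf * Hf descriptors in total.
descriptors = conv_maps.reshape(K, -1).T   # shape (N, K)
print(descriptors.shape)  # → (49, 512)
```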
2.2 Fisher Vectors
In order to aggregate the deep convolutional features, we employ the method of Fisher Vectors. Firstly, let X = {x_1, ..., x_N} be the set of K-dimensional deep convolutional features extracted from an image. Let p(x|θ) be an image-independent probability density function which models the generative process of x, where θ represents the parameters of p.
A Gaussian Mixture Model (GMM) is used to model the distribution of the convolutional features, where: p(x|θ) = Σ_{k=1}^{M} w_k N(x; μ_k, Σ_k).
We represent the parameters of the M-component GMM by θ = {w_k, μ_k, Σ_k}_{k=1}^{M}, where w_k, μ_k and Σ_k are respectively the weight, mean vector and covariance matrix of Gaussian k. The covariance matrix of each GMM component is assumed to be diagonal and is denoted by σ_k² = diag(Σ_k). The GMM assigns each descriptor x_i to Gaussian k with the soft assignment weight q_ik given by the posterior probability: q_ik = w_k N(x_i; μ_k, Σ_k) / Σ_{j=1}^{M} w_j N(x_i; μ_j, Σ_j).
The GMM can be interpreted as a probabilistic visual vocabulary, where each Gaussian forms a visual word or cluster. The K-dimensional derivative with respect to the mean μ_k of Gaussian k is denoted by g_k: g_k = (1 / (N√w_k)) Σ_{i=1}^{N} q_ik (x_i − μ_k) / σ_k.
We denote the elements of g_k as g_{k,d}, d = 1, ..., K. The final FV representation G of image I is obtained by concatenating the gradients for all Gaussians and normalising, giving G = Ĝ / ‖Ĝ‖₂, with Ĝ = [g_1ᵀ, ..., g_Mᵀ]ᵀ. The dimensionality of G is M × K. Since the FV will be integrated into a Siamese CNN, we shall henceforth refer to G as the "SIAM-FV" (SIAMese-CNN-based Fisher Vector).
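The aggregation pipeline of this section (posterior soft assignments q_ik, per-cluster mean gradients, concatenation and L2 normalisation) can be sketched as follows; this is an illustrative NumPy implementation of the standard mean-gradient Fisher Vector, not the authors' code:

```python
import numpy as np

def fisher_vector(X, w, mu, sigma):
    """Mean-gradient Fisher Vector of N local descriptors X (N x K)
    under an M-component diagonal GMM:
    w (M,) mixture weights, mu (M, K) means, sigma (M, K) std devs."""
    N, _ = X.shape
    M = w.shape[0]
    # log N(x_i; mu_k, sigma_k^2) up to an additive constant shared
    # by all components (it cancels in the posterior).
    log_gauss = np.stack(
        [-0.5 * np.sum(((X - mu[k]) / sigma[k]) ** 2
                       + 2.0 * np.log(sigma[k]), axis=1)
         for k in range(M)], axis=1)                       # (N, M)
    log_q = np.log(w) + log_gauss
    log_q -= log_q.max(axis=1, keepdims=True)              # stability
    q = np.exp(log_q)
    q /= q.sum(axis=1, keepdims=True)                      # q_ik
    # Gradient w.r.t. each cluster mean, concatenated over clusters.
    g = np.concatenate(
        [(q[:, k:k + 1] * (X - mu[k]) / sigma[k]).sum(axis=0)
         / (N * np.sqrt(w[k])) for k in range(M)])
    return g / np.linalg.norm(g)                           # L2-normalised

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 4))       # N = 49 descriptors, K = 4
w = np.array([0.5, 0.5])           # M = 2 clusters
mu = rng.normal(size=(2, 4))
sigma = np.ones((2, 4))
fv = fisher_vector(X, w, mu, sigma)
print(fv.shape)  # → (8,)  i.e. M x K
```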
2.3 Fisher Vector Partial Derivatives
In this section, the partial derivatives of the Fisher Vector with respect to its underlying parameters (w_k, μ_k, σ_k) and inputs (x_i) are given. These partial derivatives will be used for learning the proposed deep net. Firstly, the partial derivatives of the element g_{k,d} of Ĝ, for some cluster k and dimension d, are given in Eqs. 3-6.
Equations 3-5 are used for calculating the gradients of the cluster prior, cluster mean and cluster standard deviation in the FV model. Eq. 6 is used to backpropagate errors to the filter weights in the CNN component. We find that the partial derivatives of the final normalised Fisher Vector elements all have the following form: ∂G/∂λ = (1/‖Ĝ‖₂) ∂Ĝ/∂λ − Ĝ (Ĝᵀ ∂Ĝ/∂λ) / ‖Ĝ‖₂³. (7)
In order to obtain the exact partial derivative of G with respect to a particular parameter, we substitute λ with this parameter, look up the corresponding equation among Eqs. 3-6, and substitute it into Eq. 7 above.
3 Deep Learning of Fisher Vector Parameters
It is possible to learn the Fisher Vector GMM parameters using the EM algorithm on the deep convolutional features. However, this is an unsupervised method that does not make use of available labelling information. In order to tackle this shortcoming, we propose performing supervised learning of the GMM parameters. To this end, we treat the learning of the GMM parameters as part of the learning process of a DNN.
For the purpose of learning, we are given a training dataset of pairs of images, each image with the same fixed resolution. Each pair of training images is associated with a label y, where y = 1 denotes a matching pair and y = 0 a non-matching pair. We denote the training dataset as T = {(I_q^j, I_r^j, y_j)}_j, where I_q^j is the query image and I_r^j the reference image of pair j.
Next, we describe the contrastive loss used for learning the proposed FV-CNN network. Firstly, the Euclidean distance is used to measure the difference between two Fisher vectors: D(G_q, G_r) = ‖G_q − G_r‖₂.
Now, let the CNN weights be W_C and the set of all the Fisher Vector parameters θ. The loss function is defined as: L(W_C, θ) = Σ_j [ y_j D²(G_q^j, G_r^j) + (1 − y_j) max(0, τ − D(G_q^j, G_r^j))² ], where τ is the heuristically determined margin parameter.
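A minimal sketch of this pairwise loss, using the convention that y = 1 marks a matching pair; the margin value 0.7 is illustrative, as the paper's value is not given here:

```python
import numpy as np

def contrastive_loss(g_q, g_r, y, margin=0.7):
    """Contrastive loss on one pair of global descriptors.
    y = 1 for a matching pair, y = 0 for a non-matching pair."""
    d = np.linalg.norm(g_q - g_r)
    return y * d ** 2 + (1 - y) * max(0.0, margin - d) ** 2

a, b = np.array([1.0, 0.0]), np.array([0.0, 0.0])  # distance 1
print(contrastive_loss(a, b, y=1))  # matching: d^2 = 1.0
print(contrastive_loss(a, b, y=0))  # non-matching, d > margin: 0.0
```

Matching pairs are pulled together (quadratic in d), while non-matching pairs contribute nothing once they are further apart than the margin.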
In order to optimise the GMM parameters of the Fisher Vector, θ = {w_k, μ_k, σ_k}, the partial derivatives of L with respect to these parameters, ∂L/∂w_k, ∂L/∂μ_k and ∂L/∂σ_k, are used. For conciseness, we will not write the arguments when referring to the distance function D. So, using the chain rule on L gives: ∂L/∂θ = (∂L/∂D)(∂D/∂θ). (9)
The first backpropagated partial derivative, ∂L/∂D, determines the amount of error present in the Fisher vectors of matching or non-matching pairs. The partial derivatives ∂D/∂θ allow us to adjust the FV model parameters and can similarly be derived using the chain rule, giving: ∂D/∂θ = (1/D)(G_q − G_r)ᵀ(∂G_q/∂θ − ∂G_r/∂θ), (10)
where G_q and G_r are the two input Fisher vectors to the distance function, and the partial derivatives of G_q and G_r are detailed in Section 2.3.
The parameters are then updated by stepping in the direction of the negative gradient, scaled by the learning rate η: θ ← θ − η ∂L/∂θ.
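The update rule amounts to one step of gradient descent; a one-line sketch (the default learning rate 0.001 follows Section 4.1, and the momentum term used there is omitted for simplicity):

```python
import numpy as np

def sgd_step(theta, grad, lr=0.001):
    """One gradient-descent update: theta <- theta - lr * dL/dtheta."""
    return theta - lr * grad

theta = np.array([1.0, -2.0])
grad = np.array([10.0, 0.0])
print(sgd_step(theta, grad))  # → [ 0.99 -2.  ]
```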
Updating CNN Weights
The updating of the CNN weights is performed in a similar manner to standard backpropagation, with the following difference: the gradient backpropagated from the contrastive loss and the Fisher layer is given by Eqs. 9 and 10, with the partial derivatives with respect to the descriptors x_i (Eq. 6) inserted in place of the parameter derivatives. Since the CNN part is located below the Fisher Vector layer, the above Fisher Vector gradients are then propagated downwards to update the CNN weights W_C.
4 Experiments
For our experiments, the Siamese network was learnt on the Landmarks dataset. Testing was performed on two independent datasets, Paris and Oxford Buildings, with the mean average precision (mAP) score reported. To test large-scale retrieval, these datasets are combined with 1 million Flickr images, forming the Oxford1M and Paris1M datasets respectively. We followed the standard evaluation procedure and cropped the query images of the Oxford and Paris datasets using the provided bounding boxes. The PCA transformation matrix is trained on an independent dataset to remove any bias.
4.1 Network Details
For the CNN component, the convolutional layers and respective filter weights of the VGG-16 network were used. However, the max-pooling and ReLU layers after the final convolutional layer were removed. Eight clusters were used for the Fisher Vector GMM, with their parameters initialised using the EM algorithm. This resulted in an FV of dimensionality 4096. For retrieval purposes, we then perform PCA or LDA and whitening on the 4096-D Fisher Vector, reducing its dimensionality to 128D, 256D or 512D. In order to learn the PCA or LDA model, when the Oxford Buildings dataset is tested, the Paris dataset is used to build the PCA/LDA model, and vice versa. The contrastive loss margin parameter was set heuristically. We set the learning rate to 0.001, the weight decay to 0.0005 and the momentum to 0.5. Training is performed for at most 30 epochs.
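The PCA + whitening reduction used above can be sketched as follows; an illustrative NumPy version learnt on a stand-in for the independent dataset (the sizes are toy values, not the paper's 4096-D setting):

```python
import numpy as np

def learn_pca_whitening(train_fvs, dim):
    """Learn a PCA + whitening projection on an independent set of
    Fisher vectors (n x D), keeping `dim` principal components."""
    mean = train_fvs.mean(axis=0)
    cov = np.cov(train_fvs - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:dim]
    # Whitening: scale each principal axis by 1/sqrt(eigenvalue).
    P = eigvec[:, order] / np.sqrt(eigval[order] + 1e-9)
    return mean, P

def project(fv, mean, P):
    z = (fv - mean) @ P
    return z / np.linalg.norm(z)   # re-normalise after projection

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 32))   # stand-in for the training FVs
mean, P = learn_pca_whitening(train, dim=8)
z = project(train[0], mean, P)
print(z.shape)  # → (8,)
```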
Fig. 2: (a) Contrastive loss; (b) mAP on the Oxford dataset; (c) mAP on the Paris dataset.
4.2 Mining Non-Matching Examples
There exist significantly more non-matching pairs than matching pairs. Therefore, exhaustive use of non-matching pairs would create a large imbalance between the number of matching and non-matching pairs used for training. In order to tackle this, only a subset of non-matching examples is selected via mining, which allows the selection of only "hard" examples. Here, for each matching pair of images used, the 5 closest non-matching examples to the query are used to form the non-matching pairs.
In this paper, 2000 matching pairs from the Landmarks dataset are randomly chosen. For each matching pair, the 5 closest non-matching examples are then chosen, forming a tuple consisting of: the query example, the matching example, and the 5 non-matching examples. This forms a training set of 12,000 pairs, which is re-mined after every 2000 iterations. In total, each epoch in the training cycle consists of 6000 iterations.
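The per-query mining step (choosing the 5 closest non-matching examples) can be sketched as a nearest-neighbour search in descriptor space; the descriptors below are random stand-ins:

```python
import numpy as np

def mine_hard_negatives(q_desc, neg_descs, n_hard=5):
    """Return indices of the n_hard non-matching descriptors
    closest to the query descriptor (the hardest negatives)."""
    dists = np.linalg.norm(neg_descs - q_desc, axis=1)
    return np.argsort(dists)[:n_hard]

rng = np.random.default_rng(1)
q = np.zeros(16)                        # query descriptor (stand-in)
negatives = rng.normal(size=(100, 16))  # candidate non-matching pool
hard = mine_hard_negatives(q, negatives, n_hard=5)
print(hard.shape)  # → (5,)
```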
In this section, we evaluate the different components of our system in terms of: retrieval performance of the SIAM-FV descriptor across different epochs; projection methods (PCA vs LDA); dimensionality of the SIAM-FV descriptor; and compare the performance to the latest state-of-the-art algorithms.
(a) Oxford; (b) Paris.
The behaviour of the contrastive loss during learning is shown in Fig. 2. It can be seen that the initial 20 epochs give a large reduction in the loss value, with subsequent epochs producing only small further improvements. In terms of the mAP results on the Oxford and Paris test datasets, we find that the greatest improvement is obtained in the initial 5 epochs, with approximately 14-16% improvement in mAP scores. This can be seen in Fig. 2b for the Oxford dataset and Fig. 2c for the Paris dataset. Examples of the images retrieved using the SIAM-FV descriptor on the Oxford and Paris datasets can be seen in Figs. 5 and 6 respectively.
Projection Methods: PCA vs LDA
Fig. 3 shows the mAP results achieved by employing PCA and LDA for dimensionality reduction on the Oxford dataset. For max-pooling aggregation, LDA was previously found to provide better performance, at 80.0%, compared to 76.1% for PCA. However, the converse holds for our SIAM-FV descriptor, which achieves 81.5% with PCA and 77.1% with LDA. This was also found to be the case when sum-pooling was used, with 79.5% for PCA vs 74.8% for LDA. Thus, for the remaining experiments, we employ PCA as our dimensionality reduction method.
Dimensionality of SIAM-FV
Figs. 4a and 4b demonstrate the performance of the SIAM-FV signature when reduced to different dimensionalities via PCA + whitening. As expected, the best performance is obtained at the highest dimensionality, 512D, for both the Oxford and Paris datasets. Crucially, the proposed SIAM-FV has a mAP score that is approximately 2% higher than sum-pooling and 4% higher than max-pooling on the Oxford dataset across all dimensionalities (128D, 256D and 512D). This gain in performance is similar for the Paris dataset, with the SIAM-FV method outperforming both sum-pooling and max-pooling across all dimensionalities.
Comparison with State-of-the-Art
This section compares the performance of the proposed method with state-of-the-art algorithms. Table 1 summarises the results for medium-footprint signatures (4k-512 dimensions). It can be seen that the proposed SIAM-FV representation outperforms most of the prior-art methods. On the Paris dataset, the R-MAC representation provides marginally better performance. Note that R-MAC uses region-based pooling, where deep features are max-pooled over several regions of an image using a multi-scale grid.
Gordo et al. achieved 83.1% on the Oxford dataset. However, they employed a region proposal network and extracted MAC signatures from 256 regions in an image, significantly increasing the extraction complexity of the representation.
We now focus on a comparison of compact representations, which are practicable in large-scale retrieval, as presented in Table 2. The dimensionality of the SIAM-FV descriptor is reduced from 4096 to 128 via PCA. The results show that our method outperforms all presented methods. On the large Oxford1M dataset, SIAM-FV provides a gain of +2.4% over the latest MAC* signature.
Table 2 (excerpt): Ng et al. | 128 | 59.3 | - | - | 59.0 | -
In this paper, we have proposed a robust and discriminative image representation obtained by aggregating deep descriptors using Fisher Vectors. We have also proposed a novel learning method that allows us to simultaneously fine-tune the deep descriptors and adapt the Fisher Vector GMM parameters accordingly. This effectively allows us to perform supervised learning of the Fisher Vector model using matching and non-matching labels by optimising the contrastive loss. The result is a CNN-based Fisher Vector (SIAM-FV) global descriptor. We have also found that PCA is a more suitable dimensionality reduction method than LDA when used with the SIAM-FV representation. We have shown that this model produces significant improvements in retrieval mean average precision scores. On the large-scale datasets Oxford1M and Paris1M, the SIAM-FV representation achieves mAP scores of 62.5% and 63.2% respectively, both superior to the state of the art.
Appendix A Partial Derivatives of G
We find that the partial derivatives of G with respect to w_k, μ_{k,d} and σ_{k,d} all have the same form. So, let λ be any one of w_k, μ_{k,d} or σ_{k,d}. Also, let the numerator of G be denoted as Ĝ and its denominator as ‖Ĝ‖₂, so that G = Ĝ / ‖Ĝ‖₂. Then: ∂G/∂λ = (1/‖Ĝ‖₂) ∂Ĝ/∂λ − Ĝ (Ĝᵀ ∂Ĝ/∂λ) / ‖Ĝ‖₂³.
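The quotient-rule derivative of an L2-normalised vector described above can be verified numerically; a small sketch where u(λ) is an arbitrary stand-in for the unnormalised Fisher vector as a function of one parameter:

```python
import numpy as np

def normalised_grad(u, du):
    """Analytic derivative of G = u/||u||, given du = d u / d lambda:
    dG/dlambda = du/||u|| - u (u . du) / ||u||^3."""
    n = np.linalg.norm(u)
    return du / n - u * (u @ du) / n ** 3

# Numerical check via central differences on a toy u(lambda).
lam0, eps = 0.3, 1e-6
u_of = lambda lam: np.array([np.sin(lam), lam ** 2 + 1.0, 2.0 * lam])
du = np.array([np.cos(lam0), 2.0 * lam0, 2.0])  # exact d u / d lambda

g = lambda lam: u_of(lam) / np.linalg.norm(u_of(lam))
num = (g(lam0 + eps) - g(lam0 - eps)) / (2 * eps)
ana = normalised_grad(u_of(lam0), du)
print(np.allclose(num, ana, atol=1e-6))  # → True
```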
-  R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In Proc. of CVPR, 2016.
-  A. Babenko and V. S. Lempitsky. Aggregating deep convolutional features for image retrieval. CoRR, 2015.
-  M. Bober, S. Husain, S. Paschalakis, and K. Wnukowicz. Improvements to TM6.0 with a robust visual descriptor - proposal from University of Surrey and Visual Atoms. In MPEG Standardisation contribution: ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, M30311, July 2013.
-  A. Gordo, J. Almazán, J. Revaud, and D. Larlus. Deep Image Retrieval: Learning Global Representations for Image Search, pages 241–257. IEEE Computer Society, 2016.
-  R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In Proc. of CVPR 2006, CVPR ’06, pages 1735–1742, Washington, DC, USA, 2006. IEEE Computer Society.
-  S. S. Husain and M. Bober. Improving large-scale image retrieval through robust aggregation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
-  H. Jégou and A. Zisserman. Triangulation embedding and democratic aggregation for image search. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
-  Y. Kalantidis, C. Mellina, and S. Osindero. Cross-dimensional weighting for aggregated deep convolutional features. CoRR, 2015.
-  D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, pages 91–110, 2004.
-  J. Y. H. Ng, F. Yang, and L. S. Davis. Exploiting local features from deep networks for image retrieval. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 53–61, 2015.
-  F. Perronnin and C. R. Dance. Fisher kernels on visual vocabularies for image categorization. In IEEE Conference on Computer Vision and Pattern Recognition, 2007.
-  F. Perronnin and D. Larlus. Fisher vectors meet neural networks: A hybrid classification architecture. In Proc. of CVPR, pages 3743–3752. IEEE Computer Society, 2015.
-  F. Perronnin, Y. Liu, J. Sanchez, and H. Poirier. Large-scale image retrieval with compressed fisher vectors. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3384–3391, 2010.
-  J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In IEEE Conference on Computer Vision and Pattern Recognition, 2007.
-  J. Philbin, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
-  F. Radenović, G. Tolias, and O. Chum. CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples, pages 3–20. IEEE Computer Society, 2016.
-  A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson. Visual instance retrieval with deep convolutional networks. CoRR, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  G. Tolias, R. Sicre, and H. Jégou. Particular object retrieval with integral max-pooling of CNN activations. CoRR, 2015.