1 Introduction
The rise of digital cameras and smartphones, the standardization of computers and multimedia formats, the ubiquity of data storage devices and the technological maturity of network infrastructure have exponentially increased the volume of visual data available online and offline. With this dramatic growth, the need for effective and computationally efficient content search has become increasingly important. Given a large collection of images and videos, the aim is to retrieve individual images and video shots depicting instances of a user-specified object (query). There is a range of important applications for image retrieval, including management of multimedia content, mobile commerce, surveillance and augmented automotive navigation. Performing robust and accurate visual search is challenging due to factors such as changing object viewpoints, scale, partial occlusions, varying backgrounds and imaging conditions. Additionally, today's systems must be highly scalable to accommodate the huge volumes of multimedia data, which can comprise billions of images.
In order to overcome these challenges, a compact and discriminative image representation is required. Typically, this is achieved by aggregating multiple local descriptors from an image into a single high-dimensional global descriptor. The similarity of the visual content in two images is determined using a distance metric (e.g. Hamming or Euclidean distance) between their corresponding global descriptors. Retrieval is accomplished by ranking the images in a collection according to the distance between each image's descriptor and that of the query image.
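As an illustration of this ranking step, the sketch below (our own toy example; the descriptor values and dimensions are invented) ranks database images by the Euclidean distance between global descriptors:

```python
import numpy as np

def rank_by_distance(query_desc, db_descs):
    """Return database indices sorted by Euclidean distance to the query.

    query_desc: (D,) global descriptor of the query image.
    db_descs:   (N, D) global descriptors of the database images.
    """
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(dists)

# Toy database of three 4-D global descriptors.
db = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])
ranking = rank_by_distance(query, db)  # image 0 is closest, then 2, then 1
```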
This paper addresses the task of extracting a global descriptor by aggregating local deep descriptors. We achieve this using a novel combined CNN and Fisher Vector model that is learnt simultaneously. We also show that our proposed model provides significant improvements in retrieval accuracy compared with related state-of-the-art approaches across different descriptor dimensionalities and datasets.
1.1 Related Work
One popular method for generating global descriptors for image matching is the Fisher Vector (FV) method, which aggregates local image descriptors (e.g. SIFT [9]) based on the Fisher Kernel framework. A Gaussian Mixture Model (GMM) is used to model the distribution of local image descriptors, and the global descriptor for an image is obtained by computing and concatenating the gradients of the log-likelihoods with respect to the model parameters. One advantage of the FV approach is its encoding of higher-order statistics, resulting in a more discriminative representation and hence better performance
[11]. The FV model is learnt using unsupervised clustering, and therefore cannot make use of the matching and non-matching labels that are available in image retrieval tasks. One way of overcoming this shortcoming was proposed by Perronnin et al. [12], where a fully connected neural network (NN) was trained using the FV global descriptors as input. Here, the Fisher Vector model was initially learnt in an unsupervised fashion on extracted SIFT features. The FV model then produces input feature vectors for the fully connected NN, which in turn is learnt in a supervised manner using back-propagation.
However, both the SIFT features and the FV model in the above method are unsupervised. An alternative is to replace the low-level SIFT features with deep convolutional descriptors obtained from convolutional neural networks (CNNs) trained on large-scale datasets such as ImageNet. Recent research has shown that image descriptors computed using deep CNNs achieve state-of-the-art performance in image retrieval and classification tasks. Babenko et al. [2] aggregated deep convolutional descriptors to form global image representations: FV, T-Emb and SPoC. The SPoC signature is obtained by sum-pooling of the deep features. Razavian et al. [17] computed an image representation by max-pooling aggregation of the last convolutional layer. The retrieval performance was further improved when the RVD-W method was used for aggregation of CNN-based deep descriptors [6].
All of the above approaches use fixed pre-trained CNNs. However, these CNNs are trained for the purpose of image classification (e.g. the 1000 classes of ImageNet) and may perform suboptimally on the task of image retrieval. To tackle this, Radenovic et al. [16] and Gordo et al. [4] both proposed to use a Siamese CNN with max-pooling for aggregation. The CNN was fine-tuned on an image retrieval dataset. Two types of loss function were considered for optimisation: 1) the contrastive loss function [16] and 2) the triplet loss function [4]. Both were able to achieve significant improvements over existing retrieval mAP scores. However, both of these approaches use max-pooling as the aggregation method. The work proposed in this paper improves on this by employing a Fisher Vector model for aggregation instead of max-pooling. We also consider the alternative method of sum-pooling and compare the different aggregation methods on standard benchmarks.
1.2 Contributions and Overview
The main contribution of this paper is a Siamese deep network that aggregates CNN-based local descriptors using the Fisher Vector model. Importantly, we propose to learn the parameters of the CNN and the Fisher Vector simultaneously using stochastic gradient descent on the contrastive loss function. This allows us to adjust the Fisher Vector model to account for changes in the distribution of the underlying CNN features as they are learnt on image retrieval datasets. We also show that our proposed method improves on the retrieval performance of the following state-of-the-art approaches: Siamese CNN with max-pooling [16] and triplet loss with max-pooling [4]. We show that our approach achieves mAP scores that equal or improve on state-of-the-art results for the Oxford (81.5%) and Paris (82.5%) datasets. Importantly, this was achieved without any of the image segmentation used in [4]. We also provide a new baseline for the retrieval performance of our method when 1 million distractors are included in the test datasets.
2 Deep Fisher Vector Siamese Network
In this section, we describe a novel deep network that learns a deep Fisher Vector representation by simultaneously learning the Fisher Vector model components and the underlying convolutional filter weights in a Siamese network. An overview diagram of the proposed deep Siamese Fisher Vector network is shown in Fig. 1.
Traditionally, a Siamese network consists of two parallel branches that share the same convolutional weights. One branch is fed a query image and the other a reference image; these propagate through the network to yield two global descriptors, which can be compared using the Euclidean distance. Our proposed Siamese network is different in that each branch consists of two components: a CNN that produces deep image descriptors, which are then aggregated via a Fisher Vector layer to produce the final global descriptor.
2.1 CNN-based Deep Descriptors
Suppose the input image is given as I. In order to extract the deep convolutional descriptors from the CNN component, the input image is first passed through a number of convolutional layers. Here, we use convolutional layers with the same structure as the VGG16 [18] network, with the fully connected layers removed.
The CNN is effectively parameterised by the filter weights at each of its convolutional layers. We shall denote the collection of all the CNN filter weights as W_c. Formally, the CNN component can then be described by a function f(I; W_c), whose final layer has D convolutional filters, each producing a convolutional map of size w × h. We then treat the final layer as producing a set of N = w × h deep convolutional features of dimension D.
2.2 Fisher Vectors
In order to aggregate the deep convolutional features, we employ the method of Fisher Vectors. Firstly, let X = {x_n ∈ R^D, n = 1, …, N} be the set of D-dimensional deep convolutional features extracted from an image I. Let p(x | λ) be an image-independent probability density function which models the generative process of the features, where λ represents the parameters of p. A Gaussian Mixture Model (GMM) [13] is used to model the distribution of the convolutional features, where:

p(x | λ) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)
We represent the parameters of the K-component GMM by λ = {π_k, μ_k, Σ_k}_{k=1}^{K}, where π_k, μ_k and Σ_k are respectively the weight, mean vector and covariance matrix of Gaussian k. The covariance matrix of each GMM component is assumed to be diagonal, and its vector of standard deviations is denoted by σ_k. The GMM assigns each descriptor x_n to Gaussian k with the soft-assignment weight q_{nk} given by the posterior probability:

q_{nk} = π_k N(x_n; μ_k, Σ_k) / Σ_{j=1}^{K} π_j N(x_n; μ_j, Σ_j)    (1)
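The soft-assignment step of Eq. 1 can be sketched in NumPy as follows (a minimal illustration with our own variable names; the log-domain computation is a standard numerical-stability choice on our part, not something specified here):

```python
import numpy as np

def gmm_posteriors(X, pi, mu, sigma):
    """Soft-assignment weights q_{nk} (Eq. 1) for diagonal-covariance Gaussians.

    X: (N, D) descriptors; pi: (K,) weights; mu: (K, D) means;
    sigma: (K, D) standard deviations. Returns (N, K) posteriors.
    """
    N, D = X.shape
    diff = X[:, None, :] - mu[None, :, :]                     # (N, K, D)
    # Log of each diagonal Gaussian density, evaluated per (n, k) pair.
    log_norm = -0.5 * (D * np.log(2 * np.pi) + 2 * np.log(sigma).sum(axis=1))
    log_prob = log_norm[None, :] - 0.5 * ((diff / sigma[None, :, :]) ** 2).sum(axis=2)
    log_w = np.log(pi)[None, :] + log_prob                    # (N, K)
    log_w -= log_w.max(axis=1, keepdims=True)                 # stabilise the softmax
    q = np.exp(log_w)
    return q / q.sum(axis=1, keepdims=True)
```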
The GMM can be interpreted as a probabilistic visual vocabulary, where each Gaussian forms a visual word or cluster. The D-dimensional derivative with respect to the mean of Gaussian k is denoted by G_k:

G_k = (1/(N√π_k)) Σ_{n=1}^{N} q_{nk} (x_n − μ_k)/σ_k    (2)

where the division by σ_k is element-wise.
We denote the elements of G_k as g_{kd}, d = 1, …, D. The final FV representation of image I, Φ(I), is obtained by concatenating the gradients for all K Gaussians and L2-normalising, giving Φ(I) = G/‖G‖, with G = [G_1^T, …, G_K^T]^T. The dimensionality of Φ(I) is K × D. Since the FV will be integrated into a Siamese CNN, we shall henceforth refer to Φ(I) as "SIAM-FV", for SIAMese-CNN-based Fisher Vector.
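Combining the soft assignments with the mean gradients of Eq. 2, the construction of the aggregated descriptor can be sketched as follows (illustrative only; we assume plain L2 normalisation, and the posterior matrix q is taken as precomputed):

```python
import numpy as np

def fisher_vector(X, pi, mu, sigma, q):
    """Mean-gradient Fisher vector of Eq. 2, L2-normalised.

    X:  (N, D) local descriptors; pi: (K,) mixture weights;
    mu: (K, D) means; sigma: (K, D) std devs; q: (N, K) posteriors (Eq. 1).
    Returns a K*D-dimensional unit-norm vector.
    """
    N, D = X.shape
    K = pi.shape[0]
    G = np.zeros((K, D))
    for k in range(K):
        # Accumulate per-descriptor gradients w.r.t. the mean of Gaussian k.
        G[k] = (q[:, k, None] * (X - mu[k]) / sigma[k]).sum(axis=0)
        G[k] /= N * np.sqrt(pi[k])
    g = G.ravel()
    return g / np.linalg.norm(g)
```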
2.3 Fisher Vector Partial Derivatives
In this section, the partial derivatives of the Fisher Vector with respect to its underlying parameters (π_k, μ_k, σ_k) and its inputs are given. These partial derivatives will be used for learning the proposed deep net. Firstly, we give the partial derivatives of the element g_{kd} of G_k for some cluster k and dimension d, where q_{nk} is the soft-assignment weight of Eq. 1:
∂g_{kd}/∂π_k = (1/(N√π_k)) Σ_{n=1}^{N} (∂q_{nk}/∂π_k)(x_{nd} − μ_{kd})/σ_{kd} − g_{kd}/(2π_k)    (3)

∂g_{kd}/∂μ_{kd} = (1/(N√π_k)) Σ_{n=1}^{N} [ (∂q_{nk}/∂μ_{kd})(x_{nd} − μ_{kd})/σ_{kd} − q_{nk}/σ_{kd} ]    (4)

∂g_{kd}/∂σ_{kd} = (1/(N√π_k)) Σ_{n=1}^{N} [ (∂q_{nk}/∂σ_{kd})(x_{nd} − μ_{kd})/σ_{kd} − q_{nk}(x_{nd} − μ_{kd})/σ_{kd}² ]    (5)

∂g_{kd}/∂x_{nd} = (1/(N√π_k)) [ (∂q_{nk}/∂x_{nd})(x_{nd} − μ_{kd})/σ_{kd} + q_{nk}/σ_{kd} ]    (6)
The partial derivatives of q_{nk} in the above equations are detailed in Appendix A. Equations 3–5 are used for calculating the gradients of the cluster prior, cluster mean and cluster standard deviation in the FV model. Eq. 6 is used to back-propagate errors to the filter weights in the CNN component. We find that the partial derivatives of the final normalised Fisher Vector elements u_i = g_i/‖G‖ all have the following form, for a generic parameter θ:

∂u_i/∂θ = (1/‖G‖) ∂g_i/∂θ − (g_i/‖G‖³) Σ_j g_j ∂g_j/∂θ    (7)

In order to obtain the exact partial derivative of Φ(I) with respect to a particular parameter, we substitute θ with this parameter, look up the corresponding equation in Eq. 3–6, and substitute it into Eq. 7 above.
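The quotient-rule form of the gradient of an L2-normalised vector can be verified numerically; the snippet below (our own toy example, not from this work) checks the analytic expression against a finite-difference approximation:

```python
import numpy as np

def normalised_grad(g, dg):
    """Gradient of u = g / ||g|| w.r.t. a scalar parameter theta,
    given g (the unnormalised vector) and dg = dg/dtheta."""
    n = np.linalg.norm(g)
    return dg / n - g * (g @ dg) / n ** 3

# Numerical check on a toy unnormalised vector g(t) = [t, t^2] at t = 0.7.
t = 0.7
g = np.array([t, t ** 2])
dg = np.array([1.0, 2 * t])
eps = 1e-6
g2 = np.array([t + eps, (t + eps) ** 2])
numerical = (g2 / np.linalg.norm(g2) - g / np.linalg.norm(g)) / eps
assert np.allclose(normalised_grad(g, dg), numerical, atol=1e-5)
```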
3 Deep Learning of Fisher Vector Parameters
It is possible to learn the Fisher Vector GMM parameters using the EM algorithm on the deep convolutional features. However, this is an unsupervised method that does not make use of the available labelling information. In order to tackle this shortcoming, we propose performing supervised learning of the GMM parameters. To this end, we treat the learning of the GMM parameters as part of the learning process of a DNN.
For the purpose of learning, we are given a training dataset of M pairs of images, each image with resolution W × H. Each pair of training images is associated with a label y_i ∈ {0, 1}, where y_i = 1 denotes a matching pair and y_i = 0 a non-matching pair. We denote the training dataset as T = {(I_i, I′_i, y_i)}_{i=1}^{M}.
Next, we describe the contrastive loss [5] used for learning the proposed FV-CNN network. Firstly, the Euclidean distance is used to measure the difference between two Fisher vectors: d(I, I′) = ‖Φ(I) − Φ(I′)‖.
Now, let the CNN weights be W_c and the set of all the Fisher Vector parameters λ. The loss function is defined as:

L(W_c, λ) = Σ_{i=1}^{M} [ y_i · ½ d_i² + (1 − y_i) · ½ (max{0, τ − d_i})² ]    (8)

where d_i = d(I_i, I′_i) and τ is the heuristically determined margin parameter.
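The loss above can be sketched as follows (a minimal illustration; here y = 1 marks a matching pair, and the default margin value is an arbitrary stand-in for the heuristically chosen τ, not a value taken from this work):

```python
import numpy as np

def contrastive_loss(u, v, y, margin=0.8):
    """Contrastive loss on two global descriptors u and v.

    y = 1 for a matching pair, y = 0 for a non-matching pair.
    `margin` stands in for the heuristic parameter tau (value assumed here).
    """
    d = np.linalg.norm(u - v)
    if y == 1:
        return 0.5 * d ** 2                     # pull matching pairs together
    return 0.5 * max(0.0, margin - d) ** 2      # push non-matching pairs apart

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
loss_match = contrastive_loss(u, v, 1)      # 0.5 * d^2 with d = sqrt(2)
loss_nonmatch = contrastive_loss(u, v, 0)   # zero: d already exceeds the margin
```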
In order to optimise the GMM and cluster weight parameters of the Fisher vector, λ = {π_k, μ_k, σ_k}_{k=1}^{K}, the partial derivatives of the loss L with respect to these parameters are used. For conciseness, we will not write the arguments when referring to the distance function d. So, using the chain rule on L gives:

∂L/∂λ = (∂L/∂d)(∂d/∂λ)    (9)
The first back-propagated partial derivative, ∂L/∂d, determines the amount of error present in the Fisher vectors of matching or non-matching pairs. The partial derivative ∂d/∂λ allows us to adjust the FV model parameters and can similarly be derived using the chain rule, giving:

∂d/∂λ = (1/d)(Φ − Φ′)^T (∂Φ/∂λ − ∂Φ′/∂λ)    (10)

where Φ and Φ′ are the two input Fisher vectors to the distance function, and the partial derivatives ∂Φ/∂λ and ∂Φ′/∂λ are detailed in Section 2.3.
The parameters are then updated by stepping in the direction of the negative gradient, scaled by the learning rate α: λ ← λ − α ∂L/∂λ.
Updating CNN Weights
The updating of the CNN weights is performed in a similar manner to standard back-propagation, with the following difference: the gradients back-propagated from the contrastive loss and the Fisher layer are given by Eq. 9 and Eq. 10, with the partial derivatives with respect to the input features (Eq. 6) inserted in place of the FV parameter derivatives. Since the CNN part is located below the Fisher Vector layer, the above Fisher Vector gradients are then propagated downwards to update the CNN weights W_c.
4 Experiments
For our experiments, the Siamese network was trained on the Landmarks dataset used in [16]. Testing was performed on two independent datasets: Paris [15] and Oxford Buildings [14], with the mean average precision (mAP) score reported. To test large-scale retrieval, these datasets are combined with 1 million Flickr images [3], forming the Oxford1M and Paris1M datasets respectively. We followed the standard evaluation procedure and cropped the query images of the Oxford and Paris datasets with the provided bounding boxes. The PCA transformation matrix is trained on an independent dataset to remove any bias.
4.1 Network Details
For the CNN component, the convolutional layers and respective filter weights of the VGG16 network [18] were used. However, the max-pooling and ReLU layers at the final convolutional layer were removed. Eight clusters were used for the Fisher Vector GMM, with their parameters initialised using the EM algorithm. This resulted in an FV of dimensionality 4096. For retrieval purposes, we then perform PCA or LDA with whitening on the 4096-D Fisher vector, reducing the dimensionality to 128-D, 256-D or 512-D. In order to learn the PCA or LDA model, when the Oxford Buildings dataset is tested, the Paris dataset is used to build the PCA/LDA model, and vice versa. The contrastive loss margin parameter τ was set heuristically. We set the learning rate to 0.001, the weight decay to 0.0005 and the momentum to 0.5. Training is performed for at most 30 epochs.
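The PCA + whitening step can be sketched as follows (a minimal illustration on synthetic data; the exact whitening and renormalisation details used here are our own assumptions):

```python
import numpy as np

def fit_pca_whiten(train_descs, out_dim):
    """Learn a PCA + whitening projection on an independent training set,
    e.g. on Paris descriptors when testing on Oxford (and vice versa)."""
    mean = train_descs.mean(axis=0)
    Xc = train_descs - mean
    # SVD of the centred data; rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = (S ** 2) / (len(train_descs) - 1)
    # Whitening: scale each kept axis by the inverse std along that axis.
    P = Vt[:out_dim] / np.sqrt(eigvals[:out_dim, None] + 1e-12)
    return mean, P

def project(desc, mean, P):
    """Reduce a high-dimensional FV to out_dim dimensions and re-normalise."""
    z = P @ (desc - mean)
    return z / np.linalg.norm(z)
```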
Fig. 2: (a) Contrastive loss; (b) mAP on the Oxford dataset; (c) mAP on the Paris dataset.
4.2 Mining Non-Matching Examples
There exist significantly more non-matching pairs than matching pairs. Therefore, exhaustive use of non-matching pairs would create a large imbalance between the numbers of matching and non-matching pairs used for training. In order to tackle this, only a subset of non-matching examples is selected via the mining procedure used in [16], which selects only "hard" examples. Here, for each matching pair of images, the 5 closest non-matching examples to the query are used to form the non-matching pairs.
In this paper, 2000 matching pairs from the Landmarks dataset are randomly chosen. For each matching pair, the 5 closest non-matching examples are then chosen, forming a tuple consisting of the following: the query example; the matching example; and its 5 non-matching examples. This forms a training set of 12,000 pairs. This set of 12K pairs is re-mined after every 2000 iterations. In total, each epoch in the training cycle consists of 6000 iterations.
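The mining step can be sketched as follows (an illustrative implementation on synthetic descriptors and labels, not the actual Landmarks pipeline):

```python
import numpy as np

def mine_hard_negatives(query_desc, pool_descs, pool_labels, query_label, n_hard=5):
    """Select the n_hard closest non-matching descriptors to the query:
    candidates with a different label, ranked by Euclidean distance."""
    idx = np.where(pool_labels != query_label)[0]          # non-matching only
    dists = np.linalg.norm(pool_descs[idx] - query_desc, axis=1)
    return idx[np.argsort(dists)[:n_hard]]

# Toy pool: index 0 shares the query's label, the rest are non-matching.
pool = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0],
                 [3.0, 0.0], [4.0, 0.0], [5.0, 0.0], [6.0, 0.0]])
labels = np.array([0, 1, 1, 1, 1, 1, 1])
hard = mine_hard_negatives(np.array([0.0, 0.0]), pool, labels, 0, n_hard=3)
```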
4.3 Results
In this section, we evaluate the different components of our system in terms of: the retrieval performance of the SIAM-FV descriptor across training epochs; the projection method (PCA vs LDA); and the dimensionality of the SIAM-FV descriptor. We also compare its performance with the latest state-of-the-art algorithms.
(a) Oxford  (b) Paris 
Learning
The behaviour of the contrastive loss during learning is shown in Fig. 2. It can be seen that the initial 20 epochs give a large reduction in the loss value, with subsequent epochs producing only small further improvements. In terms of the mAP results on the Oxford and Paris test datasets, we find that the greatest improvement is obtained in the initial 5 epochs, with approximately 14–16% improvement in mAP scores. This can be seen in Fig. 2b for the Oxford dataset and Fig. 2c for the Paris dataset. Examples of the images retrieved using the SIAM-FV descriptor for the Oxford and Paris datasets can be seen in Fig. 5 and Fig. 6 respectively.
Projection Methods: PCA vs LDA
Fig. 3 shows the mAP results achieved by employing PCA and LDA for dimensionality reduction on the Oxford dataset. In [16], it was found that for max-pooling aggregation, LDA provided better performance, at 80.0% compared to 76.1% for PCA. However, the converse was found for our SIAM-FV descriptor, which achieves 81.5% with PCA and 77.1% with LDA. This was also the case when sum-pooling was used: 79.5% with PCA vs 74.8% with LDA. For the remaining experiments, we therefore employ PCA for dimensionality reduction.
Dimensionality of SIAM-FV
Fig. 4a,b demonstrates the performance of the SIAM-FV signature when reduced to different dimensionalities via PCA + whitening. As expected, the best performance is obtained at the highest dimensionality, 512-D, for both the Oxford and Paris datasets. Crucially, the proposed SIAM-FV has a mAP score approximately 2% higher than sum-pooling and 4% higher than max-pooling on the Oxford dataset across all dimensionalities (128-D, 256-D and 512-D). The gain in performance is similar for the Paris dataset, with the SIAM-FV method outperforming both sum-pooling and max-pooling across all dimensionalities.
Comparison with StateoftheArt
This section compares the performance of the proposed method with state-of-the-art algorithms. Table 1 summarises the results for medium-footprint signatures (512–4096 dimensions). It can be seen that the proposed SIAM-FV representation outperforms most of the prior-art methods. On the Paris dataset, the R-MAC representation provides marginally better performance. Note that R-MAC uses region-based pooling, where deep features are max-pooled in several regions of an image using a multi-scale grid.
Gordo et al. [4] achieved 83.1% on the Oxford dataset. However, they employed a region proposal network and extracted MAC signatures from 256 regions in an image, significantly increasing the extraction complexity of the representation.
We now focus on a comparison of compact representations, which are practicable in large-scale retrieval, as presented in Table 2. The dimensionality of the SIAM-FV descriptor is reduced from 4096 to 128 via PCA. The results show that our method outperforms all of the presented methods. On the large Oxford1M dataset, SIAM-FV provides a gain of +2.4% over the latest MAC* signature.
Table 1: mAP (%) for medium-footprint signatures.

Method        Size  Oxf5k  Oxf105k  Paris6k
T-Emb [7]     1024  56.0   50.2     –
NetVLAD [1]   4096  71.6   –        79.7
MAC [16]       512  58.3   49.2     72.6
R-MAC [19]     512  66.9   61.6     83.0
CroW [8]       512  68.2   63.2     79.7
MAC* [16]      512  80.0   75.1     82.9
Sum-pooling    512  79.5   75.0     81.3
SIAM-FV        512  81.5   76.6     82.4
Table 2: mAP (%) for compact signatures.

Method            Size  Oxf5k  Oxf105k  Oxf1M  Paris6k  Paris1M
Max-pooling [17]   256  53.3   –        –      67.0     –
SPoC [2]           256  53.1   50.1     –      –        –
MAC [16]           256  56.9   47.8     –      72.4     –
NetVLAD [1]        256  63.5   –        –      73.5     –
CroW [8]           256  65.4   59.3     –      77.9     –
Ng et al. [10]     128  59.3   –        –      59.0     –
MAC* [16]          128  76.8   70.8     60.1   78.8     62.5
Sum-pooling        128  72.6   67.7     57.9   78.4     62.4
SIAM-FV            128  77.3   71.8     62.5   78.9     63.2
5 Conclusions
In this paper, we have proposed a robust and discriminative image representation formed by aggregating deep descriptors using Fisher Vectors. We have also proposed a novel learning method that simultaneously fine-tunes the deep descriptors and adapts the Fisher Vector GMM parameters accordingly. This effectively allows us to perform supervised learning of the Fisher Vector model from matching and non-matching labels by optimising the contrastive loss. The result is a CNN-based Fisher Vector (SIAM-FV) global descriptor. We have also found that PCA is a more suitable dimensionality reduction method than LDA when used with the SIAM-FV representation. We have shown that this model produces significant improvements in retrieval mean average precision scores. On the large-scale Oxford1M and Paris1M datasets, the SIAM-FV representation achieves mAP scores of 62.5% and 63.2% respectively, both superior to the state of the art.
Appendix A Partial Derivatives of q_{nk}
We find that the partial derivatives of q_{nk} with respect to π_k, μ_{kd} and σ_{kd} all have the same form. So, let θ be any one of π_k, μ_{kd} or σ_{kd}. Also, let the numerator of q_{nk} in Eq. 1 be denoted as a_{nk} = π_k N(x_n; μ_k, Σ_k) and its denominator as b_n = Σ_{j=1}^{K} π_j N(x_n; μ_j, Σ_j), so that q_{nk} = a_{nk}/b_n. Then:

∂q_{nk}/∂θ = (1/b_n) ∂a_{nk}/∂θ − (a_{nk}/b_n²) ∂b_n/∂θ

where ∂b_n/∂θ = Σ_{j=1}^{K} ∂(π_j N(x_n; μ_j, Σ_j))/∂θ, with each term obtained by differentiating the Gaussian density.
References

[1] R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[2] A. Babenko and V. S. Lempitsky. Aggregating deep convolutional features for image retrieval. CoRR, 2015.
[3] M. Bober, S. Husain, S. Paschalakis, and K. Wnukowicz. Improvements to TM6.0 with a robust visual descriptor – proposal from University of Surrey and Visual Atoms. In MPEG Standardisation contribution: ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, M30311, July 2013.
[4] A. Gordo, J. Almazán, J. Revaud, and D. Larlus. Deep Image Retrieval: Learning Global Representations for Image Search, pages 241–257. IEEE Computer Society, 2016.
[5] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In Proc. of CVPR 2006, pages 1735–1742, Washington, DC, USA, 2006. IEEE Computer Society.
[6] S. S. Husain and M. Bober. Improving large-scale image retrieval through robust aggregation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[7] H. Jégou and A. Zisserman. Triangulation embedding and democratic aggregation for image search. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[8] Y. Kalantidis, C. Mellina, and S. Osindero. Cross-dimensional weighting for aggregated deep convolutional features. CoRR, 2015.
[9] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, pages 91–110, 2004.
[10] J. Y. H. Ng, F. Yang, and L. S. Davis. Exploiting local features from deep networks for image retrieval. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 53–61, 2015.
[11] F. Perronnin and C. R. Dance. Fisher kernels on visual vocabularies for image categorization. In IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[12] F. Perronnin and D. Larlus. Fisher vectors meet neural networks: A hybrid classification architecture. In Proc. of CVPR, pages 3743–3752. IEEE Computer Society, 2015.
[13] F. Perronnin, Y. Liu, J. Sanchez, and H. Poirier. Large-scale image retrieval with compressed Fisher vectors. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3384–3391, 2010.
[14] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[15] J. Philbin, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[16] F. Radenović, G. Tolias, and O. Chum. CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples, pages 3–20. IEEE Computer Society, 2016.
[17] A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson. Visual instance retrieval with deep convolutional networks. CoRR, 2014.
[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[19] G. Tolias, R. Sicre, and H. Jégou. Particular object retrieval with integral max-pooling of CNN activations. CoRR, 2015.