Fine-tuning CNN Image Retrieval with No Human Annotation

11/03/2017 · by Filip Radenovic, et al.

Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, the compactness of the representation, and the efficiency of search. Training CNNs, whether from scratch or by fine-tuning, requires a large amount of annotated data, and high annotation quality is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automatic manner. Reconstructed 3D models, obtained by state-of-the-art retrieval and structure-from-motion methods, guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and camera positions available from the 3D models, enhance performance in particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms the commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling, and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on standard benchmarks: the Oxford Buildings, Paris, and Holidays datasets.
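The GeM pooling mentioned in the abstract reduces to average pooling at p = 1 and approaches max pooling as p grows. A minimal NumPy sketch of the idea follows; the function name and tensor layout are illustrative (not from the paper), and note that the actual layer learns the exponent p by back-propagation, which this sketch does not do:

```python
import numpy as np

def gem_pool(x, p=3.0, eps=1e-6):
    """Generalized-Mean (GeM) pooling over the spatial dimensions
    of a C x H x W feature map, producing a C-dimensional descriptor.

    p = 1       -> plain average pooling
    p -> +inf   -> approaches max pooling
    """
    x = np.clip(x, eps, None)              # keep bases positive for the fractional power
    return (x ** p).mean(axis=(1, 2)) ** (1.0 / p)

# Illustrative usage on a random "feature map"
feat = np.random.default_rng(0).random((4, 7, 7))
desc = gem_pool(feat, p=3.0)               # shape (4,), one value per channel
```

By the power-mean inequality, the GeM value of each channel always lies between its average-pooled and max-pooled values, which is what makes a single learned p interpolate between the two.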



Code Repositories

- CNNImageRetrieval: Training and evaluating CNNs for Image Retrieval
- Guide to the Content Based Image Retrieval