Fine-tuning CNN Image Retrieval with No Human Annotation
Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, the compactness of the representation, and the efficiency of search. Training CNNs, whether from scratch or by fine-tuning, requires a large amount of annotated data, where high quality of the annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automatic manner. Reconstructed 3D models, obtained by state-of-the-art retrieval and structure-from-motion methods, guide the selection of the training data. We show that both hard positive and hard negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance in particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms the commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling, and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on standard benchmarks: the Oxford Buildings, Paris, and Holidays datasets.
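The GeM pooling layer described in the abstract raises each activation to a power p, averages over the spatial dimensions, and takes the p-th root; p = 1 recovers average pooling, and p → ∞ approaches max pooling. Below is a minimal PyTorch sketch of this idea, not the authors' released implementation; the initial value of p and the clamping epsilon are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-Mean (GeM) pooling sketch.

    f_k = ( (1/|X_k|) * sum_{x in X_k} x^p )^(1/p) per feature channel k.
    p = 1 gives average pooling; large p approaches max pooling.
    The exponent p is a learnable parameter updated by back-propagation.
    """
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))  # learnable pooling exponent (assumed init)
        self.eps = eps  # clamp floor keeps x**p well defined on ReLU feature maps

    def forward(self, x):
        # x: (batch, channels, H, W) activations of the last convolutional layer
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.avg_pool2d(x, kernel_size=(x.size(-2), x.size(-1)))
        return x.pow(1.0 / self.p).flatten(1)  # (batch, channels) global descriptor

# Usage: pool a dummy feature map into a compact descriptor.
feat = torch.relu(torch.randn(2, 512, 7, 7))
desc = F.normalize(GeM()(feat))  # L2-normalized, shape (2, 512)
```

Because the p-th power and root are differentiable, the pooling exponent can be trained jointly with the convolutional weights rather than fixed by hand.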