Semi-Supervised Exploration in Image Retrieval

by Cheng Chang, et al.
Layer 6 AI

We present our solution to the Landmark Image Retrieval Challenge 2019. This challenge was based on the large Google Landmarks Dataset V2 [9]. The goal was to retrieve, for every provided query image, all database images containing the same landmark. Our solution combines global and local models to form an initial KNN graph. We then use a novel extension of the recently proposed graph traversal method EGT [1], referred to as semi-supervised EGT, to refine the graph and retrieve better candidates.




1 Approach


Image retrieval is a fundamental problem in computer vision, where the goal is to rank relevant images from an index set given a query image. Landmark retrieval in particular is an important task, as people often take photographs that contain landmarks. The Google Landmark Retrieval 2019 challenge aims to advance research on this task, introducing the largest worldwide dataset to date and providing a standardized framework for benchmarking. The challenge involves retrieving the top 100 candidates from an index set of 700K images for each of the 100K queries in the test set. A training set of 4.1 million images with over 200K unique landmarks is also provided, which we call Train-V2. Additionally, we use the Google Landmarks V1 [7] train set with 1 million images and 30K unique landmarks, which we call Train-V1.

Figure 1 outlines our pipeline. Global CNN descriptors generate the inner product distances used to build a k-nearest neighbor (KNN) graph G. We then apply the recently proposed EGT algorithm to further improve retrieval. EGT builds trusted paths on G, alternating between exploring neighbors and exploiting the most confident edges. Starting with the query as the only trusted vertex, the explore step adds neighbors of trusted vertices to a priority queue ordered by edge weights. The exploit step then retrieves all vertices reached through edge weights larger than a threshold t; these are called trusted vertices. The path formed is referred to as the trusted path, and the explore/exploit steps are repeated. The motivation is that relevant images may be visually dissimilar, but share a similar image that can "bridge" the gap. However, this approach can fail when no such image exists in the index, limiting exploration. To overcome this, we propose semi-supervised EGT to expand exploration.
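The explore/exploit loop described above can be sketched as follows. This is a simplified, single-query illustration of the traversal idea, not the authors' implementation; the dict-of-edges graph representation and function name are ours:

```python
import heapq

def egt_retrieve(graph, query, threshold, k):
    """Simplified sketch of explore-exploit graph traversal (EGT).

    graph: dict mapping vertex -> list of (neighbor, edge_weight) pairs,
           e.g. a KNN graph with inner-product similarities as weights.
    threshold: edge weight t above which a traversed vertex becomes
           trusted, so that its own neighbors are explored next.
    """
    visited = {query}
    ranked = []                                  # retrieval order
    frontier = []                                # max-heap via negated weights
    for v, w in graph.get(query, []):            # explore from the query
        heapq.heappush(frontier, (-w, v))
    while frontier and len(ranked) < k:
        neg_w, v = heapq.heappop(frontier)       # exploit most confident edge
        if v in visited:
            continue
        visited.add(v)
        ranked.append(v)
        if -neg_w >= threshold:                  # trusted vertex: explore it
            for u, w in graph.get(v, []):
                if u not in visited:
                    heapq.heappush(frontier, (-w, u))
    return ranked
```

Note how a high-weight bridge vertex ('a' below) lets the traversal reach an image ('c') that is not a direct neighbor of the query, which is the behavior semi-supervised EGT extends.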

Figure 1: Overview of our model pipeline: two CNN models (GeM and DIR) are used to extract global descriptors, which we concatenate to obtain the blended global descriptors. Local descriptors are extracted from the DELF-V2 model. Then a query expansion method with spatial verification (denoted QE-SV) is applied to obtain a better KNN graph. To leverage label information from Train-V2, we propose semi-supervised EGT (referred to as SemiSup-EGT).

Semi-Supervised EGT

The main idea of our approach is to leverage label information from Train-V2. For each label l, the corresponding images in Train-V2 are connected to it, resulting in a set of sub-graphs {G_l}. The edges are set to the maximum weight to indicate the highest level of similarity. An example for a particular G_l is shown in the left part of Figure 2. During retrieval, one of the sub-graphs G_l is then added to G, potentially introducing new paths between query and index images. An advantage of this approach is that paths between queries and indices can be formed using the corresponding G_l as a bridge, regardless of their visual similarity.
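Building the sub-graphs {G_l} amounts to a star graph per label: a label vertex connected to each of its training images at maximum edge weight. A minimal sketch, assuming a plain dict-of-edges representation (the `('label', l)` vertex encoding and `MAX_W` constant are our illustrative choices):

```python
MAX_W = 1.0  # maximum edge weight, i.e. the highest level of similarity

def build_label_subgraphs(train_labels):
    """train_labels: dict image_id -> landmark label (Train-V2).

    Returns dict label -> sub-graph G_l, where each G_l connects the
    label vertex to every training image with that label at MAX_W.
    """
    subgraphs = {}
    for img, lab in train_labels.items():
        g = subgraphs.setdefault(lab, {})
        label_vertex = ('label', lab)
        g.setdefault(label_vertex, []).append((img, MAX_W))
        g.setdefault(img, []).append((label_vertex, MAX_W))
    return subgraphs
```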

To connect a query to G_l, we calculate the pair-wise similarity scores between the query and each image in Train-V2. Then, a majority vote on the top-3 candidates' labels selects a particular label l for the query. In the event of a tie, no label is selected and we proceed without using Train-V2; otherwise, the most similar image in Train-V2 with label l is connected to the query with the maximum edge weight. This process is repeated for each of the index images. Figure 2 shows an example of this algorithm. The black paths on the right depict the original EGT graph, and the blue paths on the left depict the new paths introduced through G_l. Two relevant images in the index set, shown in green, share the same label l as the query, and are thus retrieved.
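The top-3 majority vote with tie handling can be sketched as follows (a minimal illustration; the function name and input layout are our assumptions):

```python
from collections import Counter

def assign_label(similarities, train_labels, top=3):
    """similarities: list of (train_image_id, score), sorted by score
    descending. train_labels: dict train_image_id -> landmark label.

    Majority vote over the top-3 candidates' labels. Returns
    (label, bridge_image), or (None, None) on a tie, in which case
    we proceed without using Train-V2.
    """
    top_imgs = [img for img, _ in similarities[:top]]
    counts = Counter(train_labels[img] for img in top_imgs).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None, None                      # tie: no label selected
    label = counts[0][0]
    # most similar Train-V2 image with the winning label is the bridge
    bridge = next(img for img, _ in similarities
                  if train_labels[img] == label)
    return label, bridge
```

The returned bridge image would then be connected to the query with maximum edge weight before running EGT.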

Constructing the graph this way ensures that the connected sub-graph G_l is given precedence when building the trusted paths, consistent with the motivation of EGT, where more trustworthy neighbors are explored with higher priority. Once the graph is built, we apply EGT, skipping retrieval for the Train-V2 images as they are not part of the index set to be retrieved.

Figure 2: Example of semi-supervised EGT on the Google Landmark Challenge 2019 datasets. The query and Train-V2 images are highlighted in blue and purple respectively. For the index image set, relevant images are highlighted in green while irrelevant ones are in red. The trusted paths introduced by semi-supervised EGT are highlighted in blue. Best viewed in color.

2 Experiments

Figure 1 depicts our pipeline. We use two global descriptor models: GeM [8] and DIR [5]. The GeM model is trained with a ResNet-101 backbone [6] pre-trained on ImageNet [3], with GeM pooling and a fully connected whitening layer as described in [8]. All trainable parameters are fine-tuned on the Train-V1 dataset. We concatenate the GeM and DIR vectors to obtain our global descriptors, referred to as the blend model. The top k (k=100) candidates ranked by their inner product with the query under the blend model form our initial KNN graph.
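The blend-and-rank step can be sketched with NumPy as follows. Whether each descriptor is L2-normalized before concatenation is not stated in the text, so the normalization here is our assumption, as are the function and parameter names:

```python
import numpy as np

def blend_topk(gem_q, dir_q, gem_idx, dir_idx, k=100):
    """Concatenate GeM and DIR descriptors into the 'blend' embedding,
    then rank index images by inner product with each query.

    gem_q, dir_q:   (n_query, d1), (n_query, d2) query descriptors
    gem_idx, dir_idx: same layout for the index set
    Returns (order, scores): top-k index ids and their inner products.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    q = np.hstack([l2norm(gem_q), l2norm(dir_q)])        # (n_query, d1+d2)
    idx = np.hstack([l2norm(gem_idx), l2norm(dir_idx)])  # (n_index, d1+d2)
    scores = q @ idx.T                                   # inner products
    order = np.argsort(-scores, axis=1)[:, :k]           # top-k per query
    return order, np.take_along_axis(scores, order, axis=1)
```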

We use DELF-V2 [9] as local descriptors, applying RANSAC-based spatial verification [4] (SV) to re-rank the top 10 index image candidates for each query. Among the re-ranked candidates, we use the two most reliable index images to perform query expansion (QE) [2]. This process is extended to the database side by issuing every index image as a query. This is followed by our semi-supervised EGT approach, which further refines the graph and improves the retrieval mAP.
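A common form of query expansion averages the query descriptor with its most reliable neighbors and reissues the expanded query. A minimal sketch of that idea, assuming the two spatially verified neighbors from the step above (the exact QE variant and weighting used in the pipeline are not specified, so this simple average is illustrative):

```python
import numpy as np

def average_qe(query_vec, index_vecs, verified_ids, n=2):
    """Average query expansion: mix the query descriptor with its n
    most reliable (spatially verified) index descriptors, renormalize,
    and return the expanded query for a second retrieval pass.
    """
    neigh = index_vecs[verified_ids[:n]]               # (n, d)
    expanded = np.vstack([query_vec[None, :], neigh]).mean(axis=0)
    return expanded / np.linalg.norm(expanded)
```

Issuing every index image as a query through the same function yields the database-side expansion mentioned above.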


Method                    Public   Private
Blend                     0.1753   0.2030
Blend+QE-SV               0.2150   0.2290
Blend+QE-SV+EGT           0.2545   0.2672
Blend+QE-SV+SemiSup-EGT   0.2964   0.3218

Table 1: Ablation of the proposed pipeline on the public and private leaderboards (mAP).

We present our pipeline's results on the public and private sets in Table 1. Our baseline is the concatenation of the GeM and DIR embeddings (Blend) followed by inner product retrieval. Applying QE on spatially verified retrieved images (QE-SV) improves performance, and further applying EGT gives another significant improvement. EGT bridges query and index images that share visual similarity with other images but are otherwise dissimilar according to global or local descriptors.

With the proposed semi-supervised EGT, we achieve a further improvement of over 4 points on the public leaderboard and over 5 points on the private leaderboard. Semi-supervised EGT extends EGT to leverage additional labeled data outside the index set, expanding on the idea of traversing trusted edges. Shared label information between Train-V2 and the index set images forms additional bridges between visually dissimilar images, which is difficult in the original unsupervised version of EGT that relies on global and local descriptors alone.


3 Conclusion

In this paper, we described our approach for the 2019 Landmark Retrieval Challenge. We presented a novel semi-supervised approach that extends EGT when an additional labeled dataset is available. The model achieves a very competitive score on the challenge without relying on data cleaning methods.


  • [1] C. Chang, G. Yu, C. Liu, and M. Volkovs. Explore-Exploit Graph Traversal for Image Retrieval. In CVPR, 2019.
  • [2] O. Chum, A. Mikulík, M. Perdoch, and J. Matas. Total Recall II: Query Expansion Revisited. In CVPR, pages 889–896, June 2011.
  • [3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
  • [4] M. A. Fischler and R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM, 24(6):381–395, June 1981.
  • [5] A. Gordo, J. Almazán, J. Revaud, and D. Larlus. Deep Image Retrieval: Learning Global Representations for Image Search. In ECCV, 2016.
  • [6] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, pages 770–778, 2016.
  • [7] H. Noh, A. Araujo, J. Sim, T. Weyand, and B. Han. Large-Scale Image Retrieval with Attentive Deep Local Features. In ICCV, pages 3476–3485, 2017.
  • [8] F. Radenovic, G. Tolias, and O. Chum. Fine-tuning CNN Image Retrieval with No Human Annotation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
  • [9] M. Teichmann, A. Araujo, M. Zhu, and J. Sim. Detect-to-Retrieve: Efficient Regional Aggregation for Image Search. CoRR, abs/1812.01584, 2018.