Data-Efficient Ranking Distillation for Image Retrieval

07/10/2020
by   Zakaria Laskar, et al.

Recent advances in deep learning have led to rapid developments in the field of image retrieval. However, the best performing architectures incur significant computational cost. Recent approaches tackle this issue using knowledge distillation to transfer knowledge from a deeper and heavier architecture to a much smaller network. In this paper we address knowledge distillation for metric learning problems. Unlike previous approaches, our proposed method jointly addresses the following constraints: i) a limited number of queries to the teacher model, ii) a black-box teacher model exposing only the final output representation, and iii) a small fraction of the original training data without any ground-truth labels. In addition, the distillation method does not require the student and teacher to share the same embedding dimensionality. Addressing these constraints reduces computational requirements and dependency on large-scale training datasets, and covers practical scenarios of limited or partial access to private assets such as teacher models or the corresponding training data and labels. The key idea is to augment the original training set with additional samples by performing linear interpolation in the final output representation space. Distillation is then performed in the joint space of original and augmented teacher-student sample representations. Results demonstrate that our approach can match baseline models trained with full supervision. In low training-sample settings, our approach outperforms the fully supervised approach on two challenging image retrieval datasets, ROxford5k and RParis6k <cit.>, with the least possible teacher supervision.
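The abstract does not give implementation details, but the core idea admits a short sketch: interpolate pairs of embeddings in the output representation space to create augmented samples, then distill on the joint set of original and augmented representations. Below is a minimal PyTorch sketch under those assumptions; the helper names (`interpolate_embeddings`, `ranking_distillation_loss`) are hypothetical, and matching pairwise similarity matrices is used here only as one plausible way to handle differing teacher/student dimensionalities. The paper's actual loss and sampling scheme may differ.

```python
import torch
import torch.nn.functional as F


def interpolate_embeddings(emb, num_aug, alpha=1.0):
    """Create augmented samples by linear interpolation between random pairs
    of embeddings. Returns the augmented embeddings plus the pair indices and
    coefficients, so the same interpolation can be replayed on another model."""
    n = emb.size(0)
    idx_a = torch.randint(0, n, (num_aug,), device=emb.device)
    idx_b = torch.randint(0, n, (num_aug,), device=emb.device)
    lam = torch.distributions.Beta(alpha, alpha).sample((num_aug, 1)).to(emb.device)
    aug = lam * emb[idx_a] + (1.0 - lam) * emb[idx_b]
    return aug, (idx_a, idx_b, lam)


def ranking_distillation_loss(student_emb, teacher_emb, num_aug=32):
    """Distill in the joint space of original and interpolated embeddings by
    matching pairwise cosine-similarity structure, which works even when the
    student and teacher embedding dimensionalities differ."""
    # Interpolate teacher embeddings, then replay the same mixing on the student.
    t_aug, (ia, ib, lam) = interpolate_embeddings(teacher_emb, num_aug)
    s_aug = lam * student_emb[ia] + (1.0 - lam) * student_emb[ib]

    t_all = F.normalize(torch.cat([teacher_emb, t_aug]), dim=1)
    s_all = F.normalize(torch.cat([student_emb, s_aug]), dim=1)

    # Pairwise similarity matrices in each representation space.
    t_sim = t_all @ t_all.t()
    s_sim = s_all @ s_all.t()
    return F.mse_loss(s_sim, t_sim)
```

In this reading, the teacher is queried once per available training image to cache its embeddings, and all further supervision comes from interpolating those cached representations, which keeps the number of teacher queries and the amount of original data small.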


