DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval

06/24/2021
by Giorgos Kordopatis-Zilos et al.

In this paper, we address the problem of high-performance and computationally efficient content-based video retrieval in large-scale datasets. Current methods typically fall into one of two categories: (i) fine-grained approaches that employ spatio-temporal representations and similarity calculations, achieving high performance at a high computational cost, or (ii) coarse-grained approaches that represent/index videos as global vectors, in which the spatio-temporal structure is lost, providing low performance but also low computational cost. In this work, we propose a Knowledge Distillation framework, called Distill-and-Select (DnS), which, starting from a well-performing fine-grained Teacher Network, learns: a) Student Networks at different retrieval performance and computational efficiency trade-offs, and b) a Selection Network that, at test time, rapidly directs samples to the appropriate student in order to maintain both high retrieval performance and high computational efficiency. We train several students with different architectures, arriving at different trade-offs between performance and efficiency, i.e., speed and storage requirements, including fine-grained students that store/index videos using binary representations. Importantly, the proposed scheme allows Knowledge Distillation on large, unlabelled datasets, which leads to well-performing students. We evaluate DnS on five public datasets across three different video retrieval tasks and demonstrate a) that our students achieve state-of-the-art performance in several cases, and b) that the DnS framework provides an excellent trade-off between retrieval performance, computational speed, and storage space. In specific configurations, our method achieves mAP similar to that of the teacher while being 20 times faster and requiring 240 times less storage space. Our collected dataset and implementation are publicly available: https://github.com/mever-team/distill-and-select.
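To make the routing idea concrete, below is a minimal PyTorch-style sketch of how a selection network could direct each query-target pair either to a cheap coarse similarity (global vectors) or to an expensive fine-grained student (spatio-temporal similarity). All names here (SelectorNetwork, coarse_student, fine_student, retrieve) are hypothetical illustrations, not the authors' implementation; refer to the linked repository for the actual code.

```python
# A minimal sketch of DnS-style selective routing, assuming hypothetical
# module names. The real implementation lives at
# https://github.com/mever-team/distill-and-select.
import torch
import torch.nn as nn


class SelectorNetwork(nn.Module):
    """Scores a (query, target) pair; a high score means the pair is
    sent to the expensive fine-grained student for re-evaluation."""

    def __init__(self, dim=512):
        super().__init__()
        # Input: two global descriptors plus the coarse similarity score.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim + 1, 100),
            nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, q_vec, t_vec, coarse_sim):
        x = torch.cat([q_vec, t_vec, coarse_sim.unsqueeze(-1)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)


def retrieve(query, targets, coarse_student, fine_student, selector,
             threshold=0.5):
    """Route each pair: cheap coarse similarity by default, expensive
    fine-grained similarity only when the selector asks for it."""
    q_vec = coarse_student(query)  # one global vector per video
    sims = []
    for t in targets:
        t_vec = coarse_student(t)
        s_coarse = torch.cosine_similarity(q_vec, t_vec, dim=-1)
        if selector(q_vec, t_vec, s_coarse) > threshold:
            # Fine-grained spatio-temporal similarity (slow, accurate).
            sims.append(fine_student(query, t))
        else:
            sims.append(s_coarse)
    return torch.stack(sims)
```

Under this scheme, most database videos are scored with a single cosine similarity between pre-computed global vectors, and only the hard cases selected at test time pay the fine-grained cost, which is how the framework trades a small amount of accuracy for large speed and storage savings.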



Related research

07/19/2022 · Context Unaware Knowledge Distillation for Image Retrieval
Existing data-dependent hashing methods use large backbone networks with...

01/18/2022 · It's All in the Head: Representation Knowledge Distillation through Classifier Sharing
Representation knowledge distillation aims at transferring rich informat...

11/19/2018 · Self-Referenced Deep Learning
Knowledge distillation is an effective approach to transferring knowledg...

12/20/2022 · Fine-Grained Distillation for Long Document Retrieval
Long document retrieval aims to fetch query-relevant documents from a la...

08/20/2019 · ViSiL: Fine-grained Spatio-Temporal Video Similarity Learning
In this paper we introduce ViSiL, a Video Similarity Learning architectu...

07/10/2020 · Data-Efficient Ranking Distillation for Image Retrieval
Recent advances in deep learning have led to rapid developments in the f...

09/16/2020 · Collaborative Group Learning
Collaborative learning has successfully applied knowledge transfer to gu...
