Probabilistic Knowledge Transfer for Deep Representation Learning

03/28/2018
by Nikolaos Passalis, et al.

Knowledge Transfer (KT) techniques tackle the problem of transferring the knowledge encoded in a large and complex neural network into a smaller and faster one. However, existing KT methods are tailored to classification tasks and cannot be used efficiently for other representation learning tasks. This paper proposes a novel knowledge transfer technique that trains a student model to maintain the same amount of mutual information between the learned representation and a set of (possibly unknown) labels as the teacher model. Apart from outperforming existing KT techniques, the proposed method overcomes several limitations of existing approaches, providing new insight into KT as well as enabling novel KT applications, ranging from knowledge transfer from handcrafted feature extractors to cross-modal KT from the textual modality into the representation extracted from the visual modality of the data.
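The core idea of matching what the teacher's representation "knows" about the data can be sketched as a probabilistic matching between teacher and student feature spaces. Below is a minimal NumPy illustration, assuming a cosine-kernel estimate of pairwise sample probabilities in each batch and a KL-divergence objective between the two resulting distributions; the function names and the kernel choice are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def cosine_similarity_matrix(x):
    # L2-normalize rows, compute pairwise cosine similarities,
    # and shift them from [-1, 1] into [0, 1] so they can act as kernel values.
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    return (x @ x.T + 1.0) / 2.0

def probability_matrix(s):
    # Turn each row of similarities into a conditional probability
    # distribution over the other samples in the batch.
    np.fill_diagonal(s, 0.0)  # a sample should not "select" itself
    return s / (np.sum(s, axis=1, keepdims=True) + 1e-8)

def pkt_loss(teacher_feats, student_feats, eps=1e-8):
    # KL divergence from the student's pairwise distribution to the
    # teacher's: minimized when the student induces the same geometry
    # (and hence carries the same information about the samples).
    p = probability_matrix(cosine_similarity_matrix(teacher_feats))
    q = probability_matrix(cosine_similarity_matrix(student_feats))
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

Because the loss depends only on pairwise relations between samples, the teacher and student representations may have different dimensionalities, which is what makes transfer from handcrafted feature extractors or from another modality possible in this framing.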


Related research

06/13/2023 - Enhanced Multimodal Representation Learning with Cross-modal KD
This paper explores the tasks of leveraging auxiliary modalities which a...

04/11/2019 - Variational Information Distillation for Knowledge Transfer
Transferring knowledge from a teacher neural network pretrained on the s...

12/15/2020 - Wasserstein Contrastive Representation Distillation
The primary goal of knowledge distillation (KD) is to encapsulate the in...

11/25/2022 - XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video Representation Learning
We present XKD, a novel self-supervised framework to learn meaningful re...

09/08/2022 - Cross-Modal Knowledge Transfer Without Task-Relevant Source Data
Cost-effective depth and infrared sensors as alternatives to usual RGB s...

11/21/2021 - TraVLR: Now You See It, Now You Don't! Evaluating Cross-Modal Transfer of Visio-Linguistic Reasoning
Numerous visio-linguistic (V+L) representation learning methods have bee...

02/04/2023 - TAP: The Attention Patch for Cross-Modal Knowledge Transfer from Unlabeled Data
This work investigates the intersection of cross modal learning and semi...
