Data-Free Knowledge Transfer: A Survey

12/31/2021
by Yuang Liu, et al.

In the last decade, many deep learning models have been well trained and have achieved great success in various fields of machine intelligence, especially computer vision and natural language processing. To better leverage the potential of these well-trained models in intra-domain or cross-domain transfer learning situations, knowledge distillation (KD) and domain adaptation (DA) have been proposed and have become research highlights. Both aim to transfer useful information from a well-trained model with the help of its original training data. However, the original data is often unavailable due to privacy, copyright, or confidentiality concerns. Recently, the data-free knowledge transfer paradigm has attracted increasing attention, as it distills valuable knowledge from well-trained models without requiring access to the training data. It mainly comprises data-free knowledge distillation (DFKD) and source data-free domain adaptation (SFDA). On the one hand, DFKD aims to transfer the intra-domain knowledge of the original data from a cumbersome teacher network to a compact student network for model compression and efficient inference. On the other hand, the goal of SFDA is to reuse the cross-domain knowledge stored in a well-trained source model and adapt it to a target domain. In this paper, we provide a comprehensive survey on data-free knowledge transfer from the perspectives of knowledge distillation and unsupervised domain adaptation, to give readers a better understanding of the current research status and ideas. The applications and challenges of the two areas are briefly reviewed. Furthermore, we offer some insights into directions for future research.
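To make the DFKD idea concrete, below is a minimal PyTorch-style sketch of one common data-free distillation strategy, in which a generator synthesizes pseudo-inputs from noise and the compact student is trained to match the frozen teacher's soft predictions on them. The tiny MLP architectures, the adversarial generator objective, and all hyperparameters are illustrative assumptions, not details taken from the survey.

```python
# Minimal sketch of adversarial data-free knowledge distillation (DFKD).
# All sizes, models, and hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()  # well-trained, frozen
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))          # compact network
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 32))         # noise -> pseudo-samples
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher weights stay fixed; no original data is ever used

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
T = 4.0  # distillation temperature

for step in range(200):
    z = torch.randn(64, 8)

    # Generator step: synthesize inputs on which teacher and student disagree the most.
    x = generator(z)
    g_loss = -F.kl_div(F.log_softmax(student(x) / T, dim=1),
                       F.softmax(teacher(x) / T, dim=1), reduction="batchmean")
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Student step: match the teacher's soft predictions on the synthesized inputs.
    x = generator(z).detach()
    with torch.no_grad():
        t_logits = teacher(x)
    s_loss = F.kl_div(F.log_softmax(student(x) / T, dim=1),
                      F.softmax(t_logits / T, dim=1), reduction="batchmean") * T * T
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```

The sign flip between the two losses is what makes this sketch adversarial: the generator seeks pseudo-inputs where teacher and student diverge, and the student then closes that gap, so knowledge is transferred without ever touching the original training set.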

Related research:

04/29/2021  Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer
01/15/2021  Data Impressions: Mining Deep Models to Extract Samples for Data-free Applications
02/23/2023  A Comprehensive Survey on Source-free Domain Adaptation
04/01/2016  Adapting Models to Signal Degradation using Distillation
07/17/2023  Domain Knowledge Distillation from Large Language Model: An Empirical Study in the Autonomous Driving Domain
06/07/2020  ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression
06/03/2023  Deep Classifier Mimicry without Data Access
