Large language models (LLMs) have shown remarkable capabilities in langu...
Generative modeling has recently undergone remarkable advancements, prim...
Structural pruning enables model acceleration by removing structurally-g...
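Although the snippet above is cut off, the core idea of structural pruning can be shown with a small sketch: whole convolutional filters are removed (here ranked by L1 norm, a common heuristic, not necessarily this paper's criterion), and the next layer's input channels are trimmed to match. All names and the keep ratio are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, next_conv: nn.Conv2d, keep_ratio: float = 0.5):
    """Drop the output channels of `conv` with the smallest L1 norms,
    and remove the matching input channels of `next_conv`."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # Rank filters by the L1 norm of their weights.
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = torch.topk(scores, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep]
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep]

    successor = nn.Conv2d(n_keep, next_conv.out_channels, next_conv.kernel_size,
                          next_conv.stride, next_conv.padding,
                          bias=next_conv.bias is not None)
    successor.weight.data = next_conv.weight.data[:, keep]
    if next_conv.bias is not None:
        successor.bias.data = next_conv.bias.data
    return pruned, successor

# Toy usage on two coupled conv layers: halving the channels of the first
# layer forces a matching change in the second, which is exactly the
# structural coupling that makes this kind of pruning nontrivial.
c1, c2 = nn.Conv2d(3, 16, 3, padding=1), nn.Conv2d(16, 32, 3, padding=1)
p1, p2 = prune_conv_channels(c1, c2, keep_ratio=0.5)
print(p2(p1(torch.randn(1, 3, 8, 8))).shape)  # torch.Size([1, 32, 8, 8])
```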
In this paper, we explore a new knowledge-amalgamation problem, termed F...
Data-free knowledge distillation (DFKD) conducts knowledge distillation ...
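As a rough sketch of the DFKD setup (not any specific paper's method): a generator synthesizes pseudo-inputs from noise, and the student is trained to match a frozen teacher on them; here the generator is additionally pushed to find inputs where the two disagree, one common adversarial variant. All modules, shapes, and hyperparameters below are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks; any classifier/generator pair would do.
teacher = nn.Linear(784, 10)
student = nn.Linear(784, 10)
generator = nn.Sequential(nn.Linear(64, 784), nn.Tanh())  # noise -> pseudo-inputs

for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher stays frozen throughout

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def disagreement(fake):
    """KL divergence between teacher and student predictions on synthesized inputs."""
    return F.kl_div(F.log_softmax(student(fake), dim=1),
                    F.softmax(teacher(fake), dim=1), reduction='batchmean')

for step in range(100):
    z = torch.randn(32, 64)

    # (1) Generator step: synthesize inputs on which student and teacher disagree.
    g_loss = -disagreement(generator(z))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # (2) Student step: distill from the teacher on the synthesized inputs.
    s_loss = disagreement(generator(z).detach())
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```

The key property is that no original training data ever appears: the only supervision comes from the teacher's responses to synthesized inputs.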
Knowledge amalgamation (KA) is a novel deep model reusing task aiming to...
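A minimal sketch of the KA setting, under the assumption that two frozen teachers specialize in disjoint label sets and a single compact student learns the union of their label spaces from unlabeled transfer data; all shapes and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two hypothetical teachers with disjoint specialties (5 classes each);
# the student covers the union of their label spaces (10 classes).
teacher_a = nn.Linear(784, 5)
teacher_b = nn.Linear(784, 5)
student = nn.Linear(784, 10)
for t in (teacher_a, teacher_b):
    for p in t.parameters():
        p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(32, 784)          # stand-in for unlabeled transfer data
    s_logits = student(x)
    # Each teacher supervises its own slice of the student's output.
    loss = (
        F.kl_div(F.log_softmax(s_logits[:, :5], dim=1),
                 F.softmax(teacher_a(x), dim=1), reduction='batchmean')
        + F.kl_div(F.log_softmax(s_logits[:, 5:], dim=1),
                   F.softmax(teacher_b(x), dim=1), reduction='batchmean')
    )
    opt.zero_grad(); loss.backward(); opt.step()
```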
Data-free knowledge distillation (DFKD) has recently been attracting inc...
Knowledge distillation (KD) aims to craft a compact student model that i...
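The standard soft-target formulation of KD (Hinton et al.) can be written compactly as a temperature-scaled KL term mixed with ordinary cross-entropy; the temperature T and mixing weight alpha below are typical but arbitrary choices, not values from this paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss: temperature-scaled KL between
    teacher and student distributions plus a hard-label term."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * (T * T)   # T^2 rescales gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy check with random logits for a 10-class problem.
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
kd_loss(s, t, y).backward()
```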
Model inversion, whose goal is to recover training data from a pre-train...
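A bare-bones sketch of model inversion: starting from noise, the input itself is optimized to maximize a target class score under a frozen classifier, with a total-variation term for smoothness. The tiny stand-in network and all hyperparameters are illustrative; in practice the model would be pretrained, since inverting random weights recovers nothing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier; in practice this would be a pretrained network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
for p in model.parameters():
    p.requires_grad_(False)

target_class = 3
x = torch.randn(1, 3, 32, 32, requires_grad=True)  # start from pure noise
opt = torch.optim.Adam([x], lr=0.1)

for step in range(200):
    logits = model(x)
    # Total variation: penalize differences between neighboring pixels.
    tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() \
       + (x[..., 1:] - x[..., :-1]).abs().mean()
    # Push the classifier toward the target class while keeping x smooth.
    loss = F.cross_entropy(logits, torch.tensor([target_class])) + 1e-2 * tv
    opt.zero_grad(); loss.backward(); opt.step()
```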
It is an innate ability for humans to imagine something only according t...
Knowledge Distillation (KD) has made remarkable progress in the last few...
An increasing number of well-trained deep networks have been released on...