DE-RRD: A Knowledge Distillation Framework for Recommender System

12/08/2020
by   SeongKu Kang, et al.

Recent recommender systems have started to employ knowledge distillation, a model compression technique that distills knowledge from a cumbersome model (teacher) to a compact model (student), to reduce inference latency while maintaining performance. State-of-the-art methods focus only on making the student accurately imitate the teacher's predictions; however, the predictions reveal the teacher's knowledge only incompletely. In this paper, we propose DE-RRD, a novel knowledge distillation framework for recommender systems that enables the student to learn both from the latent knowledge encoded in the teacher and from the teacher's predictions. Concretely, DE-RRD consists of two methods: 1) Distillation Experts (DE), which directly transfers the latent knowledge from the teacher; DE exploits "experts" and a novel expert selection strategy to effectively distill the teacher's vast knowledge to a student with limited capacity. 2) Relaxed Ranking Distillation (RRD), which transfers the knowledge revealed by the teacher's predictions while considering relaxed ranking orders among items. Our extensive experiments show that DE-RRD outperforms state-of-the-art competitors and achieves performance comparable to, or even better than, the teacher model with faster inference.
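The abstract only names the two components, so the sketch below illustrates one way they could be realized in PyTorch. It is a minimal sketch, not the paper's exact formulation: the module and function names (DistillationExperts, rrd_loss), the hyperparameters (number of experts, hidden size, temperature, list lengths), and the use of Gumbel-Softmax expert selection with an MSE reconstruction loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DistillationExperts(nn.Module):
    """Sketch of DE: reconstruct the teacher's latent representation from the
    student's via small expert networks, with a selection network deciding
    which expert handles each entity (user or item)."""

    def __init__(self, student_dim, teacher_dim, num_experts=5, hidden=64, tau=1.0):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(student_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, teacher_dim))
            for _ in range(num_experts)
        ])
        self.selector = nn.Linear(teacher_dim, num_experts)
        self.tau = tau  # Gumbel-Softmax temperature (illustrative)

    def forward(self, student_repr, teacher_repr):
        # Selection distribution conditioned on the teacher representation.
        weights = F.gumbel_softmax(self.selector(teacher_repr), tau=self.tau)  # (B, E)
        # Each expert maps the student representation into the teacher's space.
        recon = torch.stack([e(student_repr) for e in self.experts], dim=1)    # (B, E, T)
        recon = (weights.unsqueeze(-1) * recon).sum(dim=1)                     # (B, T)
        # Distillation loss: match the teacher's latent representation.
        return F.mse_loss(recon, teacher_repr)


def rrd_loss(student_scores, interesting_idx, uninteresting_idx):
    """Sketch of a relaxed list-wise ranking loss in the spirit of RRD:
    the student should reproduce the teacher's order over a few top
    ("interesting") items, while sampled low-ranked ("uninteresting") items
    only need to score below all of them; their internal order is ignored.

    student_scores:    (B, num_items) student predictions
    interesting_idx:   (B, K) item indices sorted by the teacher's ranking
    uninteresting_idx: (B, L) sampled low-ranked item indices
    """
    s_int = student_scores.gather(1, interesting_idx)      # (B, K)
    s_unint = student_scores.gather(1, uninteresting_idx)  # (B, L)

    # Denominator at rank k: interesting items at ranks >= k plus all
    # uninteresting items (whose relative order is "relaxed").
    exp_unint_sum = s_unint.exp().sum(dim=1, keepdim=True)   # (B, 1)
    tail_int = s_int.exp().flip(1).cumsum(1).flip(1)         # (B, K), sum over ranks >= k
    log_prob = s_int - (tail_int + exp_unint_sum).log()      # (B, K)
    return -log_prob.sum(dim=1).mean()
```

In this reading, the two losses would be added to the student's base recommendation loss with weighting hyperparameters; the weights and sampling scheme for uninteresting items are further assumptions not specified in the abstract.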

Related research

09/19/2018 · Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System
We propose a novel way to train ranking models, such as recommender syst...

09/08/2021 · Dual Correction Strategy for Ranking Distillation in Top-N Recommender System
Knowledge Distillation (KD), which transfers the knowledge of a well-tra...

11/27/2022 · Unbiased Knowledge Distillation for Recommendation
As a promising solution for model compression, knowledge distillation (K...

07/16/2019 · Light Multi-segment Activation for Model Compression
Model compression has become necessary when applying neural networks (NN...

06/05/2021 · Bidirectional Distillation for Top-K Recommender System
Recommender systems (RS) have started to employ knowledge distillation, ...

06/16/2021 · Topology Distillation for Recommender System
Recommender Systems (RS) have employed knowledge distillation which is a...

11/13/2019 · Collaborative Distillation for Top-N Recommendation
Knowledge distillation (KD) is a well-known method to reduce inference l...
