Dual Correction Strategy for Ranking Distillation in Top-N Recommender System

09/08/2021
by Youngjune Lee, et al.

Knowledge Distillation (KD), which transfers the knowledge of a well-trained large model (teacher) to a small model (student), has become an important area of research for the practical deployment of recommender systems. Recently, Relaxed Ranking Distillation (RRD) has shown that distilling the ranking information in the recommendation list significantly improves performance. However, the method still has limitations: 1) it does not fully utilize the prediction errors of the student model, which makes its training less efficient than it could be, and 2) it distills only user-side ranking information, which provides an insufficient view under sparse implicit feedback. This paper presents the Dual Correction strategy for Distillation (DCD), which transfers the ranking information from the teacher model to the student model more efficiently. Most importantly, DCD uses the discrepancy between the teacher's and the student's predictions to decide which knowledge to distill. In doing so, DCD provides learning guidance tailored to "correcting" what the student model has failed to predict accurately. This correction process is applied to transfer the ranking information from the item side as well as the user side, addressing the sparsity of implicit user feedback. Our experiments show that the proposed method outperforms state-of-the-art baselines, and ablation studies validate the effectiveness of each component.
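The abstract describes two mechanisms: choosing what to distill based on the discrepancy between teacher and student predictions, and applying that selection on both the user side and the item side. The following is a minimal sketch of the discrepancy-based selection for a single user; it is not the authors' implementation, and the rank-gap heuristic, function names, and the cutoff k are illustrative assumptions. The same selection could be run item-side by scoring users for a fixed item.

```python
# Minimal sketch (not the authors' code) of discrepancy-based selection of
# distillation targets. The rank-gap heuristic and the cutoff k are assumptions.
import numpy as np

def rank_of(scores):
    """Return the rank position (0 = highest score) of each item."""
    order = np.argsort(-scores)            # indices sorted by descending score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))  # invert the permutation
    return ranks

def discrepancy_correction_targets(teacher_scores, student_scores, k=3):
    """Pick the items whose student rank deviates most from the teacher rank.

    Items the teacher ranks high but the student ranks low are treated as
    underestimated (push up); the opposite case is overestimated (push down).
    """
    t_rank = rank_of(teacher_scores)
    s_rank = rank_of(student_scores)
    gap = s_rank - t_rank                   # > 0: student ranks the item too low
    underestimated = np.argsort(-gap)[:k]   # candidates to push up
    overestimated = np.argsort(gap)[:k]     # candidates to push down
    return underestimated, overestimated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.normal(size=20)  # teacher scores for one user's candidate items
    student = rng.normal(size=20)  # student scores for the same items
    up, down = discrepancy_correction_targets(teacher, student, k=3)
    print("items to push up:  ", up)
    print("items to push down:", down)
```

In the paper's framing, the selected items would feed a ranking-distillation loss so that gradient signal concentrates on the predictions the student currently gets wrong; the hard top-k selection above is only one simple way to instantiate that idea.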


Related research

12/08/2020  DE-RRD: A Knowledge Distillation Framework for Recommender System
Recent recommender systems have started to employ knowledge distillation...

09/19/2018  Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System
We propose a novel way to train ranking models, such as recommender syst...

03/02/2023  Distillation from Heterogeneous Models for Top-K Recommendation
Recent recommender systems have shown remarkable performance by using an...

07/11/2019  Privileged Features Distillation for E-Commerce Recommendations
Features play an important role in most prediction tasks of e-commerce r...

11/13/2019  Collaborative Distillation for Top-N Recommendation
Knowledge distillation (KD) is a well-known method to reduce inference l...

02/20/2022  Cross-Task Knowledge Distillation in Multi-Task Recommendation
Multi-task learning (MTL) has been widely used in recommender systems, w...

07/15/2021  An Educational System for Personalized Teacher Recommendation in K-12 Online Classrooms
In this paper, we propose a simple yet effective solution to build pract...
