Ensemble Modeling with Contrastive Knowledge Distillation for Sequential Recommendation

04/28/2023
by Hanwen Du, et al.

Sequential recommendation aims to capture users' dynamic interests and predict the next item a user will prefer. Most sequential recommendation methods use a deep neural network as the sequence encoder to generate user and item representations. Existing works mainly center on designing a stronger sequence encoder; however, few attempts have been made to train an ensemble of networks as sequence encoders, which is more powerful than a single network because an ensemble of parallel networks can yield diverse prediction results and hence better accuracy. In this paper, we present Ensemble Modeling with contrastive Knowledge Distillation for sequential recommendation (EMKD). Our framework adopts multiple parallel networks as an ensemble of sequence encoders and recommends items based on the output distributions of all these networks. To facilitate knowledge transfer between the parallel networks, we propose a novel contrastive knowledge distillation approach, which transfers knowledge at the representation level via Intra-network Contrastive Learning (ICL) and Cross-network Contrastive Learning (CCL), and at the logits level via Knowledge Distillation (KD), minimizing the Kullback-Leibler divergence between the output distributions of the teacher network and the student network. To leverage contextual information, we train the primary masked item prediction task alongside an auxiliary attribute prediction task in a multi-task learning scheme. Extensive experiments on public benchmark datasets show that EMKD achieves significant improvements over state-of-the-art methods. Moreover, we demonstrate that our ensemble method is a generalized approach that can also improve the performance of other sequential recommenders. Our code is available at this link: https://github.com/hw-du/EMKD.

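As a rough illustration of the two levels of knowledge transfer described in the abstract, the sketch below shows a logits-level KL-divergence distillation term and a cross-network contrastive term between the sequence representations of two parallel encoders. This is a minimal PyTorch-style sketch: the function names, temperature parameters, and the InfoNCE-style formulation of the contrastive loss are assumptions for illustration, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, tau=1.0):
    """Logits-level distillation (sketch): KL divergence between the teacher's
    and the student's softened output distributions over candidate items.
    The temperature tau is an assumed hyperparameter."""
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (tau ** 2)

def cross_network_contrastive_loss(h_a, h_b, tau=0.1):
    """Representation-level transfer (sketch): sequence representations of the
    same user from two parallel encoders form the positive pair; other
    in-batch representations serve as negatives (InfoNCE-style, assumed)."""
    h_a = F.normalize(h_a, dim=-1)
    h_b = F.normalize(h_b, dim=-1)
    logits = h_a @ h_b.t() / tau                 # (batch, batch) similarity matrix
    labels = torch.arange(h_a.size(0), device=h_a.device)
    return F.cross_entropy(logits, labels)
```

In the full framework, terms like these would presumably be summed with the masked item prediction and attribute prediction losses, with each parallel network alternately acting as teacher and student.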

research
10/23/2019

Contrastive Representation Distillation

Often we wish to transfer representational knowledge from one neural net...
research
09/01/2021

Memory Augmented Multi-Instance Contrastive Predictive Coding for Sequential Recommendation

The sequential recommendation aims to recommend items, such as products,...
research
02/20/2022

Cross-Task Knowledge Distillation in Multi-Task Recommendation

Multi-task learning (MTL) has been widely used in recommender systems, w...
research
07/23/2022

Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition

The teacher-free online Knowledge Distillation (KD) aims to train an ens...
research
10/30/2022

Dataset Distillation via Factorization

In this paper, we study dataset distillation (DD), from a novel per...
research
06/28/2022

Cooperative Retriever and Ranker in Deep Recommenders

Deep recommender systems jointly leverage the retrieval and ranking oper...
research
05/25/2023

UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation

Prior study has shown that pretrained language models (PLM) can boost th...
