Cooperative Retriever and Ranker in Deep Recommenders

06/28/2022
by   Xu Huang, et al.

Deep recommender systems (DRS) jointly leverage retrieval and ranking operations to generate recommendations. The retriever selects a small set of relevant candidates from the entire item corpus with high efficiency, while the ranker, usually more precise but more time-consuming, identifies the best items among the retrieved candidates. However, the retriever and ranker are typically trained with little cooperation, which limits recommendation performance when they work as a whole. In this work, we propose CoRR (short for Cooperative Retriever and Ranker), a novel DRS training framework in which the retriever and ranker mutually reinforce each other. On one hand, the retriever is learned from both the recommendation data and the ranker via knowledge distillation; since the ranker is more precise, distillation provides extra weak-supervision signals that improve retrieval quality. On the other hand, the ranker is trained to discriminate the true positive items from hard negative candidates sampled from the retriever. As training iterates, the ranker becomes more precise, which in turn yields more informative training signals for the retriever; meanwhile, as the retriever improves, harder negative candidates can be sampled, which raises the ranker's discriminative capability. To make CoRR practical, an asymptotically unbiased approximation of the KL divergence is introduced for knowledge distillation over sampled items; in addition, a scalable and adaptive strategy is developed to sample from the retriever efficiently. Comprehensive experiments on four large-scale benchmark datasets show that CoRR improves overall recommendation quality through the cooperation between the retriever and the ranker.
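The abstract's training loop can be sketched in a few lines: the retriever's score distribution is used to draw hard negatives for the ranker, and the ranker's distribution over the same sampled candidates serves as the distillation target for the retriever. The sketch below is illustrative only, using toy score vectors in place of learned models; the paper's actual estimator applies an asymptotically unbiased correction to the sampled KL divergence that is not reproduced here, and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# Toy scores over 1000 items: a fast retriever and a more precise ranker
# (here the ranker is simulated as a noisy refinement of the retriever).
n_items = 1000
retriever_scores = rng.normal(size=n_items)
ranker_scores = retriever_scores + 0.5 * rng.normal(size=n_items)

# Step 1: sample hard negative candidates from the retriever's distribution.
p_ret = softmax(retriever_scores)
candidates = rng.choice(n_items, size=50, replace=False, p=p_ret)

# Step 2: ranker objective -- discriminate the positive item from the
# sampled hard negatives (softmax cross-entropy; position 0 plays the
# role of the ground-truth item for illustration).
ranker_logits = ranker_scores[candidates]
ranker_loss = -np.log(softmax(ranker_logits)[0])

# Step 3: distill the ranker into the retriever via KL divergence
# computed over the sampled candidates only (naive, uncorrected version).
q = softmax(ranker_scores[candidates])      # teacher distribution
p = softmax(retriever_scores[candidates])   # student distribution
kl = float(np.sum(q * (np.log(q) - np.log(p))))
```

In a real implementation both score vectors would come from differentiable models, `ranker_loss` and `kl` would be backpropagated into the ranker and retriever respectively, and steps 1-3 would alternate over training iterations so that each model's improvement sharpens the other's training signal.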

READ FULL TEXT


research · 10/13/2021 · False Negative Distillation and Contrastive Learning for Personalized Outfit Recommendation
Personalized outfit recommendation has recently been in the spotlight wi...

research · 08/15/2023 · Learning from All Sides: Diversified Positive Augmentation via Self-distillation in Recommendation
Personalized recommendation relies on user historical behaviors to provi...

research · 08/07/2022 · Generating Negative Samples for Sequential Recommendation
To make Sequential Recommendation (SR) successful, recent works focus on...

research · 02/20/2022 · Cross-Task Knowledge Distillation in Multi-Task Recommendation
Multi-task learning (MTL) has been widely used in recommender systems, w...

research · 04/28/2023 · Ensemble Modeling with Contrastive Knowledge Distillation for Sequential Recommendation
Sequential recommendation aims to capture users' dynamic interest and pr...

research · 06/16/2022 · Towards Robust Ranker for Text Retrieval
A ranker plays an indispensable role in the de facto 'retrieval rera...

research · 04/01/2022 · Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings
Vector quantization (VQ) based ANN indexes, such as Inverted File System...
