Personalized Prompts for Sequential Recommendation

05/19/2022
by Yiqing Wu, et al.

Pre-training models have shown their power in sequential recommendation. Recently, prompt tuning has been widely explored and validated in NLP pre-training; it helps extract useful knowledge from pre-trained models for downstream tasks more effectively and efficiently, especially in cold-start scenarios. However, bringing prompt tuning from NLP to recommendation is challenging, since the tokens in recommendation (i.e., items) do not have explicit, explainable semantics, and the sequence modeling should be personalized. In this work, we introduce prompts to recommendation and propose a novel Personalized prompt-based Recommendation (PPR) framework for cold-start recommendation. Specifically, we build a personalized soft prefix prompt via a prompt generator based on user profiles, and we enable sufficient training of prompts via prompt-oriented contrastive learning with both prompt- and behavior-based augmentations. We conduct extensive evaluations on various tasks. In both few-shot and zero-shot recommendation, PPR models achieve significant improvements over baselines on various metrics across three large-scale open datasets. We also conduct ablation studies and a sparsity analysis for a better understanding of PPR. Moreover, we verify PPR's universality across different pre-training models and explore other promising downstream tasks, including cross-domain recommendation and user profile prediction.
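The personalized soft prefix prompt described above can be pictured as a small network that maps a user's profile attributes to a few prompt vectors, which are prepended to the behavior-sequence embeddings before they enter the (largely frozen) pre-trained recommender. Below is a minimal PyTorch sketch of that idea; the names and shapes (`PersonalizedPromptGenerator`, `d_model`, `prompt_len`, the mean-pooled profile embedding) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a personalized soft prefix prompt
# for a sequential recommender, assuming discrete user-profile attributes and
# a frozen pre-trained item-sequence encoder with hidden size d_model.
import torch
import torch.nn as nn


class PersonalizedPromptGenerator(nn.Module):
    """Maps user-profile attribute ids to `prompt_len` soft prefix vectors."""

    def __init__(self, num_profile_values: int, d_model: int, prompt_len: int):
        super().__init__()
        self.prompt_len = prompt_len
        self.d_model = d_model
        self.profile_emb = nn.Embedding(num_profile_values, d_model)
        # Small MLP that turns the pooled profile embedding into the prefix.
        self.generator = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, prompt_len * d_model),
        )

    def forward(self, profile_ids: torch.Tensor) -> torch.Tensor:
        # profile_ids: (batch, num_profile_fields), e.g. [age_bucket, gender, ...]
        pooled = self.profile_emb(profile_ids).mean(dim=1)        # (batch, d_model)
        prefix = self.generator(pooled)                           # (batch, prompt_len * d_model)
        return prefix.view(-1, self.prompt_len, self.d_model)     # (batch, prompt_len, d_model)


def prepend_prompt(prompt: torch.Tensor, item_emb: torch.Tensor) -> torch.Tensor:
    """Concatenate the soft prefix in front of the behavior-sequence embeddings."""
    # prompt: (batch, prompt_len, d_model); item_emb: (batch, seq_len, d_model)
    return torch.cat([prompt, item_emb], dim=1)


# Illustrative usage with made-up sizes.
gen = PersonalizedPromptGenerator(num_profile_values=100, d_model=64, prompt_len=4)
profile_ids = torch.randint(0, 100, (8, 3))              # 8 users, 3 profile fields each
item_emb = torch.randn(8, 20, 64)                        # 8 users, 20-item behavior sequences
augmented = prepend_prompt(gen(profile_ids), item_emb)   # (8, 24, 64), fed to the frozen encoder
```

In the training scheme the abstract describes, two augmented views of the same user (prompt-based and behavior-based augmentations) would be encoded this way and aligned by a contrastive loss while the pre-trained model stays largely fixed; the sketch covers only the prompt-generation and prefixing step.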


08/21/2023

Contrastive Graph Prompt-tuning for Cross-domain Recommendation

Recommender systems are frequently challenged by the data sparsity probl...
07/18/2022

Towards a General Pre-training Framework for Adaptive Learning in MOOCs

Adaptive learning aims to stimulate and meet the needs of individual lea...
08/22/2022

KEEP: An Industrial Pre-Training Framework for Online Recommendation via Knowledge Extraction and Plugging

An industrial recommender system generally presents a hybrid list that c...
06/08/2023

COURIER: Contrastive User Intention Reconstruction for Large-Scale Pre-Train of Image Features

With the development of the multi-media internet, visual characteristics...
08/28/2023

RecMind: Large Language Model Powered Agent For Recommendation

Recent advancements in instructing Large Language Models (LLMs) to utili...
05/06/2023

Attacking Pre-trained Recommendation

Recently, a series of pioneer studies have shown the potency of pre-trai...
08/22/2023

ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation

With large language models (LLMs) achieving remarkable breakthroughs in ...
