Hierarchical Conversational Preference Elicitation with Bandit Feedback

09/06/2022
by Jinhang Zuo, et al.

The recent advances of conversational recommendation provide a promising way to efficiently elicit users' preferences via conversational interactions. To achieve this, the recommender system conducts conversations with users, asking about their preferences for different items or item categories. Most existing conversational recommender systems for cold-start users utilize a multi-armed bandit framework to learn users' preferences in an online manner. However, they rely on a pre-defined conversation frequency for asking about item categories instead of individual items, which may incur excessive conversational interactions that hurt the user experience. To enable more flexible questioning about key-terms, we formulate a new conversational bandit problem that allows the recommender system to choose either a key-term to ask about or an item to recommend at each round, and explicitly models the rewards of these actions. This motivates us to handle a new exploration-exploitation (EE) trade-off between key-term asking and item recommendation, which requires us to accurately model the relationship between key-term and item rewards. We conduct a survey and analyze a real-world dataset, and find that, unlike the assumptions made in prior works, key-term rewards are mainly affected by the rewards of representative items. We propose two bandit algorithms, Hier-UCB and Hier-LinUCB, that leverage this observed relationship and the hierarchical structure between key-terms and items to efficiently learn which items to recommend. We theoretically prove that our algorithm reduces the regret bound's dependency on the total number of items compared to previous work. We validate our proposed algorithms and regret bound on both synthetic and real-world data.


Related research

08/21/2022
Comparison-based Conversational Recommender System with Relative Bandit Feedback
With the recent advances of conversational recommendations, the recommen...

03/13/2023
Beyond Single Items: Exploring User Preferences in Item Sets with the Conversational Playlist Curation Dataset
Users in consumption domains, like music, are often able to more efficie...

03/01/2023
Efficient Explorative Key-term Selection Strategies for Conversational Contextual Bandits
Conversational contextual bandits elicit user preferences by occasionall...

05/23/2020
Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users
Static recommendation methods like collaborative filtering suffer from t...

04/30/2021
Dynamic Slate Recommendation with Gated Recurrent Units and Thompson Sampling
We consider the problem of recommending relevant content to users of an ...

08/31/2022
Rethinking Conversational Recommendations: Is Decision Tree All You Need?
Conversational recommender systems (CRS) dynamically obtain the user pre...

04/14/2021
When and Whom to Collaborate with in a Changing Environment: A Collaborative Dynamic Bandit Solution
Collaborative bandit learning, i.e., bandit algorithms that utilize coll...
