When and Whom to Collaborate with in a Changing Environment: A Collaborative Dynamic Bandit Solution

04/14/2021
by Chuanhao Li, et al.

Collaborative bandit learning, i.e., bandit algorithms that utilize collaborative filtering techniques to improve sample efficiency in online interactive recommendation, has attracted much research attention as it enjoys the best of both worlds. However, all existing collaborative bandit learning solutions impose a stationarity assumption on the environment, i.e., both user preferences and the dependency among users are assumed to be static over time. Unfortunately, this assumption rarely holds in practice: users' interests and dependence relations keep changing, which inevitably leads to sub-optimal recommendation performance. In this work, we develop a collaborative dynamic bandit solution to handle a changing environment for recommendation. We explicitly model the underlying changes in both user preferences and their dependency relations as a stochastic process. Each individual user's preference is modeled by a mixture of globally shared contextual bandit models with a Dirichlet Process prior, so that collaboration among users is achieved via Bayesian inference over the global bandit models. Model selection and arm selection for each user are performed via Thompson sampling to balance exploitation and exploration. Our solution is proved to maintain the standard Õ(√T) sublinear regret even in such a challenging environment, and extensive empirical evaluations on both synthetic and real-world datasets further confirm the necessity of modeling a changing environment, as well as our algorithm's practical advantages over several state-of-the-art online learning solutions.
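To make the abstract's construction concrete, the sketch below shows one plausible reading of its two main ingredients: a pool of globally shared Bayesian linear bandit models, and a Chinese-restaurant-process (the standard sampling view of a Dirichlet Process prior) assignment of each interaction to one of those models, with Thompson sampling used for both the model choice and the arm choice. This is a minimal illustration, not the paper's exact algorithm; the class names (`SharedLinearModel`, `DPCollabTS`), the concentration parameter `alpha_dp`, and the Gaussian reward/ridge-prior assumptions are all ours.

```python
import numpy as np

class SharedLinearModel:
    """A globally shared Bayesian linear bandit model (ridge prior).
    Illustrative assumption: Gaussian rewards r = x·theta + noise."""
    def __init__(self, d, lam=1.0, noise=0.1):
        self.A = lam * np.eye(d)   # posterior precision matrix
        self.b = np.zeros(d)       # reward-weighted feature sums
        self.noise = noise
        self.n = 0                 # observations assigned to this model

    def sample_theta(self, rng):
        # Thompson sample from the Gaussian posterior over theta.
        mean = np.linalg.solve(self.A, self.b)
        cov = self.noise ** 2 * np.linalg.inv(self.A)
        return rng.multivariate_normal(mean, cov)

    def update(self, x, r):
        self.A += np.outer(x, x)
        self.b += r * x
        self.n += 1

class DPCollabTS:
    """Hypothetical sketch: CRP assignment of interactions to shared
    models, then Thompson sampling within the chosen model."""
    def __init__(self, d, alpha_dp=1.0, seed=0):
        self.d, self.alpha = d, alpha_dp
        self.models = []           # shared "tables" grow as needed
        self.rng = np.random.default_rng(seed)

    def _sample_model(self):
        # CRP: pick an existing model with prob. proportional to its
        # usage count, or open a new one with prob. prop. to alpha.
        counts = np.array([m.n for m in self.models], dtype=float)
        probs = np.append(counts, self.alpha)
        probs /= probs.sum()
        k = self.rng.choice(len(probs), p=probs)
        if k == len(self.models):
            self.models.append(SharedLinearModel(self.d))
        return k

    def choose_arm(self, arms):
        """arms: (K, d) context matrix; returns (arm index, model index)."""
        k = self._sample_model()
        theta = self.models[k].sample_theta(self.rng)
        return int(np.argmax(arms @ theta)), k

    def update(self, k, x, r):
        self.models[k].update(x, r)

# Toy usage: one user, 5 arms in R^4, stationary synthetic rewards.
rng = np.random.default_rng(1)
agent = DPCollabTS(d=4)
true_theta = rng.normal(size=4)
for t in range(200):
    arms = rng.normal(size=(5, 4))
    a, k = agent.choose_arm(arms)
    reward = arms[a] @ true_theta + 0.1 * rng.normal()
    agent.update(k, arms[a], reward)
```

Because the model index is re-sampled at every interaction, a user can move between shared models over time, which is one simple way to mirror the non-stationary preferences and shifting user dependencies the abstract describes; the paper's actual inference over the stochastic change process is more involved.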


Related research

05/23/2018 · Learning Contextual Bandits in a Non-stationary Environment
Multi-armed bandit algorithms have become a reference solution for handl...

01/29/2021 · Learning User Preferences in Non-Stationary Environments
Recommendation systems often use online collaborative filtering (CF) alg...

02/29/2020 · Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests
A contextual bandit problem is studied in a highly non-stationary enviro...

08/25/2022 · Dynamic collaborative filtering Thompson Sampling for cross-domain advertisements recommendation
Recently online advertisers utilize Recommender systems (RSs) for displa...

09/06/2022 · Hierarchical Conversational Preference Elicitation with Bandit Feedback
The recent advances of conversational recommendations provide a promisin...

08/06/2016 · On Context-Dependent Clustering of Bandits
We investigate a novel cluster-of-bandit algorithm CAB for collaborative...

11/07/2017 · Deep density networks and uncertainty in recommender systems
Building robust online content recommendation systems requires learning ...
