Bandits Under The Influence (Extended Version)

09/21/2020
by Silviu Maniu, et al.

Recommender systems should adapt as user interests evolve. A prevalent cause of this evolution is the influence of a user's social circle. When interests are not known in advance, online algorithms that explore the recommendation space while also exploiting observed preferences are preferable. We present online recommendation algorithms rooted in the linear multi-armed bandit literature, tailored precisely to scenarios where user interests evolve under social influence. In particular, we show that our adaptations of the classic LinREL and Thompson Sampling algorithms maintain the same asymptotic regret bounds as in the non-social case. We validate our approach experimentally using both synthetic and real datasets.
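The paper builds on linear bandit algorithms such as LinREL and Thompson Sampling. As background only, here is a minimal sketch of generic linear Thompson Sampling (maintaining a Gaussian posterior over an unknown preference vector and sampling from it to pick arms). This is not the paper's social-influence adaptation; all dimensions, names, and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_arms, horizon = 5, 10, 2000          # feature dim, arms, rounds (illustrative)
arms = rng.normal(size=(n_arms, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)   # unit-norm arm features
theta_true = rng.normal(size=d)
theta_true /= np.linalg.norm(theta_true)              # hidden preference vector

# Posterior over theta is N(mu, B^{-1}); start from the prior N(0, I).
B = np.eye(d)          # precision matrix, accumulates x x^T
f = np.zeros(d)        # accumulates reward-weighted features
regret = 0.0

mean_rewards = arms @ theta_true
for t in range(horizon):
    mu = np.linalg.solve(B, f)                          # posterior mean
    theta_sample = rng.multivariate_normal(mu, np.linalg.inv(B))
    a = int(np.argmax(arms @ theta_sample))             # act greedily on the sample
    x = arms[a]
    reward = x @ theta_true + 0.1 * rng.normal()        # noisy linear reward
    B += np.outer(x, x)                                 # Bayesian linear update
    f += reward * x
    regret += mean_rewards.max() - mean_rewards[a]      # cumulative pseudo-regret

print(f"cumulative regret over {horizon} rounds: {regret:.1f}")
```

The posterior concentrates as arm features accumulate in `B`, so the per-round regret shrinks and cumulative regret grows sublinearly; the paper's contribution is showing that analogous bounds survive when user interests drift under social influence.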


Related research

- 09/11/2021, "Existence conditions for hidden feedback loops in online recommender systems": We explore a hidden feedback loops effect in online recommender systems...
- 07/01/2021, "The Use of Bandit Algorithms in Intelligent Interactive Recommender Systems": In today's business marketplace, many high-tech Internet enterprises con...
- 08/16/2019, "Accelerated learning from recommender systems using multi-armed bandit": Recommendation systems are a vital component of many online marketplaces...
- 07/31/2018, "Graph-Based Recommendation System": In this work, we study recommendation systems modelled as contextual mul...
- 03/23/2018, "Learning Recommendations While Influencing Interests": Personalized recommendation systems (RS) are extensively used in many se...
- 06/04/2013, "A Gang of Bandits": Multi-armed bandit problems are receiving a great deal of attention beca...
- 07/27/2018, "Task Recommendation in Crowdsourcing Based on Learning Preferences and Reliabilities": Workers participating in a crowdsourcing platform can have a wide range ...