Fiduciary Bandits

05/16/2019
by Gal Bahar, et al.

Recommendation systems often face exploration-exploitation tradeoffs: the system can only learn about the desirability of new options by recommending them to some user. Such systems can thus be modeled as multi-armed bandit settings; however, users are self-interested and cannot be made to follow recommendations. We ask whether exploration can nevertheless be performed in a way that scrupulously respects agents' interests---i.e., by a system that acts as a fiduciary. More formally, we introduce a model in which a recommendation system faces an exploration-exploitation tradeoff under the constraint that it can never recommend any action that it knows yields lower reward in expectation than an agent would achieve if it acted alone. Our main contribution is a positive result: an asymptotically optimal, incentive compatible, and ex-ante individually rational recommendation algorithm.
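As a rough illustration (not the paper's mechanism), the sketch below encodes one simple reading of the fiduciary constraint: the planner may only recommend an arm whose expected reward, given everything the planner has observed, is at least the expected reward of the arm the agent would pick acting alone. The two-arm instance, the two-point priors, and the helper names (prior_mean, planner_mean, recommend) are hypothetical; in particular, the paper's asymptotically optimal mechanism relies on randomized, incentive-compatible recommendations and an ex-ante notion of individual rationality, not the deterministic greedy rule shown here.

```python
import random

# Hypothetical two-arm instance: each arm's value is drawn once from a known
# two-point prior and then fixed; agents know only the priors.
PRIORS = {
    "a": [(0.9, 0.5), (0.3, 0.5)],  # prior mean 0.6 -- the a priori best arm
    "b": [(0.8, 0.5), (0.2, 0.5)],  # prior mean 0.5
}

def prior_mean(arm):
    return sum(v * p for v, p in PRIORS[arm])

def planner_mean(arm, observed):
    """Expected value of `arm` given what the planner has observed so far."""
    return observed[arm] if arm in observed else prior_mean(arm)

def recommend(observed):
    """Recommend the best arm among those the fiduciary constraint permits."""
    default_arm = max(PRIORS, key=prior_mean)        # the agent's stand-alone choice
    benchmark = planner_mean(default_arm, observed)  # its value under planner info
    feasible = [a for a in PRIORS
                if planner_mean(a, observed) >= benchmark]
    return max(feasible, key=lambda a: planner_mean(a, observed))

# One simulated run: nature draws the arm values, agents arrive one by one,
# and each follows the recommendation it receives.
true_value = {a: random.choices([v for v, _ in PRIORS[a]],
                                weights=[p for _, p in PRIORS[a]])[0]
              for a in PRIORS}
observed = {}
for t in range(5):
    arm = recommend(observed)
    observed[arm] = true_value[arm]  # planner learns the recommended arm's value
    print(f"agent {t}: recommended {arm}, reward {true_value[arm]:.1f}")
```

In this toy run, arm "b" is explored only in states where arm "a" turns out to be worse than "b"'s prior mean, so no agent is ever knowingly steered below its stand-alone value; achieving asymptotic optimality beyond this requires the randomized machinery developed in the paper.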


research
05/26/2022

Exploration, Exploitation, and Engagement in Multi-Armed Bandits with Abandonment

Multi-armed bandit (MAB) is a classic model for understanding the explor...
research
02/19/2019

Bayesian Exploration with Heterogeneous Agents

It is common in recommendation systems that users both consume and produ...
research
07/04/2018

Recommendation Systems and Self Motivated Users

Modern recommendation systems rely on the wisdom of the crowd to learn t...
research
03/01/2017

Human Interaction with Recommendation Systems: On Bias and Exploration

Recommendation systems rely on historical user data to provide suggestio...
research
09/26/2021

Deep Exploration for Recommendation Systems

We investigate the design of recommendation systems that can efficiently...
research
06/03/2023

Incentivizing Exploration with Linear Contexts and Combinatorial Actions

We advance the study of incentivized bandit exploration, in which arm ch...
research
12/02/2021

Recommending with Recommendations

Recommendation systems are a key modern application of machine learning,...
