The Externalities of Exploration and How Data Diversity Helps Exploitation

06/01/2018
by Manish Raghavan, et al.

Online learning algorithms, widely used to power search and content optimization on the web, must balance exploration and exploitation, potentially sacrificing the experience of current users for information that will lead to better decisions in the future. Recently, concerns have been raised about whether the process of exploration could be viewed as unfair, placing too much burden on certain individuals or groups. Motivated by these concerns, we initiate the study of the externalities of exploration - the undesirable side effects that the presence of one party may impose on another - under the linear contextual bandits model. We introduce the notion of a group externality, measuring the extent to which the presence of one population of users impacts the rewards of another. We show that this impact can in some cases be negative, and that, in a certain sense, no algorithm can avoid it. We then study externalities at the individual level, interpreting the act of exploration as an externality imposed on the current user of a system by future users. This drives us to ask under what conditions inherent diversity in the data makes explicit exploration unnecessary. We build on a recent line of work on the smoothed analysis of the greedy algorithm that always chooses the action that currently looks optimal, improving on prior results to show that a greedy approach almost matches the best possible Bayesian regret rate of any other algorithm on the same problem instance whenever the diversity conditions hold, and that this regret is at most Õ(T^1/3). Returning to group-level effects, we show that under the same conditions, negative group externalities essentially vanish under the greedy algorithm. Together, our results uncover a sharp contrast between the high externalities that exist in the worst case, and the ability to remove all externalities if the data is sufficiently diverse.
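
The central positive result concerns the greedy algorithm for linear contextual bandits: maintain a least-squares estimate of each arm's reward vector and always pull the arm whose estimated reward is highest, with no exploration bonus. The sketch below is a minimal illustration of that idea, not the authors' code; the perturbed contexts, parameter values, and per-arm ridge-regression updates are assumptions chosen to loosely mirror the diversity (smoothed-analysis) setting the abstract describes.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the greedy
# linear contextual bandit: each arm keeps a ridge-regression estimate of its
# reward vector, and the algorithm always pulls the arm with the highest
# estimated reward, with no exploration bonus. Context diversity is simulated
# by adding Gaussian jitter to a shared base context.
import numpy as np

rng = np.random.default_rng(0)
T, K, d = 2000, 5, 3                  # rounds, arms, context dimension (illustrative)
sigma_context, sigma_noise = 0.3, 0.1
theta = rng.normal(size=(K, d))        # unknown per-arm reward vectors

# Per-arm ridge statistics: A_k = lambda*I + sum x x^T, b_k = sum r x
lam = 1.0
A = np.stack([lam * np.eye(d) for _ in range(K)])
b = np.zeros((K, d))

regret = 0.0
for t in range(T):
    # Diverse contexts: a shared base context plus per-arm Gaussian perturbation.
    base = rng.normal(size=d)
    contexts = base + sigma_context * rng.normal(size=(K, d))

    # Greedy choice: arm with the highest currently estimated mean reward.
    theta_hat = np.stack([np.linalg.solve(A[k], b[k]) for k in range(K)])
    est = np.einsum("kd,kd->k", contexts, theta_hat)
    k = int(np.argmax(est))

    # Observe a noisy linear reward and update only the pulled arm's statistics.
    true_means = np.einsum("kd,kd->k", contexts, theta)
    r = true_means[k] + sigma_noise * rng.normal()
    A[k] += np.outer(contexts[k], contexts[k])
    b[k] += r * contexts[k]
    regret += true_means.max() - true_means[k]

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```

The design point the sketch is meant to convey: when contexts are sufficiently perturbed, each pulled arm's data keeps gaining well-spread directions, so the least-squares estimates improve without any forced exploration; this is the intuition the smoothed-analysis results formalize.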

Related research

05/19/2020
Greedy Algorithm almost Dominates in Smoothed Contextual Bandits
Online learning algorithms, widely used to power search and content opti...

02/26/2020
Structured Linear Contextual Bandits: A Sharp and Geometric Smoothed Analysis
Bandit learning algorithms typically involve the balance of exploration ...

02/09/2014
Context-aware mobile recommendation of evolving content: Contextuel-E-Greedy
We introduce in this paper an algorithm named Contextuel-E-Greedy that t...

03/28/2017
Diversity of preferences can increase collective welfare in sequential exploration problems
In search engines, online marketplaces and other human-computer interfac...

01/10/2018
A Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem
Bandit learning is characterized by the tension between long-term explor...

09/07/2019
AutoML for Contextual Bandits
Contextual Bandits is one of the widely popular techniques used in appli...

05/25/2022
Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret
We propose a new learning framework that captures the tiered structure o...
