PAC-Battling Bandits with Plackett-Luce: Tradeoff between Sample Complexity and Subset Size

08/12/2018
by Aditya Gopalan, et al.

We introduce the probably approximately correct (PAC) version of the problem of Battling Bandits with the Plackett-Luce (PL) model -- an online learning framework in which, in each trial, the learner chooses a subset of k < n arms from a fixed pool of n arms and subsequently observes stochastic feedback indicating preference information over the items in the chosen subset, e.g., the most preferred item, or a ranking of the m most preferred items. The objective is to recover an `approximate-best' item of the underlying PL model with high probability. This framework is motivated by practical settings such as recommendation systems and information retrieval, where it is easier and more efficient to collect relative feedback for multiple arms at once. Our framework can be seen as a generalization of the well-studied PAC-Dueling-Bandit problem over a set of n arms. We propose two different feedback models: just the winner information (WI), and a ranking of the top-m items (TR), for any 2 ≤ m ≤ k. We show that with winner information (WI) alone, one cannot recover the `approximate-best' item with sample complexity smaller than Ω((n/ϵ^2) ln(1/δ)), which is independent of k and is the same as that required in the standard dueling-bandit setting (k = 2). However, with top-m ranking (TR) feedback, our lower-bound analysis proves an improved sample complexity of Ω((n/(m ϵ^2)) ln(1/δ)), a relative improvement by a factor of 1/m over WI feedback, justifying the additional information gained from knowing the ranking of the top m items. We also provide algorithms for each of the above feedback models; our theoretical analyses prove that their sample complexities are optimal, matching the derived lower bounds up to logarithmic factors.
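The two feedback models can be made concrete with a small simulation. The following is a minimal sketch (not the authors' code), assuming each arm i carries a positive Plackett-Luce score theta_i; the function names pl_winner_feedback and pl_top_m_feedback, and the example scores and subset, are illustrative assumptions rather than anything specified in the paper.

import numpy as np

rng = np.random.default_rng(0)

def pl_winner_feedback(theta, subset):
    # Winner-information (WI) feedback: the winner of the played subset is drawn
    # with probability proportional to its PL score within that subset.
    scores = theta[subset]
    return int(subset[rng.choice(len(subset), p=scores / scores.sum())])

def pl_top_m_feedback(theta, subset, m):
    # Top-m ranking (TR) feedback: draw winners sequentially without replacement,
    # which is exactly how the PL model generates a ranking of the subset.
    remaining = list(subset)
    ranking = []
    for _ in range(m):
        scores = theta[remaining]
        idx = rng.choice(len(remaining), p=scores / scores.sum())
        ranking.append(int(remaining.pop(idx)))
    return ranking

# Hypothetical example: n = 6 arms with PL scores theta, play a subset of k = 4,
# and observe WI feedback and top-2 (TR) feedback for that subset.
theta = np.array([0.5, 1.0, 2.0, 0.8, 1.5, 0.3])
subset = np.array([0, 2, 4, 5])
print(pl_winner_feedback(theta, subset))       # e.g., 2 (the highest-score arm wins most often)
print(pl_top_m_feedback(theta, subset, m=2))   # e.g., [2, 4]

Repeatedly querying such feedback and counting pairwise or top-m outcomes is the kind of subset-wise sampling whose cost the paper's Ω((n/ϵ^2) ln(1/δ)) and Ω((n/(m ϵ^2)) ln(1/δ)) lower bounds characterize.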

