Achieving Fairness in the Stochastic Multi-armed Bandit Problem

07/23/2019
by Vishakha Patil, et al.

We study a variant of the stochastic multi-armed bandit problem, called the Fair-SMAB problem, in which each arm must be pulled for at least a given fraction of the total available rounds. We investigate the interplay between learning and fairness in terms of a pre-specified vector denoting the fractions of guaranteed pulls. We define a fairness-aware regret, called r-Regret, that takes these fairness constraints into account and naturally extends the conventional notion of regret. Our primary contribution is to characterize a class of Fair-SMAB algorithms by two parameters: the unfairness tolerance and the learning algorithm used as a black box. We provide a fairness guarantee for this class that holds uniformly over time, irrespective of the choice of the learning algorithm. In particular, when the learning algorithm is UCB1, we show that our algorithm achieves O(√T) r-Regret. Finally, we evaluate the cost of fairness in terms of the conventional notion of regret.
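The abstract describes a meta-algorithm that enforces per-arm pull quotas while delegating the remaining rounds to a black-box learner such as UCB1. The following is a minimal sketch of that idea, not the authors' actual algorithm: the function name `fair_ucb`, the deficit-based priority rule, and the way the tolerance parameter enters are all illustrative assumptions.

```python
import math
import random

def fair_ucb(means, r, T, alpha=0.0):
    """Hypothetical sketch of a fairness-constrained bandit wrapper.

    means: true Bernoulli arm means (used only to simulate rewards)
    r:     guaranteed pull fractions per arm, with sum(r) < 1
    T:     horizon (total number of rounds)
    alpha: unfairness tolerance (0 = strictest); illustrative use only
    Returns the pull counts of each arm after T rounds.
    """
    k = len(means)
    pulls = [0] * k
    sums = [0.0] * k
    for t in range(1, T + 1):
        # Arms whose pull count has fallen behind their quota
        # (beyond the tolerance alpha) get priority.
        behind = [i for i in range(k) if pulls[i] < r[i] * t - alpha]
        unplayed = [i for i in range(k) if pulls[i] == 0]
        if behind:
            # Pull the arm with the largest quota deficit.
            i = max(behind, key=lambda j: r[j] * t - pulls[j])
        elif unplayed:
            # Ensure every arm is sampled once before using UCB indices.
            i = unplayed[0]
        else:
            # Otherwise defer to the black-box learner (here, UCB1).
            i = max(range(k), key=lambda j: sums[j] / pulls[j]
                    + math.sqrt(2 * math.log(t) / pulls[j]))
        reward = 1.0 if random.random() < means[i] else 0.0
        pulls[i] += 1
        sums[i] += reward
    return pulls
```

Because an arm behind its quota preempts the learner, each arm's pull count can lag its guarantee by at most a small additive amount at any round, which mirrors the anytime fairness guarantee claimed in the abstract; the leftover rounds are where the learning algorithm drives down regret.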
