Improving Fairness in Adaptive Social Exergames via Shapley Bandits

02/18/2023
by Robert C. Gray, et al.

Algorithmic fairness is an essential requirement as AI becomes integrated into society. In social applications where an AI distributes resources, the algorithm must often make decisions that benefit a subset of users, sometimes repeatedly or exclusively, while attempting to maximize specific outcomes. How should we design such systems to serve users more fairly? This paper explores that question in a setting where a group of users works toward a shared goal in a social exergame called Step Heroes. We identify adverse outcomes in traditional multi-armed bandits (MABs) and formalize the Greedy Bandit Problem. We then propose a solution based on a new type of fairness-aware multi-armed bandit, the Shapley Bandit. It uses the Shapley Value to increase overall player participation and intervention adherence, rather than maximizing total group output, which is traditionally achieved by favoring only high-performing participants. We evaluate our approach via a user study (n=46). Our results indicate that Shapley Bandits effectively mitigate the Greedy Bandit Problem and achieve better user retention and motivation across participants.
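
The paper's algorithmic details are not reproduced on this page, but the core quantity is easy to illustrate: the Shapley Value credits each player with their average marginal contribution to the group payoff, taken over all orders in which players could join the coalition. The Python sketch below is a toy illustration under assumed inputs, not the authors' implementation; the step counts, the shared-goal bonus, and the helper names (shapley_values, team_value) are hypothetical.

```python
import itertools
import math

def shapley_values(players, coalition_value):
    """Exact Shapley values via enumeration of all player orderings.

    coalition_value maps a frozenset of players to the group payoff.
    Enumeration costs O(n!), so this is only practical for the small
    teams typical of a social exergame.
    """
    values = {p: 0.0 for p in players}
    for order in itertools.permutations(players):
        coalition = set()
        for p in order:
            before = coalition_value(frozenset(coalition))
            coalition.add(p)
            values[p] += coalition_value(frozenset(coalition)) - before
    n_orderings = math.factorial(len(players))
    return {p: v / n_orderings for p, v in values.items()}

# Hypothetical team pursuing a shared step goal: the coalition payoff
# is the members' total steps plus a bonus once the goal is reached.
steps = {"A": 9000, "B": 4000, "C": 2000}
GOAL, BONUS = 12000, 5000

def team_value(coalition):
    total = sum(steps[p] for p in coalition)
    return total + (BONUS if total >= GOAL else 0)

phi = shapley_values(list(steps), team_value)
print(phi)  # {'A': 11500.0, 'B': 6500.0, 'C': 2000.0}
```

With a purely additive payoff the Shapley Value reduces to each player's own contribution; the shared-goal bonus is what creates the coalition effects of interest. A fairness-aware policy could use such credit to temper a greedy bandit that would otherwise concentrate interventions on the highest performers, which is the failure mode the paper formalizes as the Greedy Bandit Problem.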


Related research

12/13/2019  Fair Contextual Multi-Armed Bandits: Theory and Experiments
When an AI system interacts with multiple users, it frequently needs to ...

03/11/2018  Incentives in the Dark: Multi-armed Bandits for Evolving Users with Unknown Type
Design of incentives or recommendations to users is becoming more common...

08/17/2023  Equitable Restless Multi-Armed Bandits: A General Framework Inspired By Digital Health
Restless multi-armed bandits (RMABs) are a popular framework for algorit...

05/10/2021  Sense-Bandits: AI-based Adaptation of Sensing Thresholds for Heterogeneous-technology Coexistence Over Unlicensed Bands
In this paper, we present Sense-Bandits, an AI-based framework for distr...

06/04/2013  A Gang of Bandits
Multi-armed bandit problems are receiving a great deal of attention beca...

02/10/2021  Player Modeling via Multi-Armed Bandits
This paper focuses on building personalized player models solely from pl...

12/08/2020  A Multi-Armed Bandit-based Approach to Mobile Network Provider Selection
We argue for giving users the ability to lease bandwidth temporarily fro...
