Spoiled for Choice? Personalized Recommendation for Healthcare Decisions: A Multi-Armed Bandit Approach

09/13/2020
by Tongxin Zhou, et al.

Online healthcare communities provide users with various healthcare interventions to promote healthy behavior and improve adherence. When faced with too many intervention choices, however, individuals may find it difficult to decide which option to take, especially when they lack the experience or knowledge to evaluate different options. This choice-overload issue may negatively affect users' engagement in health management. In this study, we take a design-science perspective to propose a recommendation framework that helps users select healthcare interventions. Taking into account that users' health behaviors can be highly dynamic and diverse, we propose a multi-armed bandit (MAB)-driven recommendation framework, which enables us to adaptively learn users' preference variations while simultaneously promoting recommendation diversity. To better adapt an MAB to the healthcare context, we synthesize two innovative model components based on prominent health theories. The first component is a deep-learning-based feature engineering procedure, which is designed to learn crucial recommendation contexts regarding users' sequential health histories, health-management experiences, preferences, and intrinsic attributes of healthcare interventions. The second component is a diversity constraint, which structurally diversifies recommendations across different dimensions to provide users with well-rounded support. We apply our approach to an online weight management context and evaluate it rigorously through a series of experiments. Our results demonstrate that each of the design components is effective and that our recommendation design outperforms a wide range of state-of-the-art recommendation systems. Our study contributes to research on the application of business intelligence and has implications for multiple stakeholders, including online healthcare platforms, policymakers, and users.
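The abstract does not specify the exact bandit algorithm or how the diversity constraint is imposed, but the general idea of a contextual MAB whose per-round recommendation set is forced to span distinct intervention categories can be sketched as follows. This is a minimal illustration using LinUCB-style arm scoring; the class name, the category-per-arm encoding, and the "at most one arm per category" rule are all assumptions for exposition, not the authors' actual design.

```python
import numpy as np

class DiversityConstrainedLinUCB:
    """Sketch: a contextual MAB (LinUCB-style) that recommends a small set
    of interventions per round, requiring the set to span distinct
    intervention categories (a simple structural diversity constraint)."""

    def __init__(self, n_arms, dim, arm_categories, alpha=1.0):
        self.alpha = alpha
        self.arm_categories = arm_categories  # one category label per arm
        # Per-arm ridge-regression statistics: A = I + sum(x x^T), b = sum(r * x)
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def _ucb(self, arm, x):
        """Upper confidence bound: mean estimate plus exploration bonus."""
        A_inv = np.linalg.inv(self.A[arm])
        theta = A_inv @ self.b[arm]
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def recommend(self, x, k=3):
        """Greedily pick the k highest-UCB arms, at most one per category."""
        scores = [(self._ucb(a, x), a) for a in range(len(self.A))]
        chosen, used = [], set()
        for _, a in sorted(scores, reverse=True):
            cat = self.arm_categories[a]
            if cat not in used:
                chosen.append(a)
                used.add(cat)
            if len(chosen) == k:
                break
        return chosen

    def update(self, arm, x, reward):
        """Observe the user's response to one recommended intervention."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

In a weight-management setting, the context vector `x` would encode the learned user features (e.g., the deep-learning-based representation of health history), each arm would be a candidate intervention, and the categories would be the diversity dimensions the paper refers to.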


research
05/17/2021

Learn to Intervene: An Adaptive Learning Policy for Restless Bandits in Application to Preventive Healthcare

In many public health settings, it is important for patients to adhere t...
research
07/01/2021

The Use of Bandit Algorithms in Intelligent Interactive Recommender Systems

In today's business marketplace, many high-tech Internet enterprises con...
research
09/12/2022

"Some other poor soul's problems": a peer recommendation intervention for health-related social support

Online health communities (OHCs) offer the promise of connecting with su...
research
03/01/2017

Human Interaction with Recommendation Systems: On Bias and Exploration

Recommendation systems rely on historical user data to provide suggestio...
research
02/19/2019

Bayesian Exploration with Heterogeneous Agents

It is common in recommendation systems that users both consume and produ...
research
04/16/2023

A Field Test of Bandit Algorithms for Recommendations: Understanding the Validity of Assumptions on Human Preferences in Multi-armed Bandits

Personalized recommender systems suffuse modern life, shaping what media...
research
05/22/2023

Limited Resource Allocation in a Non-Markovian World: The Case of Maternal and Child Healthcare

The success of many healthcare programs depends on participants' adheren...
