A Contextual-bandit-based Approach for Informed Decision-making in Clinical Trials

09/01/2018
by Yogatheesan Varatharajah, et al.

Clinical trials involving multiple treatments randomize treatment assignments so that treatment efficacies can be evaluated in an unbiased manner. Such evaluation is typically performed in post hoc studies using supervised-learning methods that require large amounts of randomized data. That approach is often suboptimal: some participants may suffer, or even die, because they did not receive the most appropriate treatment during the trial. Reinforcement-learning methods improve the situation by making it possible to learn treatment efficacies dynamically over the course of the trial and to adapt treatment assignments accordingly. Recent efforts using multi-arm bandits, a class of reinforcement-learning methods, have focused on maximizing clinical outcomes for a population assumed to be homogeneous. However, those approaches fail to account for the inter-participant variability that recent clinical-trial-based studies have made increasingly evident. We present a contextual-bandit-based online treatment-optimization algorithm that, when choosing treatments for new participants, accounts for individual patient characteristics in addition to maximizing clinical outcomes. We evaluated our algorithm using a real clinical trial dataset from the International Stroke Trial. The results of our retrospective analysis indicate that the proposed approach performs significantly better than either a random assignment of treatments (the current gold standard) or a multi-arm-bandit-based approach, providing substantial gains in the percentage of participants who are assigned the most suitable treatments. The contextual-bandit approach provides a 72.63% gain relative to random assignment, exceeding the gain achieved by the multi-arm-bandit approach.
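To make the setting concrete, the sketch below shows how a generic linear contextual bandit (LinUCB, used here purely as an illustrative choice, not necessarily the paper's exact algorithm) could assign one of several treatments based on a patient's feature vector and then update its estimates from the observed clinical outcome. The number of treatment arms, the feature dimension, the exploration parameter, and the simulated data are all assumptions for illustration.

```python
import numpy as np

# Illustrative LinUCB-style contextual bandit for assigning one of K treatments
# based on a patient's feature vector. This is a generic sketch under assumed
# arms, features, and rewards; the paper's algorithm and data may differ.

class LinUCB:
    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha                                       # exploration strength
        self.A = [np.eye(n_features) for _ in range(n_arms)]     # per-arm Gram matrices
        self.b = [np.zeros(n_features) for _ in range(n_arms)]   # per-arm reward vectors

    def choose(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                                    # ridge-regression payoff estimate
            ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Update the chosen arm's statistics with the observed outcome."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Hypothetical usage: 3 candidate treatments, 5 patient features (e.g., age,
# blood pressure, stroke-subtype indicators), binary outcome used as reward.
bandit = LinUCB(n_arms=3, n_features=5)
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=5)                     # simulated patient context
    arm = bandit.choose(x)
    reward = float(rng.random() < 0.5)         # simulated outcome; a real trial uses observed outcomes
    bandit.update(arm, x, reward)
```

In a retrospective evaluation such as the one described above, the observed outcome for the assigned treatment would come from the trial records rather than a simulator, and the quantity of interest is how often the bandit's choice matches the most suitable treatment for each participant.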

