On Multi-Armed Bandit Designs for Phase I Clinical Trials

03/17/2019
by Maryam Aziz, et al.

We study the problem of finding the optimal dosage in a phase I clinical trial through the multi-armed bandit lens. We advocate the use of the Thompson Sampling principle, a flexible algorithm that can accommodate different types of monotonicity assumptions on the toxicity and efficacy of the doses. For the simplest version of Thompson Sampling, based on a uniform prior distribution for each dose, we provide finite-time upper bounds on the number of sub-optimal dose selections, which is unprecedented for dose-finding algorithms. Through a large simulation study, we then show that Thompson Sampling based on more sophisticated prior distributions outperforms state-of-the-art dose identification algorithms in different types of phase I clinical trials.
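The simplest version described in the abstract, Thompson Sampling with a uniform prior per dose, can be sketched as follows. This is a hedged illustration, not the authors' exact design: it assumes independent Beta(1,1) priors on each dose's toxicity probability, Bernoulli toxicity outcomes, and a selection rule that picks the dose whose posterior sample is closest to a target toxicity rate; the target value and the simulated toxicity curve are made up for the example.

```python
import random

def thompson_sampling_dose(n_doses, target_tox, true_tox, horizon, seed=0):
    """Illustrative Thompson Sampling for dose finding with uniform priors.

    Each dose starts with a Beta(1, 1) (uniform) prior on its toxicity
    probability. At every step we draw one posterior sample per dose,
    administer the dose whose sample is closest to the target toxicity,
    observe a Bernoulli toxicity outcome, and update that dose's posterior.
    Returns how many times each dose was selected.
    """
    rng = random.Random(seed)
    tox_count = [0] * n_doses      # observed toxicities per dose
    no_tox_count = [0] * n_doses   # observed non-toxicities per dose
    selections = [0] * n_doses
    for _ in range(horizon):
        samples = [rng.betavariate(1 + tox_count[d], 1 + no_tox_count[d])
                   for d in range(n_doses)]
        dose = min(range(n_doses), key=lambda d: abs(samples[d] - target_tox))
        toxic = rng.random() < true_tox[dose]  # simulated patient outcome
        tox_count[dose] += toxic
        no_tox_count[dose] += not toxic
        selections[dose] += 1
    return selections

# Hypothetical scenario: five doses, target toxicity 0.30,
# with dose index 2 matching the target exactly.
allocation = thompson_sampling_dose(5, 0.30, [0.05, 0.15, 0.30, 0.45, 0.60], 500)
```

As the posteriors concentrate, the allocation should favor the dose whose true toxicity is nearest the target, which is the behavior the finite-time bounds in the paper quantify for sub-optimal doses.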


research
01/14/2022

Application of Multi-Armed Bandits to Model-assisted designs for Dose-Finding Clinical Trials

We consider applying multi-armed bandits to model-assisted designs for d...
research
06/07/2021

Multi-armed Bandit Requiring Monotone Arm Sequences

In many online learning or multi-armed bandit problems, the taken action...
research
07/11/2019

Online Learning to Estimate Warfarin Dose with Contextual Linear Bandits

Warfarin is one of the most commonly used oral blood anticoagulant agent...
research
05/19/2022

Adaptive Experiments and a Rigorous Framework for Type I Error Verification and Computational Experiment Design

This PhD thesis covers breakthroughs in several areas of adaptive experi...
research
01/04/2021

State of the Art on the Application of Multi-Armed Bandits

The multi-armed bandit offers the advantage to learn and exploit the alre...
research
06/09/2020

Learning for Dose Allocation in Adaptive Clinical Trials with Safety Constraints

Phase I dose-finding trials are increasingly challenging as the relation...
