Safe Linear Leveling Bandits

12/13/2021
by Ilker Demirel, et al.

Multi-armed bandits (MAB) are extensively studied in various settings where the objective is to maximize the actions' outcomes (i.e., rewards) over time. Since safety is crucial in many real-world problems, safe versions of MAB algorithms have also garnered considerable interest. In this work, we tackle a different critical task through the lens of linear stochastic bandits, where the aim is to keep the actions' outcomes close to a target level while respecting a two-sided safety constraint, which we call leveling. Such a task is prevalent in numerous domains; many healthcare problems, for instance, require keeping a physiological variable within a range and preferably close to a target level. This radical change in objective necessitates a new acquisition strategy, which is at the heart of any MAB algorithm. We propose SALE-LTS (Safe Leveling via Linear Thompson Sampling), an algorithm with a novel acquisition strategy tailored to this task, and show that it achieves sublinear regret with the same time and dimension dependence as previous works on the classical reward-maximization problem absent any safety constraint. We demonstrate and discuss the algorithm's empirical performance in detail through thorough experiments.
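As a rough illustration of the leveling objective described above, the sketch below runs a linear Thompson sampling loop whose acquisition step picks the arm whose sampled outcome is closest to a target level, rather than the largest. This is a hypothetical toy, not the paper's SALE-LTS: it omits the two-sided safety-constraint handling, and all names, dimensions, and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): linear outcomes y = x @ theta + noise,
# a target level b, and a fixed finite arm set.
d, n_arms, T = 3, 20, 500
theta_true = rng.normal(size=d)
arms = rng.normal(size=(n_arms, d))
b, noise_sd = 1.0, 0.1

# Standard ridge-regression posterior bookkeeping for linear Thompson sampling.
lam = 1.0
V = lam * np.eye(d)   # precision matrix
xy = np.zeros(d)      # running sum of x_t * y_t

regret = 0.0
for t in range(T):
    theta_hat = np.linalg.solve(V, xy)
    theta_sample = rng.multivariate_normal(theta_hat, np.linalg.inv(V))

    # Leveling acquisition: instead of maximizing x @ theta_sample,
    # choose the arm whose sampled outcome is closest to the target b.
    est = arms @ theta_sample
    x = arms[np.argmin(np.abs(est - b))]

    # Observe a noisy outcome and update the posterior statistics.
    y = x @ theta_true + noise_sd * rng.normal()
    V += np.outer(x, x)
    xy += y * x

    # Leveling regret: excess distance to the target vs. the best arm.
    regret += abs(x @ theta_true - b) - np.abs(arms @ theta_true - b).min()

print(f"cumulative leveling regret after {T} rounds: {regret:.3f}")
```

Note how the only change from classical linear Thompson sampling is the argmin over |estimate - target| in the acquisition step; the paper's contribution additionally keeps exploration inside the two-sided safe region, which this sketch does not attempt.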


Related research

- A Doubly Optimistic Strategy for Safe Linear Bandits (09/27/2022)
- Linear Stochastic Bandits Under Safety Constraints (08/16/2019)
- Thompson Sampling for Linearly Constrained Bandits (04/20/2020)
- Conservative Contextual Linear Bandits (11/19/2016)
- Bandits with Temporal Stochastic Constraints (11/22/2018)
- Safe Linear Stochastic Bandits (11/21/2019)
- From Stateless to Stateful Priorities: Technical Report (02/20/2020)
