Stochastic Conservative Contextual Linear Bandits

03/29/2022
by Jiabin Lin, et al.

Many physical systems have underlying safety considerations that require the deployed strategy to satisfy a set of constraints. Further, we often have only partial information on the state of the system. We study the problem of safe real-time decision making under uncertainty. In this paper, we formulate a conservative stochastic contextual bandit problem for real-time decision making in which an adversary chooses a distribution over the set of possible contexts and the learner is subject to safety/performance constraints. The learner observes only the context distribution, while the exact context remains unknown, and the goal is to develop an algorithm that selects a sequence of optimal actions to maximize the cumulative reward without violating the safety constraints at any time step. Leveraging the UCB algorithm for this setting, we propose a conservative linear UCB algorithm for stochastic bandits with context distributions. We prove an upper bound on the regret of the algorithm and show that it can be decomposed into three terms: (i) an upper bound for the regret of the standard linear UCB algorithm, (ii) a constant term (independent of the time horizon) that accounts for the loss of being conservative in order to satisfy the safety constraint, and (iii) a constant term (independent of the time horizon) that accounts for the loss incurred because the contexts are unknown and only their distribution is known. To validate the performance of our approach, we perform extensive simulations on synthetic data and on real-world maize data collected through the Genomes to Fields (G2F) initiative.
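
To make the abstract's algorithmic idea concrete, here is a minimal Python sketch of how a conservative linear UCB with context distributions might look on synthetic data. It is not the paper's exact algorithm: the constant confidence width `beta`, the Gaussian context model, the known safe `baseline_arm`, the assumption that the baseline's mean reward is known, and the use of mean feature vectors in the least-squares update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 5, 10, 2000      # feature dimension, number of arms, horizon
alpha = 0.1                 # allowed fraction of performance loss w.r.t. the baseline
beta = 1.0                  # confidence width (set by theory in the paper; a constant here)
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)
baseline_arm = 0            # assumed known, safe baseline action

def sample_round(m=50):
    """Return (mean feature matrix under the context distribution, realized features)."""
    contexts = rng.normal(size=(m, K, d)) / np.sqrt(d)   # m equally likely contexts
    return contexts.mean(axis=0), contexts[rng.integers(m)]

A = np.eye(d)               # regularized Gram matrix
b = np.zeros(d)
lcb_played_sum = 0.0        # running lower bound on the reward collected so far
baseline_sum = 0.0          # running (assumed known) mean reward of the baseline

for t in range(T):
    mean_feats, true_feats = sample_round()
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b

    # Confidence bounds use the *mean* features, since only the context
    # distribution, not the realized context, is observed by the learner.
    widths = beta * np.sqrt(np.einsum('kd,de,ke->k', mean_feats, A_inv, mean_feats))
    ucb = mean_feats @ theta_hat + widths
    lcb = mean_feats @ theta_hat - widths
    a_opt = int(np.argmax(ucb))

    baseline_mean = mean_feats[baseline_arm] @ theta_star  # assumed known for this sketch

    # Conservative check: commit to the optimistic arm only if, in the worst case,
    # cumulative reward stays above (1 - alpha) times the baseline's cumulative reward.
    if lcb_played_sum + lcb[a_opt] >= (1 - alpha) * (baseline_sum + baseline_mean):
        a_t = a_opt
    else:
        a_t = baseline_arm

    # Environment feedback is generated from the realized (unobserved) context.
    reward = true_feats[a_t] @ theta_star + 0.1 * rng.normal()

    # Update the least-squares statistics with the mean features of the played arm.
    x = mean_feats[a_t]
    A += np.outer(x, x)
    b += reward * x
    lcb_played_sum += lcb[a_t]
    baseline_sum += baseline_mean
```

The `if` branch is where conservatism enters: the optimistic arm is played only when a pessimistic estimate of cumulative reward keeps the learner within a (1 - alpha) factor of the baseline, which is the mechanism behind the constant conservativeness term in the regret bound.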

Related research

11/19/2016 · Conservative Contextual Linear Bandits
04/17/2021 · Conservative Contextual Combinatorial Cascading Bandit
09/30/2020 · Stage-wise Conservative Linear Bandits
06/06/2019 · Stochastic Bandits with Context Distributions
07/28/2022 · Distributed Stochastic Bandit Learning with Context Distributions
03/29/2023 · Federated Stochastic Bandit Learning with Unobserved Context
04/13/2020 · Power-Constrained Bandits
