Continuous-Time Multi-Armed Bandits with Controlled Restarts

06/30/2020
by   Semih Cayci, et al.

Time-constrained decision processes are ubiquitous in fundamental applications across physics, biology, and computer science. Recently, restart strategies have gained significant attention for boosting the efficiency of time-constrained processes by expediting completion times. In this work, we investigate the bandit problem with controlled restarts for time-constrained decision processes and develop provably good learning algorithms. In particular, we consider a bandit setting where each decision takes a random completion time and yields a random, correlated reward upon completion, with both quantities unknown at the time of the decision. The goal of the decision-maker is to maximize the expected total reward subject to a time constraint τ. As an additional control, we allow the decision-maker to interrupt an ongoing task and forgo its reward in favor of a potentially more rewarding alternative. For this problem, we develop efficient online learning algorithms that achieve O(log τ) regret over a finite action space of restart strategies and O(√(τ log τ)) regret over a continuous one. We demonstrate the applicability of our algorithm by using it to boost the performance of SAT solvers.
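
To make the setting concrete, below is a minimal sketch, not the algorithm from the paper, of how a learner might choose among a finite set of restart thresholds under a time budget. Each arm is a restart threshold: an attempt that runs past its threshold is interrupted, its reward is forgone, and a new attempt begins. The completion-time model, the unit reward for a completed task, and the UCB-style index on the empirical reward rate are all illustrative assumptions.

import math
import random

def run_attempt(threshold, rng):
    """Simulate one task attempt under a given restart threshold.

    Returns (elapsed_time, reward): reward is 1.0 if the task finishes
    before the threshold, else 0.0 (the attempt is interrupted).
    """
    # Hypothetical completion-time model: mostly fast, occasionally very slow.
    completion = rng.expovariate(1.0) if rng.random() < 0.8 else rng.expovariate(0.05)
    if completion <= threshold:
        return completion, 1.0
    return threshold, 0.0   # interrupted: time is spent, reward is forgone

def ucb_restart_bandit(thresholds, tau, seed=0):
    """UCB-style selection over restart thresholds under a time budget tau."""
    rng = random.Random(seed)
    k = len(thresholds)
    pulls = [0] * k
    total_reward = [0.0] * k
    total_time = [0.0] * k
    elapsed, earned, t = 0.0, 0.0, 0

    while elapsed < tau:
        t += 1
        # Pull each arm once first, then pick the arm with the highest
        # index: empirical reward per unit time plus an exploration bonus.
        if t <= k:
            arm = t - 1
        else:
            def index(i):
                rate = total_reward[i] / max(total_time[i], 1e-9)
                bonus = math.sqrt(2.0 * math.log(t) / pulls[i])
                return rate + bonus
            arm = max(range(k), key=index)

        duration, reward = run_attempt(thresholds[arm], rng)
        duration = min(duration, tau - elapsed)   # never exceed the budget
        pulls[arm] += 1
        total_time[arm] += duration
        total_reward[arm] += reward
        elapsed += duration
        earned += reward

    return earned, pulls

if __name__ == "__main__":
    reward, pulls = ucb_restart_bandit(thresholds=[0.5, 1.0, 2.0, 5.0], tau=1000.0)
    print("total reward:", reward, "pulls per threshold:", pulls)

Indexing arms by their reward rate (reward per unit of time consumed) rather than per-pull reward reflects the time-constrained objective: the learner cares about total reward earned within the budget τ, so slow arms are penalized even when they eventually pay off.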

