Contextual Combinatorial Conservative Bandits

11/26/2019
by Xiaojin Zhang, et al.

The multi-armed bandit (MAB) problem asks for sequential decisions that balance exploration and exploitation, and has been successfully applied to a wide range of practical scenarios. Various algorithms have been designed to achieve high reward in the long term. However, their short-term performance can be rather low, which is harmful in risk-sensitive applications. Building on previous work on conservative bandits, we introduce a framework of contextual combinatorial conservative bandits. We present an algorithm and prove a regret bound of Õ(d^2 + d√T), where d is the dimension of the feature vectors and T is the total number of time steps. We further provide an algorithm, together with a regret analysis, for the case where the conservative reward is unknown. Experiments are conducted, and the results validate the effectiveness of our algorithms.
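The conservative mechanism described above can be illustrated with a minimal sketch: a LinUCB-style contextual linear bandit that falls back to a known baseline arm whenever a pessimistic (lower-confidence) estimate of cumulative reward would violate a (1 − α) budget relative to the baseline's cumulative reward. The environment, dimensions, and the particular budget check below are illustrative assumptions for a single-arm (non-combinatorial) setting, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 5, 10, 2000          # feature dim, arms per round, horizon (illustrative)
alpha_conf = 1.0                    # UCB exploration width
alpha_budget = 0.1                  # conservative budget parameter alpha
baseline_reward = 0.3               # known expected reward of the conservative default arm

theta_star = rng.normal(size=d)     # unknown reward parameter (simulated)
theta_star /= np.linalg.norm(theta_star)

A = np.eye(d)                       # regularized Gram matrix for ridge regression
b = np.zeros(d)
cum_reward, cum_baseline = 0.0, 0.0

for t in range(T):
    # Fresh random unit-norm feature vectors each round (the contextual part)
    X = rng.normal(size=(n_arms, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)

    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    widths = np.sqrt(np.einsum('ij,jk,ik->i', X, A_inv, X))
    best = int(np.argmax(X @ theta_hat + alpha_conf * widths))

    # Conservative check: a pessimistic estimate of the reward collected so far
    # plus the lower confidence bound of the candidate arm must stay above a
    # (1 - alpha) fraction of what always playing the baseline would have earned.
    lcb_best = X[best] @ theta_hat - alpha_conf * widths[best]
    safe = cum_reward + lcb_best >= (1 - alpha_budget) * (cum_baseline + baseline_reward)

    if safe:
        x = X[best]
        r = float(x @ theta_star) + 0.1 * rng.normal()
        A += np.outer(x, x)          # only exploratory plays update the model
        b += r * x
        cum_reward += r
    else:
        cum_reward += baseline_reward  # play the known baseline arm instead
    cum_baseline += baseline_reward
```

Early on, the lower confidence bound is loose, so the learner plays the baseline and accumulates budget; once enough budget is built up, it switches to UCB exploration, which is the qualitative behavior conservative bandit algorithms are designed to exhibit.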


Related research

Contextual Bandits Evolving Over Finite Time (11/14/2019)
Contextual bandits have the same exploration-exploitation trade-off as s...

A One-Size-Fits-All Solution to Conservative Bandit Problems (12/14/2020)
In this paper, we study a family of conservative bandit problems (CBPs) ...

Dual-Mandate Patrols: Multi-Armed Bandits for Green Security (09/14/2020)
Conservation efforts in green security domains to protect wildlife and f...

Conservative Contextual Combinatorial Cascading Bandit (04/17/2021)
Conservative mechanism is a desirable property in decision-making proble...

A Unified Framework for Conservative Exploration (06/22/2021)
We study bandits and reinforcement learning (RL) subject to a conservati...

Addressing the Long-term Impact of ML Decisions via Policy Regret (06/02/2021)
Machine Learning (ML) increasingly informs the allocation of opportuniti...

Rarely-switching linear bandits: optimization of causal effects for the real world (05/30/2019)
Exploring the effect of policies in many real world scenarios is difficu...
