
Stochastic Bandits with Linear Constraints
We study a constrained contextual linear bandit setting, where the goal ...

Neural Contextual Bandits with Upper Confidence Bound-Based Exploration
We study the stochastic contextual bandit problem, where the reward is g...

Finite-Time Analysis of Kernelised Contextual Bandits
We tackle the problem of online reward maximisation over a large finite ...

Contextual Bandits with Random Projection
Contextual bandits with linear payoffs, which are also known as linear b...

Tractable contextual bandits beyond realizability
Tractable contextual bandit algorithms often rely on the realizability a...

Contextual Recommendations and Low-Regret Cutting-Plane Algorithms
We consider the following variant of contextual linear bandits motivated...

Semiparametric Contextual Bandits
This paper studies semiparametric contextual bandits, a generalization o...
Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits
We consider an adversarial variant of the classic K-armed linear contextual bandit problem, where the sequence of loss functions associated with each arm is allowed to change without restriction over time. Under the assumption that the d-dimensional contexts are generated i.i.d. at random from a known distribution, we develop computationally efficient algorithms based on the classic Exp3 algorithm. Our first algorithm, RealLinExp3, is shown to achieve a regret guarantee of O(√(KdT)) over T rounds, which matches the best available bound for this problem. Our second algorithm, RobustLinExp3, is shown to be robust to misspecification, in that it achieves a regret bound of O((Kd)^{1/3} T^{2/3} + ε√(d) T) if the true reward function is linear up to an additive nonlinear error uniformly bounded in absolute value by ε. To our knowledge, our performance guarantees constitute the very first results on this problem setting.
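The abstract's algorithms build on the classic Exp3 method for adversarial bandits. The sketch below is not RealLinExp3 or RobustLinExp3 (the paper's contextual variants) but a minimal implementation of the underlying Exp3 idea: maintain exponential weights over the K arms, sample with a small uniform exploration floor, and update the pulled arm's weight with an importance-weighted loss estimate. All names and parameter values here are illustrative assumptions.

```python
import math
import random

def exp3(K, T, loss_fn, gamma=0.1, seed=0):
    """Classic Exp3 for a K-armed adversarial bandit (illustrative sketch).

    loss_fn(t, arm) returns the loss in [0, 1] of pulling `arm` in round t.
    Returns the total loss incurred over T rounds.
    """
    rng = random.Random(seed)
    weights = [1.0] * K
    total_loss = 0.0
    for t in range(T):
        wsum = sum(weights)
        # Mix the exponential-weights distribution with uniform exploration.
        probs = [(1 - gamma) * w / wsum + gamma / K for w in weights]
        arm = rng.choices(range(K), weights=probs)[0]
        loss = loss_fn(t, arm)
        total_loss += loss
        # Importance-weighted estimate: unbiased for the pulled arm's loss.
        est = loss / probs[arm]
        weights[arm] *= math.exp(-gamma * est / K)
    return total_loss
```

On a toy instance where one arm consistently incurs low loss, the weights concentrate on it and the average loss approaches that arm's loss plus the γ exploration overhead; the paper's algorithms extend this weighting scheme to the linear contextual setting.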