
Problem-Complexity Adaptive Model Selection for Stochastic Linear Bandits
We consider the problem of model selection for two popular stochastic li...

Tractable contextual bandits beyond realizability
Tractable contextual bandit algorithms often rely on the realizability a...

Rate-adaptive model selection over a collection of black-box contextual bandit algorithms
We consider the model selection task in the stochastic contextual bandit...

Best of many worlds: Robust model selection for online supervised learning
We introduce algorithms for online, full-information prediction that are...

Optimal Model Selection in Contextual Bandits with Many Classes via Offline Oracles
We study the problem of model selection for contextual bandits, in which...

Model Selection with Near Optimal Rates for Reinforcement Learning with General Model Classes
We address the problem of model selection for the finite horizon episodi...

Parameter and Feature Selection in Stochastic Linear Bandits
We study two model selection settings in stochastic linear bandits (LB)....
Model Selection for Generic Contextual Bandits
We consider the problem of model selection for general stochastic contextual bandits under the realizability assumption. We propose a successive-refinement-based algorithm called Adaptive Contextual Bandit (ACB), which works in phases and successively eliminates model classes that are too simple to fit the given instance. We prove that this algorithm is adaptive: its regret rate order-wise matches that of FALCON, the state-of-the-art contextual bandit algorithm of Simchi-Levi et al. '20, which requires knowledge of the true model class. The price of not knowing the correct model class is only an additive term contributing to the second-order term in the regret bound. This cost has the intuitive property that it becomes smaller as the model class becomes easier to identify, and vice versa. We then show that a much simpler explore-then-commit (ETC) style algorithm also obtains a regret rate matching that of FALCON, despite not knowing the true model class. However, the cost of model selection is higher for ETC than for ACB, as expected. Furthermore, ACB applied to the linear bandit setting with unknown sparsity order-wise recovers the model selection guarantees previously established by algorithms tailored to the linear setting.
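The ETC-style idea above can be illustrated with a minimal sketch: explore uniformly, fit each nested model class to the logged data, select the smallest class whose fit is nearly as good as the richest one, then commit to acting greedily under it. Everything below (the linear instance, the nested feature-prefix classes, the tolerance 0.05, and all names) is an illustrative assumption for exposition, not the paper's actual algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: linear rewards using only the first D_TRUE of D
# features; model class d predicts with the first d features (nested classes).
K, D, D_TRUE = 3, 8, 2            # actions, max feature dim, true dim
DIMS = [1, 2, 4, 8]               # nested model classes (feature prefixes)
T_EXPLORE, T_COMMIT = 600, 400
THETA = np.zeros((K, D))
THETA[:, :D_TRUE] = rng.uniform(0.5, 2.0, size=(K, D_TRUE))

def reward(x, a):
    return THETA[a] @ x + 0.1 * rng.normal()

# ---- explore phase: pull uniformly random actions, log the data ----
X = rng.normal(size=(T_EXPLORE, D))
A = rng.integers(0, K, size=T_EXPLORE)
R = np.array([reward(X[t], A[t]) for t in range(T_EXPLORE)])

# ---- fit each nested class by per-action least squares, score by MSE ----
def fit_and_score(d):
    theta_hat = np.zeros((K, D))
    for a in range(K):
        mask = A == a
        theta_hat[a, :d] = np.linalg.lstsq(X[mask][:, :d], R[mask],
                                           rcond=None)[0]
    mse = np.mean((np.einsum('td,td->t', theta_hat[A], X) - R) ** 2)
    return theta_hat, mse

fits = {d: fit_and_score(d) for d in DIMS}
best_mse = min(mse for _, mse in fits.values())
# pick the *smallest* class whose fit is within tolerance of the richest one
selected = min(d for d in DIMS if fits[d][1] <= best_mse + 0.05)
theta_sel = fits[selected][0]

# ---- commit phase: act greedily under the selected model class ----
total = sum(reward(x, int(np.argmax(theta_sel @ x)))
            for x in rng.normal(size=(T_COMMIT, D)))
```

On this toy instance the 1-dimensional class incurs a large misspecification error and is eliminated, while the richer classes add nothing over dimension 2, so the smallest adequate class is chosen; the phased elimination in ACB refines this one-shot selection by interleaving elimination with play.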