Hedging the Drift: Learning to Optimize under Non-Stationarity

03/04/2019
by Wang Chi Cheung, et al.

We introduce general data-driven decision-making algorithms that achieve state-of-the-art dynamic regret bounds for non-stationary bandit settings. These settings capture applications such as advertisement allocation and dynamic pricing in changing environments. We show how the difficulty posed by the non-stationarity, which is unknown a priori and possibly adversarial, can be overcome by an unconventional marriage between stochastic and adversarial bandit learning algorithms. Our main contribution is a general algorithmic recipe that converts the rate-optimal Upper-Confidence-Bound (UCB) algorithm for stationary bandit settings into a tuned Sliding Window-UCB algorithm with optimal dynamic regret for the corresponding non-stationary counterpart. Boosted by the novel bandit-over-bandit framework, which adapts automatically to the unknown changing environment, the recipe even attains, in a (surprisingly) parameter-free manner, this optimal dynamic regret when the amount of non-stationarity is moderate to large, and an improved dynamic regret (compared to existing literature) otherwise. In addition to the classical exploration-exploitation trade-off, our algorithms leverage the power of the "forgetting principle" in their online learning processes, which is vital in changing environments. We further conduct extensive numerical experiments on both synthetic data and the CPRM-12-001: On-Line Auto Lending dataset provided by the Center for Pricing and Revenue Management at Columbia University to show that our proposed algorithms achieve superior dynamic regret performance.
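The "forgetting principle" behind a sliding-window UCB policy can be illustrated with a short sketch. This is hypothetical code, not the authors' implementation: the function name, the confidence-radius constant, and the fixed window length are all illustrative, and in the paper's bandit-over-bandit framework the window length would itself be tuned online rather than supplied by hand.

```python
import math
from collections import deque

def sw_ucb(arms, horizon, window, confidence=1.0):
    """Sketch of a sliding-window UCB policy: arm statistics are computed
    over the last `window` pulls only, so stale observations from a
    drifted environment are forgotten. Names are illustrative."""
    history = deque()  # (arm_index, reward) pairs currently inside the window
    for t in range(horizon):
        counts = [0] * len(arms)
        sums = [0.0] * len(arms)
        for arm, reward in history:
            counts[arm] += 1
            sums[arm] += reward
        untried = [i for i, c in enumerate(counts) if c == 0]
        if untried:
            # pull any arm with no observation inside the current window
            choice = untried[0]
        else:
            # UCB index: windowed empirical mean plus a confidence radius,
            # with the log term capped at the window length
            ucb = [sums[i] / counts[i]
                   + confidence * math.sqrt(math.log(min(t + 1, window)) / counts[i])
                   for i in range(len(arms))]
            choice = max(range(len(arms)), key=ucb.__getitem__)
        history.append((choice, arms[choice](t)))  # arm returns a (possibly drifting) reward
        if len(history) > window:
            history.popleft()  # forget the oldest observation
    return history

# Illustrative run with two stationary arms of mean 0.2 and 0.8: inside any
# window, the better arm dominates the pulls while the worse arm is still
# re-explored occasionally, which is what lets the policy track drift.
hist = sw_ucb([lambda t: 0.2, lambda t: 0.8], horizon=200, window=50)
```

The key design point relative to vanilla UCB is that both the empirical means and the pull counts are recomputed from the window only, so an arm whose mean has drifted is rediscovered after at most roughly one window of pulls.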

Related research:

- Learning to Optimize under Non-Stationarity (10/06/2018): "We introduce algorithms that achieve state-of-the-art dynamic regret bou..."
- Reinforcement Learning under Drift (06/07/2019): "We propose algorithms with state-of-the-art dynamic regret bounds for un..."
- On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems (02/23/2018): "We study the non-stationary stochastic multiarmed bandit (MAB) problem a..."
- Kiefer Wolfowitz Algorithm is Asymptotically Optimal for a Class of Non-Stationary Bandit Problems (02/26/2017): "We consider the problem of designing an allocation rule or an 'online le...'"
- An Algorithmic Framework to Control Bias in Bandit-based Personalization (02/23/2018): "Personalization is pervasive in the online space as it leads to higher e..."
- Dynamic Regret of Policy Optimization in Non-stationary Environments (06/30/2020): "We consider reinforcement learning (RL) in episodic MDPs with adversaria..."
- Meta Dynamic Pricing: Learning Across Experiments (02/28/2019): "We study the problem of learning across a sequence of price experiments ..."
