Learning and Optimization with Seasonal Patterns
Seasonality is a common form of non-stationarity in the business world. We study a decision maker who tries to learn the optimal decision over time when the environment is unknown and evolves with seasonality. We consider a multi-armed bandit (MAB) framework in which the mean rewards are periodic. The unknown periods of the arms can differ and scale polynomially with the horizon length T. We propose a two-stage policy that combines Fourier analysis with a confidence-bound based learning procedure to learn the periods and minimize the regret. In stage one, the policy correctly estimates the periods of all arms with high probability. In stage two, the policy explores the mean rewards of the arms in each phase using the periods estimated in stage one and exploits the optimal arm in the long run. We show that our policy achieves a regret rate of Õ(√(T∑_k=1^K T_k)), where K is the number of arms and T_k is the period of arm k. This matches the optimal regret rate O(√(TK)) of the classic MAB problem if each phase of an arm is regarded as a separate arm.
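The abstract does not include code, but the Fourier step of stage one can be sketched in isolation: given a trajectory of noisy rewards with a periodic mean, the dominant peak of the periodogram identifies the frequency, and hence the period. The function name, signal parameters, and noise model below are illustrative assumptions, not the paper's actual procedure (which also involves confidence bounds).

```python
import numpy as np

def estimate_period(rewards):
    """Estimate the period of a noisy periodic signal via its periodogram.

    Returns T / j*, where j* is the index of the dominant nonzero
    Fourier frequency of the length-T reward sequence.
    """
    x = np.asarray(rewards, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    j_star = 1 + np.argmax(spectrum[1:])  # skip the zero frequency
    return int(round(len(x) / j_star))

# Hypothetical example: one arm with a sinusoidal mean reward plus noise.
rng = np.random.default_rng(0)
T, true_period = 4096, 16
t = np.arange(T)
mean = 0.5 + 0.3 * np.sin(2 * np.pi * t / true_period)  # periodic mean reward
rewards = mean + 0.1 * rng.standard_normal(T)           # noisy observations
print(estimate_period(rewards))  # → 16
```

With the horizon T much longer than the period, the signal's spectral peak dominates the noise floor with high probability, which is the intuition behind recovering the periods correctly in stage one.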