Learning Contextual Bandits in a Non-stationary Environment

05/23/2018 ∙ by Qingyun Wu, et al.

Multi-armed bandit algorithms have become a reference solution for handling the explore/exploit dilemma in recommender systems and many other important real-world problems, such as display advertising. However, such algorithms usually assume a stationary reward distribution, which hardly holds in practice as users' preferences are dynamic. This inevitably costs a recommender system consistently suboptimal performance. In this paper, we consider the situation where the underlying distribution of reward remains unchanged over (possibly short) epochs and shifts at unknown time instants. In response, we propose a contextual bandit algorithm that detects possible changes of environment based on its reward estimation confidence and updates its arm selection strategy accordingly. A rigorous upper regret bound analysis of the proposed algorithm demonstrates its learning effectiveness in such a non-trivial environment. Extensive empirical evaluations on both synthetic and real-world datasets for recommendation confirm its practical utility in a changing environment.


1. Introduction

Multi-armed bandit algorithms provide a principled solution to the explore/exploit dilemma (Auer et al., 2002; Gittins, 1979; Auer, 2002), which exists in many important real-world applications such as display advertisement (Li et al., 2010), recommender systems (Li et al., 2010), and online learning to rank (Yue and Joachims, 2009). Intuitively, bandit algorithms adaptively designate a small amount of traffic to collect user feedback in each round while improving their model estimation quality on the fly. In recent years, contextual bandit algorithms (Li et al., 2010; Langford and Zhang, 2008; Filippi et al., 2010) have gained increasing attention due to their capability of leveraging contextual information to deliver better personalized online services. They assume the expected reward of each action is determined by an unknown bandit parameter in conjunction with the given context, which gives them an advantage when the space of recommendation is large but the rewards are interrelated.

Most existing stochastic contextual bandit algorithms assume a fixed yet unknown reward mapping function (Li et al., 2010; Filippi et al., 2010; Li et al., 2016; Wu et al., 2016; Gentile et al., 2017). In practice, this translates to the assumption that users' preferences remain static over time. However, this assumption rarely holds in reality as users' preferences can be influenced by various internal or external factors (Cialdini and Trost, 1998). For example, when a sports season ends after a championship, seasonal fans might switch to following a different sport and show little interest during the off-season. More importantly, such changes are often not observable to the learners. If a learning algorithm fails to model or recognize the possible changes of the environment, it would constantly make suboptimal choices, e.g., keep making out-of-date recommendations to users.

In this work, moving beyond a restrictive stationary environment assumption, we study a more sophisticated but realistic environment setting where the reward mapping function becomes stochastic over time. More specifically, we focus on the setting where there are abrupt changes in user preferences (e.g., user interest in a recommender system) and those changes are not observable to the learner beforehand. Between consecutive change points, the reward distribution remains stationary yet unknown, i.e., piecewise stationary. Under such a non-stationary environment assumption, we propose a two-level hierarchical bandit algorithm, which automatically detects and adapts to changes in the environment by maintaining a suite of contextual bandit models during identified stationary periods based on its interactions with the environment.

At the lower level of our hierarchical bandit algorithm, a set of contextual bandit models, referred to as slave bandits, are maintained to estimate the reward distribution in the current environment (i.e., a particular user) since the last detected change point. At the upper level, a master bandit model monitors the ‘badness’ of each slave bandit by examining whether its reward prediction error exceeds its confidence bound. If the environment has not changed, i.e., has been stationary since the last change, the probability of observing a large residual from a bandit model learned in that environment is bounded (Filippi et al., 2010; Abbasi-yadkori et al., 2011). Thus the ‘badness’ of slave bandit models reflects possible changes of the environment. Once a change is detected with high confidence, the master bandit discards the out-of-date slave bandits and creates new ones to fit the new environment. Consequently, the active slave bandit models form an admissible arm set for the master bandit to choose from. At each time, the master bandit algorithm chooses one of the active slave bandits to interact with the user, based on its estimated ‘badness’, and distributes user feedback to all active slave bandit models attached with this user for model updating. The master bandit model maintains its estimation confidence of the ‘badness’ of those slave bandits so as to recognize the out-of-date ones as early as possible.

We rigorously prove that the upper regret bound of our non-stationary contextual bandit algorithm is Õ(Γ_T √S_max), in which Γ_T is the total number of ground-truth environment changes up to time T and S_max is the longest stationary period till time T. This arguably is the lowest upper regret bound any bandit algorithm can achieve in such a non-stationary environment without further assumptions. Specifically, the best one can do in such an environment is to discard the old model and estimate a new one at each true change point, as no assumption about the change should be made. This leads to the same upper regret bound achieved by our algorithm. However, as the change points are unknown to the algorithm ahead of time, any early or late detection of the changes can only result in an increased regret. More importantly, we prove that if an algorithm fails to model the changes, a linear regret is inevitable. Extensive empirical evaluations on both a synthetic dataset and three real-world datasets for content recommendation confirm the improved utility of the proposed algorithm, compared with both state-of-the-art stationary and non-stationary bandit algorithms.

2. Related Work

Multi-armed bandit algorithms (Auer et al., 2002, 1995; Li et al., 2010; Filippi et al., 2010; Li et al., 2016; Gentile et al., 2017) have been extensively studied in the literature. However, most stochastic bandit algorithms assume the reward pertaining to an arm is determined by an unknown but fixed reward distribution or context mapping function. This restricts the algorithms to a stationary environment assumption, which is limiting considering the non-stationary nature of many real-world applications of bandit algorithms.

There are some existing works studying non-stationary bandit problems. A typical non-stationary environment setting is the abruptly changing environment, or piecewise stationary environment, in which the environment undergoes abrupt changes at unknown time points but remains stationary between two consecutive change points. To deal with such an environment, Hartland et al. (Hartland et al., 2006) proposed the Restart algorithm, in which a discount factor is introduced to exponentially decay the effect of past observations. Garivier and Moulines (Garivier and Moulines, 2011) proposed a discounted-UCB algorithm, which is similar to the Restart algorithm in discounting the historical observations. They also proposed a sliding window UCB algorithm, where only observations inside a sliding window are used to update the bandit model. Yu and Mannor (Yu and Mannor, 2009) proposed a windowed mean-shift detection algorithm to detect potential abrupt changes in the environment. An upper regret bound of O(Γ_T log T) is proved for the proposed algorithm, in which Γ_T is the number of ground-truth changes up to time T. However, they assume that at each iteration the agent can query a subset of arms for additional observations. Slivkins and Upfal (Slivkins and Upfal, 2008) considered a continuously changing environment, in which the expected reward of each arm follows Brownian motion. They proposed a UCB-like algorithm, which considers the volatility of each arm in such an environment. The algorithm restarts on a predefined schedule to account for the change of reward distribution.

Most existing solutions for non-stationary bandit problems focus on context-free scenarios, which cannot utilize the available contextual information for reward modeling. Ghosh et al. proposed an algorithm in (Ghosh et al., 2017) to deal with environment misspecification in contextual bandit problems. Their algorithm comprises a hypothesis test for linearity followed by a decision to use either the learnt linear contextual bandit model or a context-free bandit model. But this algorithm still assumes a stationary environment, i.e., neither the ground-truth linear model nor the unknown models change over time. Liu et al. (Liu et al., 2018) proposed to use cumulative sum and the Page-Hinkley test to detect sudden changes in the environment. An upper regret bound of O(√(Γ_T T log T)) is proved for one of their proposed algorithms. However, this work is limited to a simplified Bernoulli bandit environment. Recently, Luo et al. (Luo et al., 2017) studied the non-stationary bandit problem and proposed several bandit algorithms with statistical tests to adapt to changes in the environment. They analyzed various notions of regret including interval regret, switching regret, and dynamic regret. Hariri et al. (Hariri et al., 2015) proposed a contextual Thompson sampling algorithm with a change detection module, which iteratively applies a combination of cumulative sum charts and bootstrapping to capture potential changes of user preference in interactive recommendation. But no theoretical analysis is provided for the proposed algorithm.

3. Methodology

We develop a contextual bandit algorithm for a non-stationary environment, where the algorithm automatically detects the changes in the environment and maintains a suite of contextual bandit models over the detected stationary periods. In the following, we first describe the notation and our assumptions about the non-stationary environment, and then detail the proposed algorithm and its corresponding regret analysis.

3.1. Problem Setting and Formulation

In a multi-armed bandit problem, a learner takes turns to interact with the environment, such as a user or a group of users in a recommender system, with the goal of maximizing its accumulated reward collected from the environment over time T. At round t, the learner makes a choice among a finite, but possibly large, number of arms, i.e., a_t ∈ A = {a_1, a_2, …, a_K}, and gets the corresponding reward r_{a_t}, such as a user's click on a recommended item. In a contextual bandit setting, each arm a is associated with a feature vector x_a ∈ R^d (with ‖x_a‖_2 ≤ 1 without loss of generality) summarizing the side-information about it at a particular time point. The reward of each arm is assumed to be governed by an unknown bandit parameter θ* ∈ R^d (with ‖θ*‖_2 ≤ 1 without loss of generality), which characterizes the environment. This can be specified by a reward mapping function f_{θ*}: E[r_{a_t}] = f_{θ*}(x_{a_t}). In a stationary environment, θ* is constant over time.

In a non-stationary environment, the reward distribution over arms varies over time because of changes in the environment's bandit parameter θ*_t. In this paper, we consider abrupt changes in the environment (Garivier and Moulines, 2011; Hariri et al., 2015; Hartland et al., 2006), i.e., the ground-truth parameter θ*_t changes arbitrarily at arbitrary times, but remains constant between any two consecutive change points:

θ*_0 = ⋯ = θ*_{t_{c_1}−1} ≠ θ*_{t_{c_1}} = ⋯ = θ*_{t_{c_2}−1} ≠ θ*_{t_{c_2}} = ⋯

where the change points {t_{c_j}}_{j=1}^{Γ_T} of the underlying reward distribution and the corresponding bandit parameters {θ*_{t_{c_j}}}_{j=1}^{Γ_T} are unknown to the learner. We only assume there are at most Γ_T change points in the environment up to time T, with Γ_T ≪ T.

To simplify the discussion, a linear structure in f_{θ*_t} is postulated, but it can be readily extended to more complicated dependency structures, such as generalized linear models (Filippi et al., 2010), without changing the design of our algorithm. Specifically, we have,

r_{a_t} = x_{a_t}ᵀ θ*_t + η_t    (1)

in which η_t is Gaussian noise drawn from N(0, σ²), and the superscript * in θ*_t means it is the ground-truth bandit parameter of the environment. In addition, we impose the following assumption about the non-stationary environment, which guarantees the detectability of the changes, and reflects our insight on how to detect them on the fly,

Assumption 1.

For any two consecutive change points t_{c_j} and t_{c_{j+1}} in the environment, there exists Δ > 0, such that at least a ρ (0 < ρ ≤ 1) portion of all the arms satisfy,

|x_aᵀ θ*_{t_{c_j}} − x_aᵀ θ*_{t_{c_{j+1}}}| > Δ    (2)
Remark 1.

The above assumption is general and mild to satisfy in many practical scenarios, since it only requires a portion of the arms to have a recognizable change in their expected rewards. For example, a user may change his/her preference in sports news but not in political news. The arms that do not satisfy Eq (2) can be considered as having small deviations from the generic reward assumption made in Eq (1). We will later prove our bandit solution retains its regret scaling in the presence of such small deviations.

3.2. Dynamic Linear UCB

Based on the above assumption about a non-stationary environment, in any stationary period between two consecutive change points, the reward estimation error of a contextual bandit model trained on the observations collected from that period should be bounded with a high probability (Abbasi-yadkori et al., 2011; Chu et al., 2011). Otherwise, the model's consistently wrong predictions can only come from a change of the environment. Based on this insight, we can evaluate whether the stationary assumption holds by monitoring a bandit model's reward prediction quality over time. To reduce the variance of the prediction error from any single bandit model, we maintain an ensemble of models, creating and abandoning them on the fly.

Specifically, we propose a hierarchical bandit algorithm, in which a master multi-armed bandit model operates over a set of slave contextual bandit models to interact with the changing environment. The master model monitors the slave models’ reward estimation error over time, which is referred to as ‘badness’ in this paper, to evaluate whether a slave model is admissible for the current environment. Based on the estimated ‘badness’ of each slave model, the master model dynamically discards out-of-date slave models or creates new ones. At each round , the master model selects a slave model with the smallest lower confidence bound (LCB) of ‘badness’ to interact with the environment, i.e., the most promising slave model. The obtained observation is shared across all admissible slave models to update their model parameters. The process is illustrated in Figure 1.

Figure 1. Illustration of dLinUCB. The master bandit model maintains the ‘badness’ estimation of slave models over time to detect changes in the environment. At each round, the most promising slave model is chosen to interact with the environment; and the acquired feedback is shared across all admissible slave models for model update.

Any contextual bandit algorithm (Li et al., 2010; Filippi et al., 2010; Li et al., 2016; Wu et al., 2016) can serve as our slave model. Due to the simplified linear reward assumption made in Eq (1), we choose LinUCB (Li et al., 2010) for this purpose in this paper; but our proposed algorithm can be readily adapted to any other choice of slave model. This claim is also supported by our later regret analysis. As a result, we name our algorithm Dynamic Linear Bandit with Upper Confidence Bound, or dLinUCB for short.

In the following, we first briefly describe our chosen slave model LinUCB. Then we formally define the concept of ‘badness’, based on which we design the strategy for creating and discarding slave bandit models. Lastly, we explain how dLinUCB selects the most promising slave model from the admissible model set. The detailed description of dLinUCB is provided in Algorithm 1.

Slave bandit model: LinUCB. Each slave LinUCB model maintains all historical observations that the master model has assigned to it. Based on the assigned observations, a slave model m gets an estimate of user preference θ̂_m(t) = A_m(t)⁻¹ b_m(t) (Li et al., 2010), in which A_m(t) = λI + Σ_{τ∈I_m(t)} x_{a_τ} x_{a_τ}ᵀ, I is a d×d identity matrix, λ is the coefficient for L2 regularization; b_m(t) = Σ_{τ∈I_m(t)} x_{a_τ} r_{a_τ}, and I_m(t) is an index set recording when the observations were assigned to slave model m up to time t. According to (Abbasi-yadkori et al., 2011), with a high probability the expected reward estimation error of model m is upper bounded: |x_aᵀ θ̂_m(t) − E[r_a]| ≤ B_m(t, a), in which B_m(t, a) = α_m(t) √(x_aᵀ A_m(t)⁻¹ x_a). Based on the upper confidence bound principle (Auer et al., 2002), a slave model takes an action using the following arm selection strategy (i.e., line 6 in Algorithm 1):

a_t = argmax_{a ∈ A_t} ( x_aᵀ θ̂_m(t) + B_m(t, a) )    (3)
1: Inputs: δ₁, δ₂ ∈ (0, 1), λ > 0, τ > 0, δ̃ ∈ [0, δ₂]
2: Initialize: Maintain a set of slave models M_t with M_1 = {m_1}, initialize m_1: A_1(0) = λI, b_1(0) = 0, θ̂_1(0) = 0; and initialize the ‘badness’ statistics of it: ê_1(0) = 0, τ_1(0) = 0
3: for t = 1 to T do
4:     Choose a slave model m̂_t = argmin_{m ∈ M_t} (ê_m(t−1) − √(ln(4/δ₁)/(2 τ_m(t−1)))) from the active slave model set M_t
5:     Observe candidate arm pool A_t, with x_a ∈ R^d for ∀a ∈ A_t
6:     Take action a_t = argmax_{a ∈ A_t} (x_aᵀ θ̂_{m̂_t}(t) + B_{m̂_t}(t, a)), in which B_{m̂_t}(t, a) is defined in Eq (3)
7:     Observe payoff r_{a_t}
8:     Set CreateNewFlag = True
9:     for m ∈ M_t do
10:         e_m(t) = 1{|x_{a_t}ᵀ θ̂_m(t) − r_{a_t}| > B_m(t, a_t) + ν}, where ν is defined in Eq (4)
11:         if e_m(t) = 0 then
12:             Update slave model m: A_m(t+1) = A_m(t) + x_{a_t} x_{a_t}ᵀ, b_m(t+1) = b_m(t) + x_{a_t} r_{a_t}, θ̂_m(t+1) = A_m(t+1)⁻¹ b_m(t+1)
13:         end if
14:         τ_m(t) = min(t − t_m, τ), where t_m is when m was created
15:         Update ‘badness’ ê_m(t) = (Σ_{k=t−τ_m(t)+1}^{t} e_m(k)) / τ_m(t)
16:         if ê_m(t) ≤ δ̃ + √(ln(4/δ₁)/(2 τ_m(t))) then
17:             Set CreateNewFlag = False
18:         else if ê_m(t) > δ₂ + √(ln(4/δ₁)/(2 τ_m(t))) then
19:             Discard slave model m: M_{t+1} = M_t ∖ {m}
20:         end if
21:     end for
22:     if CreateNewFlag or M_t = ∅ then
23:         Create a new slave model m_t: M_{t+1} = M_t ∪ {m_t}
24:         Initialize m_t: A_{m_t} = λI, b_{m_t} = 0, θ̂_{m_t} = 0
25:         Initialize ‘badness’ statistics of m_t: ê_{m_t} = 0, τ_{m_t} = 0
26:     end if
27: end for
Algorithm 1 Dynamic Linear UCB (dLinUCB)
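To make the slave-model mechanics concrete, here is a minimal Python sketch of a LinUCB-style slave model: a ridge-regression estimate of the bandit parameter plus a UCB bonus for arm selection. The class and parameter names are ours, and a fixed exploration weight `alpha` stands in for the time-dependent width B_m(t, a) derived in the paper, so this is an illustrative simplification rather than the exact algorithm.

```python
import numpy as np

class LinUCBSlave:
    """Minimal LinUCB slave model sketch: ridge estimate + UCB bonus.

    `alpha` is a fixed exploration weight here; the paper uses a
    time-dependent confidence width instead (our simplification).
    """

    def __init__(self, dim, lam=1.0, alpha=0.5):
        self.A = lam * np.eye(dim)   # regularized Gram matrix: lam*I + sum x x^T
        self.b = np.zeros(dim)       # response vector: sum x * r
        self.alpha = alpha

    @property
    def theta(self):
        # Ridge-regression estimate of the bandit parameter.
        return np.linalg.solve(self.A, self.b)

    def ucb(self, x):
        """Optimistic reward estimate: x^T theta + alpha * ||x||_{A^{-1}}."""
        A_inv_x = np.linalg.solve(self.A, x)
        return x @ self.theta + self.alpha * np.sqrt(x @ A_inv_x)

    def select(self, arms):
        """Return the index of the arm (row of `arms`) with the largest UCB."""
        return max(range(len(arms)), key=lambda i: self.ucb(arms[i]))

    def update(self, x, r):
        # Rank-one update with the observed (context, reward) pair.
        self.A += np.outer(x, x)
        self.b += r * x
```

In the full dLinUCB loop, the master model would own several such slaves and forward each observation only to the admissible ones.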

Slave model creation and abandonment. For each slave bandit model m, we define a binary random variable e_m(t) to indicate whether the slave model m's prediction error at time t exceeds its confidence bound,

e_m(t) := 1{ |x_{a_t}ᵀ θ̂_m(t) − r_{a_t}| > B_m(t, a_t) + ν }    (4)

where ν = √2 σ erf⁻¹(1 − δ₂) and erf⁻¹(·) is the inverse of the Gauss error function. ν represents the high-probability bound of the Gaussian noise in the received feedback, i.e., |η_t| ≤ ν with probability 1 − δ₂.

According to Eq (7) in Theorem 3.1, if the environment has stayed stationary since the slave model m was created, we have E[e_m(t)] ≤ δ₂, where δ₂ is a hyper-parameter in (0, 1). Therefore, if we observe a sequence of consistent prediction errors from the slave model m, it strongly suggests a change of the environment, so that this slave model should be abandoned from the admissible set. Moreover, we introduce a size-τ sliding window to only accumulate the most recent observations when estimating the expected error in slave model m. The benefit of this sliding window design will be discussed in more detail later in Section 3.3.

We define ê_m(t) = (Σ_{k=t−τ_m(t)+1}^{t} e_m(k)) / τ_m(t), which estimates the ‘badness’ of slave model m within the most recent period τ_m(t) = min(t − t_m, τ), in which t_m is when model m was created. Combining the concentration inequality in Theorem 7.2 (provided in the appendix), we have the assertion that if the stationary hypothesis is true in the period [t − τ_m(t), t], for any given δ₁ ∈ (0, 1) and δ₂ ∈ (0, 1), with probability at least 1 − δ₁, the expected ‘badness’ of slave model m satisfies,

E[e_m(t)] ≤ ê_m(t) + √(ln(4/δ₁) / (2 τ_m(t)))    (5)

Eq (5) provides a tight bound to detect changes in the environment. If the environment is unchanged, within a sliding window the estimation error made by an up-to-date slave model should not exceed the right-hand side of Eq (5) with a high probability. Otherwise, the stationary hypothesis has to be rejected and thus the slave model should be discarded. Accordingly, if none of the slave models in the admissible bandit set satisfy this condition, a new slave bandit model should be created for this new environment. Specifically, the master bandit model controls the slave model creation and abandonment in the following way.

Model abandonment: when the slave model m's estimated ‘badness’ exceeds its upper confidence bound derived from Eq (5), i.e., ê_m(t) > δ₂ + √(ln(4/δ₁)/(2 τ_m(t))), it will be discarded and removed from the admissible slave model set. This corresponds to lines 18-20 in Algorithm 1.

Model creation: when no slave model's estimated ‘badness’ is within its expected confidence bound, i.e., no slave model satisfies ê_m(t) ≤ δ̃ + √(ln(4/δ₁)/(2 τ_m(t))), a new slave model will be created. δ̃ ∈ [0, δ₂] is a parameter to control the sensitivity of dLinUCB, which affects the number of maintained slave models. When δ̃ = δ₂, the thresholds for creating and abandoning a slave model match and the algorithm only maintains one admissible slave model. When δ̃ < δ₂, multiple slave models will be maintained. The intuition is that an environment change is very likely to happen when all active slave models face a high risk of being out-of-date (although they have not been abandoned yet). This corresponds to lines 8, 16-17, and 22-26 in Algorithm 1.
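The abandonment and creation tests above can be sketched as a small sliding-window ‘badness’ tracker. This is our own illustrative version: the class and method names are invented, and the Chernoff-style slack term mirrors the form of Eq (5) with confidence parameter `delta1`, stationary error rate `delta2`, and creation sensitivity `delta_tilde`.

```python
import math
from collections import deque

class BadnessMonitor:
    """Sliding-window 'badness' tracker for one slave model (a sketch).

    `record` takes the binary error indicator e_t of Eq (4); the thresholds
    follow the Chernoff-style bound of Eq (5).
    """

    def __init__(self, tau, delta1=0.1, delta2=0.1, delta_tilde=0.1):
        self.window = deque(maxlen=tau)  # keeps only the last tau indicators
        self.delta1 = delta1
        self.delta2 = delta2
        self.delta_tilde = delta_tilde   # creation sensitivity, <= delta2

    def record(self, e_t):
        self.window.append(int(e_t))

    def _badness_and_slack(self):
        n = max(len(self.window), 1)
        badness = sum(self.window) / n                       # empirical badness
        slack = math.sqrt(math.log(4.0 / self.delta1) / (2.0 * n))
        return badness, slack

    def should_abandon(self):
        """Abandon when empirical badness exceeds delta2 + confidence slack."""
        badness, slack = self._badness_and_slack()
        return badness > self.delta2 + slack

    def within_creation_threshold(self):
        """True when badness <= delta_tilde + slack, i.e. this model alone
        is enough evidence against creating a new slave model."""
        badness, slack = self._badness_and_slack()
        return badness <= self.delta_tilde + slack
```

A master model would create a new slave only when `within_creation_threshold()` is False for every active slave.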

Slave model selection and update. At each round, the master bandit model selects one active slave bandit model to interact with the environment, and updates all active slave models with the acquired feedback accordingly. As we mentioned before, with the model abandonment mechanism every active slave model is guaranteed to be admissible for taking acceptable actions; but they are associated with different levels of risk of being out of date. A well-designed model selection strategy can further reduce the overall regret, by minimizing this risk. Intuitively, when facing a changing environment, one should prefer a slave model with the lowest empirical error in the most recent period.

The uncertainty in assessing each slave model's ‘badness’ introduces another explore-exploit dilemma when choosing among the active slave models. Essentially, we prefer a slave model of lower ‘badness’ with a higher confidence. We realize this criterion by selecting a slave model according to the Lower Confidence Bound (LCB) of its estimated ‘badness’, i.e., m̂_t = argmin_{m ∈ M_t} (ê_m(t) − √(ln(4/δ₁)/(2 τ_m(t)))). This corresponds to line 4 in Algorithm 1.

Once the feedback is obtained from the environment on the selected arm a_t, the master algorithm can not only update the selected slave model but also all other active ones, for both their ‘badness’ estimation and model parameters (lines 11-13 and line 15 in Algorithm 1 accordingly). This reduces the sample complexity of each slave model's estimation. However, at this stage, it is important to differentiate the “about to be out-of-date” models from the “up-to-date” ones, as any unnecessary model update blurs the boundary between them. As a result, we only update the perfect slave models, i.e., those whose ‘badness’ is still zero at this round of interaction; and later we will prove this updating strategy helps decrease the chance of late detection.
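The LCB-based slave selection described above can be written as a one-step helper. This is a hedged sketch with our own names: `badness_stats` maps a slave identifier to its empirical ‘badness’ and window size, and the slack term mirrors the Chernoff-style form used in Eq (5).

```python
import math

def select_slave_lcb(badness_stats, delta1=0.1):
    """Pick the slave whose 'badness' Lower Confidence Bound is smallest.

    badness_stats: dict mapping slave id -> (empirical_badness, window_size).
    A freshly created slave with a short window gets a large slack, so it is
    explored; a consistently bad slave is avoided despite its longer history.
    """
    def lcb(stats):
        badness, n = stats
        slack = math.sqrt(math.log(4.0 / delta1) / (2.0 * max(n, 1)))
        return badness - slack

    return min(badness_stats, key=lambda sid: lcb(badness_stats[sid]))
```

For example, a new slave with zero observed errors over a short window is preferred over an older slave that has accumulated a high error rate.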

3.3. Regret Analysis

In this section, we provide a detailed regret analysis of our proposed dLinUCB algorithm. We focus on the accumulated pseudo regret, which is formally defined as,

R(T) = Σ_{t=1}^{T} ( E[r_{a_t*}] − E[r_{a_t}] )    (6)

where a_t* is the best arm to select according to the oracle of this problem, and a_t is the arm selected by the algorithm to be evaluated.

It is easy to prove that if a bandit algorithm does not model the change of environment, it would suffer from a linearly increasing regret: An optimal arm in the previous stationary period may become sub-optimal after the change; but the algorithm that does not model environment change will constantly choose this sub-optimal arm until its estimated reward falls behind the other arms’. This leads to a linearly increasing regret in each new stationary period.

Next, we first characterize the confidence bound of reward estimation in a linear bandit model in Theorem 3.1. Then we prove the upper regret bound of two variants of our dLinUCB algorithm in Theorem 3.2 and Theorem 3.5. More detailed proofs are provided in the appendix.

Theorem 3.1.

For a linear slave model m specified in Algorithm 1, if the underlying environment is stationary, for any δ₁, δ₂ ∈ (0, 1) we have the following inequality with probability at least 1 − δ₁ − δ₂,

|x_{a_t}ᵀ θ̂_m(t) − r_{a_t}| ≤ B_m(t, a_t) + ν    (7)

where B_m(t, a_t) = α_m(t) √(x_{a_t}ᵀ A_m(t)⁻¹ x_{a_t}) with α_m(t) = σ √(d ln((1 + |I_m(t)|/λ)/δ₁)) + √λ, ν = √2 σ erf⁻¹(1 − δ₂), σ is the standard deviation of the Gaussian noise in the reward feedback, and erf(·) is the Gauss error function.

Denote R(S) as the upper regret bound of a linear slave model within a stationary period of length S. Based on Theorem 3.1, one can prove that R(S) = O(√S log S) (Abbasi-yadkori et al., 2011). In the following, we provide an upper regret bound analysis for the basic version of dLinUCB, in which the size of the admissible slave model set is restricted to one (i.e., by setting δ̃ = δ₂).

Theorem 3.2.

When Assumption 1 is satisfied, if δ₁, δ₂ and τ in Algorithm 1 are set according to Lemma 3.4, and δ̃ is set to δ₂, then with high probability the accumulated regret of dLinUCB satisfies,

R(T) ≤ Γ_T R(S_max) + R_e + R_l = O(Γ_T √(S_max) log S_max)    (8)

where S_max is the length of the longest stationary period up to T, and R_e and R_l denote the additional regret from early and late detection, respectively.

Proof.

Step 1: If the change points could be perfectly detected, the regret of dLinUCB would be bounded by Γ_T R(S_max). However, additional regret may accumulate if early or late detection happens. In the following two steps, we bound the possible additional regret from early detection, denoted as R_e, and that from late detection, denoted as R_l.

Step 2: Define N_e as the number of early detections within a stationary period of length S, and p_e as the probability of early detection in this period, so that E[N_e] = S p_e. According to Lemma 3.3, we have p_e ≤ δ₁. Combining the property of the binomial distribution and Chebyshev's concentration inequality, we have N_e ≤ S δ₁ + O(√(S δ₁)) with high probability. Each early detection discards an up-to-date slave model and restarts its estimation; considering the calculation of ê_m(t), when δ₁ is set sufficiently small (e.g., on the order of 1/S_max), the resulting additional regret R_e stays, with high probability, in the same order as the slave model's regret R(S_max).

Step 3: Define N_l as the number of interactions in which the environment has changed (compared to the period a slave model was trained in) but the change has not yet been detected by the algorithm. The additional regret from this late detection can be bounded by 2 N_l (i.e., the maximum regret in each round of interaction). Define p_l as the probability of detection in a round after the change happens; then N_l follows a Geometric distribution with success probability p_l, which is lower bounded according to Lemma 3.4. Based on the property of the Geometric distribution and Chebyshev's inequality, N_l is bounded by a constant with high probability. If we consider the case where the change point locates inside the sliding window of size τ, we may have at most another τ rounds of delay after each change point. Therefore, the additional regret from late detection can be bounded by R_l ≤ O(Γ_T τ), which is not directly related to the length of any stationary period.

Combining the above three steps concludes the proof. ∎

Lemma 3.3 (Bound on the probability of early detection).

For any δ₁ ∈ (0, 1) and any slave model m in Algorithm 1, if the environment has remained stationary since m was created, the probability that m is abandoned at any round t is at most δ₁.

The intuition behind Lemma 3.3 is that when the environment is stationary, the ‘badness’ of a slave model should be small and bounded according to Eq (5).

Lemma 3.4 (Bound on the probability of late detection).

When the magnitude of change Δ in Assumption 1 is sufficiently large relative to the noise scale, and the shortest stationary period length is large enough for a slave model to accumulate a full window of τ observations, then for appropriately set δ₁ and δ₂ in Algorithm 1, the probability that any out-of-date slave model m fails to be detected within its sliding window is at most δ₁.

The intuition behind Lemma 3.4 is that when the environment has changed, with high probability Eq (7) will not be satisfied by an out-of-date slave model. It means we will accumulate larger ‘badness’ from this slave model. In both Lemma 3.3 and 3.4, δ₁ is a parameter controlling the confidence of the ‘badness’ estimation in the Chernoff bound, and is therefore an input to the algorithm.

Remark 2 (How the environment assumption affects dLinUCB).

1. The magnitude of environment change Δ affects whether a change is detectable by our algorithm. However, we need to emphasize that when Δ is very small, the additional regret from re-using an out-of-date slave model is also small. In this case, a similar scale of regret bound can still be achieved, which will be briefly proved in the Appendix and empirically studied in Section 4.1. 2. We require a lower bound on the shortest stationary period length, which guarantees there are enough observations accumulated in a slave model to make an informed model selection. 3. The portion of changed arms ρ affects the probability of achieving our derived regret bound, as we require ρ to be sufficiently large. ρ also interacts with Δ and τ: when ρ is small, more observations are needed for a slave model to detect the changes. The effect of Δ, ρ, and the stationary period length will also be studied in our empirical evaluations.

Theorem 3.2 indicates that, with our model update and abandonment mechanism, each slave model in dLinUCB is ‘admissible’ in terms of its upper regret bound. In the following, we further prove that maintaining multiple slave models and selecting among them according to their LCB of ‘badness’ can further improve the regret bound.

Theorem 3.5.

Under the same condition as specified in Theorem 3.2, with high probability, the expected accumulated regret of dLinUCB up to time T can be bounded by,

R(T) ≤ Σ_{j=1}^{Γ_T} ( R_{m_j*}(S_j) + Δ_j )    (9)

in which m_j* is the best slave model among all the active ones in the stationary period S_j according to the oracle, R_{m_j*}(S_j) is its regret in that period, and Δ_j is the difference between the accumulated expected reward from the selected models and that from m_j* in the period S_j.

Proof.

Define the optimal expected cumulative reward in the stationary period S_j according to the oracle as G_j*, the expected cumulative reward of dLinUCB in this period as G_j, and the expected cumulative reward from the best active slave model m_j* as G_j^{m*}. The accumulated regret of dLinUCB can be written as,

R(T) = Σ_{j=1}^{Γ_T} (G_j* − G_j) = Σ_{j=1}^{Γ_T} (G_j* − G_j^{m*}) + Σ_{j=1}^{Γ_T} (G_j^{m*} − G_j)    (10)

The first term of Eq (10) can be bounded based on Theorem 3.2. Define N_j(m) as the number of times a slave model m is selected in S_j when it is not the best one; the second term is then bounded by the per-round reward gap times Σ_{m ≠ m_j*} N_j(m). In Lemma 3.6, we provide the bound of N_j(m). Substituting the above conclusions into Eq (10) finishes the proof. ∎

Lemma 3.6.

The model selection strategy in Algorithm 1 guarantees that, with high probability, the number of times a suboptimal slave model m is selected within a stationary period S_j satisfies,

N_j(m) = O(log S_j)    (11)
Remark 3 (Regret comparison of dLinUCB with one slave model and multiple slave models).

By maintaining multiple admissible slave models and selecting one according to the LCB of ‘badness’ when interacting with the environment, dLinUCB achieves a regret reduction in the first part of Eq (9). Although there is additional regret introduced by switching between the best model m_j* and the chosen model, this added regret increases much more slowly than that resulting from any single slave model (i.e., O(log S_j) v.s. O(√S_j)); and thus maintaining multiple slave models is always beneficial. Besides, the order of the upper regret bound of dLinUCB in both cases is Õ(Γ_T √S_max), which is the best upper regret bound a bandit algorithm can achieve in such a non-stationary environment (Garivier and Moulines, 2011), and it matches the lower bound up to a logarithmic factor.

Remark 4 (Generalization of dLinUCB).

Our theoretical analysis confirms that any contextual bandit algorithm can be used as the slave model in dLinUCB, as long as its reward estimation error is bounded with a high probability, which corresponds to the condition in Eq (7). The overall regret of dLinUCB will then only scale by a factor of the actual number of changes in the environment, which is arguably inevitable without further assumptions about the environment.

4. Evaluations

(σ, ρ, S)   | (0.1, 0.9, 800)  | (0.05, 0.9, 800) | (0.01, 0.9, 800) | (0.01, 0.5, 800) | (0.01, 0.1, 800) | (0.01, 0.9, 400)
dLinUCB     | 87.46 ± 3.61     | 65.94 ± 2.30     | 54.07 ± 3.95     | 44.94 ± 2.90     | 46.12 ± 4.63     | 111.72 ± 4.87
adTS        | 360.75 ± 39.59   | 249.63 ± 27.26   | 207.95 ± 22.28   | 189.07 ± 18.39   | 177.55 ± 20.36   | 412.55 ± 14.53
LinUCB      | 436.84 ± 40.23   | 386.10 ± 21.88   | 347.19 ± 14.95   | 264.87 ± 21.53   | 226.87 ± 32.15   | 405.82 ± 33.38
Meta-Bandit | 1822.31 ± 80.67  | 1340.01 ± 29.94  | 1354.03 ± 22.29  | 1329.51 ± 18.93  | 1402.63 ± 24.85  | 1388.81 ± 115.91
WMDUCB1     | 2219.36 ± 142.16 | 1652.99 ± 21.33  | 1635.35 ± 73.96  | 1464.11 ± 89.16  | 1506.55 ± 41.52  | 1691.75 ± 48.09
Table 1. Accumulated regret (mean ± std) with different noise levels σ, changed-arm proportions ρ, and stationary period lengths S.

We performed extensive empirical evaluations of dLinUCB against several related baseline algorithms: 1) the state-of-the-art contextual bandit algorithm LinUCB (Li et al., 2010); 2) an adaptive Thompson Sampling algorithm with a change detection module (Hariri et al.), named adTS; 3) a windowed mean-shift detection algorithm (Yu and Mannor, 2009), named WMDUCB1, which is a UCB1-type algorithm with a change detection module; and 4) the Meta-Bandit algorithm (Hartland et al., 2006), which switches between two UCB1 models.

4.1. Experiments on synthetic datasets

In simulation, we generate a fixed-size arm pool in which each arm is associated with a feature vector of bounded norm. Similarly, we create the ground-truth bandit parameter, which is not disclosed to the learners. Each dimension of the arm feature vectors and of the bandit parameter is drawn from a uniform distribution. At each round, only a subset of arms is disclosed to the learner for selection, e.g., 10 arms randomly sampled from the pool without replacement. The ground-truth reward is corrupted by Gaussian noise before being fed back to the learner; the standard deviation of the noise is set to 0.05 by default. To make the comparison fair, at each round the same set of arms is presented to all the algorithms being evaluated. To simulate an abruptly changing environment, after every stationary period we randomize the ground-truth bandit parameter, subject to the constraint that the reward distribution changes for a given proportion of arms in the pool. The stationary period length is set to 800 rounds by default.
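A minimal sketch of such a simulator, with illustrative sizes and function names of our own (the paper's exact dimensions and pool size are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
d, pool_size = 25, 1000        # illustrative; not the paper's exact values

def random_unit_vectors(n, d):
    """Draw each dimension uniformly, then normalize to unit length."""
    v = rng.uniform(-1, 1, (n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

arms = random_unit_vectors(pool_size, d)   # fixed arm pool
theta = random_unit_vectors(1, d)[0]       # hidden bandit parameter

def observe_reward(x, theta, noise_std=0.05):
    """Linear expected reward corrupted by Gaussian noise."""
    return float(x @ theta) + rng.normal(0.0, noise_std)

def abrupt_change(theta):
    """Redraw the hidden parameter to start a new stationary period."""
    return random_unit_vectors(1, len(theta))[0]
```

Calling `abrupt_change` every 800 rounds reproduces the piecewise stationary structure described above.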

Under this simulation setting, all algorithms are executed for 5000 iterations, and the sliding window parameter in dLinUCB is set to 200. Accumulated regret as defined in Eq (6) is used to evaluate the different algorithms and is reported in Figure 2. The poor performance of LinUCB illustrates the necessity of modeling the non-stationarity of the environment: its regret only converges in the first stationary period, after which it suffers an almost linearly increasing regret, as expected from our theoretical analysis in Section 3.3. adTS is able to detect and react to the changes in the environment, but it is slow in doing so and therefore suffers a linear regret at the beginning of each stationary period before converging. dLinUCB, on the other hand, can quickly identify the changes and create corresponding slave models to capture the new reward distributions, which makes its regret converge much faster in each detected stationary period. In Figure 2 we use black and blue vertical lines to indicate the actual change points and those detected by dLinUCB, respectively. It is clear that dLinUCB detects the changes almost immediately every time. WMDUCB1 and Meta-Bandit were also compared, but since they are context-free bandits, they performed much worse than the contextual bandits above. To improve the readability of the result, we exclude them from Figure 2 and instead report their performance in Table 1.
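The accumulated regret in Eq (6) is simply the running sum of per-round gaps between the optimal expected reward and that of the chosen arm; a minimal sketch (the function name is ours):

```python
def accumulated_regret(optimal_rewards, received_rewards):
    """Running sum of per-round regret: the gap between the best
    attainable expected reward and the chosen arm's expected reward."""
    total, curve = 0.0, []
    for opt, got in zip(optimal_rewards, received_rewards):
        total += opt - got
        curve.append(total)
    return curve
```

A flat curve indicates the learner has converged within the current stationary period.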

As proved in our regret analysis, dLinUCB's performance depends on the magnitude of change between two consecutive stationary periods, the Gaussian noise in the feedback, and the length of the stationary periods. To investigate how these factors affect dLinUCB, we varied all three in simulation. We ran all the algorithms 10 times and report the mean and standard deviation of the obtained regret in Table 1. In all of our environment settings, dLinUCB consistently achieved the best performance against all baselines. In particular, the length of the stationary period plays an important role in dLinUCB's regret (and also in adTS's). This is expected from our regret analysis: since the total number of iterations is fixed, a shorter stationary period implies more change points, which linearly scale dLinUCB's regret in Eq (8) and (9). A smaller noise level leads to reduced regret in dLinUCB, as it makes change detection easier. Last but not least, the magnitude of change barely affects dLinUCB: when the change is large, it is easy to detect; when it is small, the difference between two consecutive reward distributions is small, and thus the added regret from an out-of-date slave model is also small. Again, the context-free algorithms WMDUCB1 and Meta-Bandit performed much worse than the contextual bandit algorithms in all the experiments.

In addition, we studied the effect of the proportion of changed arms in dLinUCB by varying it from 0.0 to 1.0. dLinUCB achieved the lowest regret when the proportion is 0.0, since the environment then becomes stationary. With a nonzero proportion, dLinUCB achieves its best regret (54.07 ± 3.95) under the default setting, and as the proportion becomes smaller the regret is not affected much (57.59 ± 3.44). These results further validate our theoretical regret analysis and unveil the behavior of dLinUCB in a piecewise stationary environment.

Figure 2. Results from simulation.

4.2. Experiments on Yahoo! Today Module

(a) Bandit models on the user side (b) Bandit models on the article side (c) Detected changes on sample articles
Figure 3. Performance comparison in Yahoo! Today Module.

We compared all the algorithms on the large-scale clickstream dataset made available by the Yahoo Webscope program. This dataset contains 45,811,883 user visits to the Yahoo Today Module during a ten-day period in May 2009. For each visit, both the user and each of the 10 candidate articles are associated with a six-dimensional feature vector (including a constant bias term) (Li et al., 2010). In news recommendation, it is generally believed that users' interest in news articles changes over time, and our quantitative analysis confirms this in this large-scale dataset. To illustrate our observations, we randomly sampled 5 articles and report their real-time click-through rate (CTR) in Figure 3 (c), where each point is the average CTR over 2000 observations. Clearly, there are dramatic changes in these articles' popularity over time. For example, article 1's CTR kept decreasing after its debut, then increased over the next two days, and eventually dropped. Any recommendation algorithm failing to recognize such changes would suffer sub-optimal recommendation quality over time.
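Each plotted point in Figure 3 (c) is a windowed average of binary click feedback; a minimal sketch (the function name is ours):

```python
def windowed_ctr(clicks, window=2000):
    """Average CTR over consecutive, non-overlapping windows of
    binary click observations, as used for the plots above."""
    return [sum(clicks[i:i + window]) / len(clicks[i:i + window])
            for i in range(0, len(clicks), window)]
```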

The unbiased offline evaluation protocol proposed in (Li et al., 2011) is used to compare different algorithms. CTR is used as the performance metric of all bandit algorithms. Following the same evaluation principle used in (Li et al., 2010), we normalized the resulting CTR from different algorithms by the corresponding logged random strategy’s CTR. We tested two different settings on this dataset based on where to place the bandit model for reward estimation.
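The replay protocol of (Li et al., 2011) only scores rounds where the evaluated policy happens to pick the same arm the logged random policy displayed; a minimal sketch (the `choose`/`update` interface is our own illustration):

```python
def replay_evaluate(policy, logged_events):
    """Unbiased offline replay: an event counts only when the policy
    picks the arm the logged random policy actually displayed.
    logged_events is an iterable of (context, shown_arm, click)."""
    clicks, matches = 0, 0
    for context, shown_arm, click in logged_events:
        if policy.choose(context) == shown_arm:
            matches += 1
            clicks += click
            policy.update(context, shown_arm, click)  # learn online
    return clicks / max(matches, 1)                   # CTR on matched events
```

Because the logging policy chose arms uniformly at random, the matched subset yields an unbiased CTR estimate for the evaluated policy.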

The first setting builds bandit models on the user side, i.e., the bandit parameters are attached to users to learn their preferences over articles. We included a non-personalized and a personalized variant of all the contextual bandit algorithms. In the non-personalized variant, the bandit parameters are shared across all users, and thus the detected changes are synchronized across users; we name the resulting algorithms uniform-LinUCB, uniform-adTS, and uniform-dLinUCB. In the personalized variant, each individual user is associated with an independent bandit parameter, and changes are detected for each user separately. Since this dataset does not provide user identities, we followed (Wu et al., 2016) to cluster users into groups and assume that users in the same group share the same bandit parameter; we name the resulting algorithms N-LinUCB, N-adTS, and N-dLinUCB. To make the comparison more competitive, we also include a recently introduced collaborative bandit algorithm, CLUB (Gentile et al., 2014), which combines collaborative filtering with bandit learning.

From Figure 3 (a), we can see that both the personalized and non-personalized variants of dLinUCB achieved significant improvement over all baselines. It is worth noting that uniform-dLinUCB obtained clear improvements over uniform-LinUCB, N-LinUCB, and CLUB. Assuming all users share the same preference over the recommendation candidates is clearly very restrictive, which is confirmed by the improved performance of the personalized version over the non-personalized version of every bandit algorithm. Because dLinUCB maintains multiple slave models concurrently, each slave model is able to cover the preferences of a subgroup of users, i.e., it achieves personalization automatically. We looked into the created slave models and found they closely correlate with the similarity between user features in the groups created by (Wu et al., 2016), although this external grouping was not disclosed to uniform-dLinUCB. Although adTS and WMDUCB1 can also detect changes, their slow detection and reaction made them perform even worse than LinUCB on this dataset. Meta-Bandit is sensitive to its hyper-parameters and performed similarly to WMDUCB1, so we excluded it from this comparison.

The second setting builds bandit models on the article side, i.e., a bandit parameter is attached to each article to learn its popularity over time. Our quantitative analysis of the dataset showed that articles with short lifespans tend to have constant popularity. To emphasize the non-stationarity in this problem, we removed articles that existed for less than 18 hours, and report the resulting performance in Figure 3 (b). dLinUCB performed comparably to LinUCB at the beginning, while the adTS baselines failed to track the popularity of the articles from the start, as the popularity of most articles did not change immediately. In the second half of this period, however, the improvement from dLinUCB is clear. To understand what kinds of changes dLinUCB recognized in this dataset, we plot the detected changes for five randomly selected articles in Figure 3 (c), in which the dotted vertical lines are the detected change points for the corresponding articles. For most articles, the critical changes of ground-truth CTR are accurately recognized, e.g., articles 1 and 2 at around May 4, and article 3 at around May 5. Unfortunately, we do not have any detailed information about these articles to verify the changes; otherwise it would be interesting to relate the detected changes to real-world events. In Figure 3 (b), we excluded the context-free bandit algorithms because they performed much worse and would complicate the plots.

4.3. Experiments on LastFM & Delicious

(a) Normalized reward on LastFM (b) Normalized reward on Delicious (c) Cluster detection on LastFM (d) Cluster detection on Delicious
Figure 4. Performance comparison in LastFM & Delicious.

The LastFM dataset is extracted from the music streaming service Last.fm, and the Delicious dataset is extracted from the social bookmark sharing service Delicious; both were made available at the HetRec 2011 workshop. The LastFM dataset contains 1892 users and 17632 items (artists); we treat the artists a user listened to as positive feedback. The Delicious dataset contains 1861 users and 69226 items (URLs); we treat the URLs a user bookmarked as positive feedback. Following the settings in (Cesa-Bianchi et al., 2013), we pre-processed these two datasets to fit them into the contextual bandit setting. First, we used all tags associated with an item to create a TF-IDF feature vector representing it. Then we used PCA to reduce the dimensionality of the feature vectors, retaining the first 25 principal components to construct the context vectors. We fixed the size of the candidate arm pool to 25: for a particular user, we randomly picked one item from his/her nonzero-reward items and randomly picked the other 24 from the zero-reward items. We followed (Hartland et al., 2006) to simulate a non-stationary environment: we ordered observations chronologically within each user and built a single hybrid user by merging different users. The boundary between two consecutive batches of observations from two original users is thus treated as a preference change of the hybrid user.
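The candidate-pool construction described above can be sketched as follows (function and argument names are ours):

```python
import numpy as np

def build_candidate_pool(user_items, all_items, pool_size=25, rng=None):
    """One randomly chosen positive item plus (pool_size - 1) sampled
    negatives, following the evaluation setup described above.
    user_items maps item id -> reward (nonzero means positive)."""
    rng = rng or np.random.default_rng(0)
    positives = [i for i in all_items if user_items.get(i, 0) > 0]
    negatives = [i for i in all_items if user_items.get(i, 0) == 0]
    pos = int(rng.choice(positives))
    negs = rng.choice(negatives, size=pool_size - 1, replace=False)
    return [pos] + [int(n) for n in negs]
```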

Normalized rewards on these two datasets are reported in Figure 4 (a) & (b). dLinUCB outperformed both LinUCB and adTS on LastFM. As Delicious is a much sparser dataset, both adTS and dLinUCB were worse than LinUCB at the beginning; but as more observations became available, they quickly caught up. Since the distribution of items in these two datasets is highly skewed (Cesa-Bianchi et al., 2013), which makes the observations for each item very sparse, the context-free bandits performed very poorly here; we therefore exclude them from all comparisons on these two datasets.

Each slave model created for this hybrid user can be understood as serving a sub-population of users. We qualitatively studied the created slave models to investigate what kind of stationarity they captured. On the LastFM dataset, each user is associated with a list of tags he/she gave to artists; the tags are usually descriptive and reflect users' preferences for music genres or artist styles. For each slave model, we used all the tags from the users it served to generate a word cloud. Figure 5 shows four representative groups identified on LastFM, which clearly correspond to four different music genres: rock, metal, pop, and hip-hop. dLinUCB recognizes these meaningful clusters purely from user click feedback.

The way we simulate the non-stationary environment on these two datasets makes it possible to assess how well dLinUCB detects the changes. To keep the results readable, we report results obtained from user groups (otherwise there would be too many change points to plot). We first clustered all users in both datasets into groups according to their social network structure using spectral clustering (Cesa-Bianchi et al., 2013). Then we selected the top 10 user groups by number of observations to create the hybrid user. We created a semi-oracle algorithm named OracleLinUCB, which knows where the boundaries are in the environment and resets LinUCB at each change point. The normalized rewards on these two datasets are reported in Figure 4 (c) & (d), in which the vertical lines mark the actual change points in the environment and the points detected by dLinUCB. Since OracleLinUCB knows where the changes are ahead of time, its performance can be regarded as optimal. On LastFM, the observations are denser per user group, so dLinUCB can almost always correctly identify the changes and achieves performance quite close to this oracle. On Delicious, however, the sparse observations make change detection much harder, and more premature and delayed detections occurred in dLinUCB.
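The semi-oracle baseline amounts to LinUCB with its sufficient statistics reset at the known boundaries; a minimal sketch (class and method names are ours):

```python
import numpy as np

class OracleLinUCB:
    """Semi-oracle baseline: it knows the true change points and
    resets its ridge-regression statistics whenever one is reached."""
    def __init__(self, d, change_points, lam=1.0):
        self.d, self.lam = d, lam
        self.change_points = set(change_points)
        self.reset()

    def reset(self):
        self.A = self.lam * np.eye(self.d)   # regularized design matrix
        self.b = np.zeros(self.d)            # reward-weighted feature sum

    def update(self, t, x, r):
        if t in self.change_points:          # known boundary: start over
            self.reset()
        self.A += np.outer(x, x)
        self.b += r * x

    def estimate(self):
        """Ridge-regression estimate of the current bandit parameter."""
        return np.linalg.solve(self.A, self.b)
```

dLinUCB must instead infer these reset points from its 'badness' statistics, which is why the oracle's reward curve upper-bounds its performance.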

Figure 5. Word cloud of tags from four identified user groups in dLinUCB on LastFM dataset.

5. Conclusions & Future Work

In this paper, we develop a contextual bandit model, dLinUCB, for a piecewise stationary environment, which is very common in many important real-world applications but insufficiently investigated in existing work. By maintaining multiple contextual bandit models and tracking their reward estimation quality over time, dLinUCB adaptively updates its strategy for interacting with a changing environment. We rigorously prove an upper regret bound, which is arguably the tightest any algorithm can achieve in such an environment without further assumptions about it. Extensive experiments in simulation and on three real-world datasets verified the effectiveness and reliability of the proposed method.

As future work, we are interested in extending dLinUCB to a continuously changing environment, such as Brownian motion, where reasonable approximations have to be made as a model becomes out of date right after it is created. Currently, when serving multiple users, dLinUCB treats them as either identical or totally independent. As existing work has shed light on collaborative bandit learning (Wu et al., 2016; Wang et al., 2017; Gentile et al., 2017), it is meaningful to study non-stationary bandits in a collaborative environment. Last but not least, the master bandit model in dLinUCB currently does not utilize the available context information for 'badness' estimation. Incorporating such information should improve change detection accuracy, which would lead to further reduced regret.

6. Acknowledgments

We thank the anonymous reviewers for their insightful comments. This work was supported in part by National Science Foundation Grant IIS-1553568 and IIS-1618948.

References

  • Abbasi-yadkori et al. (2011) Yasin Abbasi-yadkori, Dávid Pál, and Csaba Szepesvári. 2011. Improved Algorithms for Linear Stochastic Bandits. In NIPS. 2312–2320.
  • Auer (2002) Peter Auer. 2002. Using Confidence Bounds for Exploitation-Exploration Trade-offs. Journal of Machine Learning Research 3 (2002), 397–422.
  • Auer et al. (2002) Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. 2002. Finite-time Analysis of the Multiarmed Bandit Problem. Mach. Learn. 47, 2-3 (May 2002), 235–256.
  • Auer et al. (1995) P. Auer, N. Cesa-Bianchi, Y. Freund, and Robert E. Schapire. 1995. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Foundations of Computer Science, 1995. Proceedings., 36th Annual Symposium on. 322–331.
  • Cesa-Bianchi et al. (2013) Nicolò Cesa-Bianchi, Claudio Gentile, and Giovanni Zappella. 2013. A Gang of Bandits. In NIPS.
  • Chu et al. (2011) Wei Chu, Lihong Li, Lev Reyzin, and Robert E Schapire. 2011. Contextual bandits with linear payoff functions. In AISTATS’11. 208–214.
  • Cialdini and Trost (1998) Robert B Cialdini and Melanie R Trost. 1998. Social influence: Social norms, conformity and compliance. (1998).
  • Fang Liu and Shroff (2018) Fang Liu, Joohyun Lee, and Ness Shroff. 2018. A Change-Detection based Framework for Piecewise-stationary Multi-Armed Bandit Problem (AAAI’18).
  • Filippi et al. (2010) Sarah Filippi, Olivier Cappe, Aurélien Garivier, and Csaba Szepesvári. 2010. Parametric bandits: The generalized linear case. In NIPS. 586–594.
  • Garivier and Moulines (2008) Aurélien Garivier and Eric Moulines. 2008. On Upper-Confidence Bound Policies for Non-stationary Bandit Problems. arXiv preprint arXiv:0805.3415.
  • Gentile et al. (2017) Claudio Gentile, Shuai Li, Purushottam Kar, Alexandros Karatzoglou, Giovanni Zappella, and Evans Etrue. 2017. On Context-Dependent Clustering of Bandits. In ICML’17. 1253–1262.
  • Gentile et al. (2014) Claudio Gentile, Shuai Li, and Giovanni Zappella. 2014. Online Clustering of Bandits. In ICML’14. 757–765.
  • Ghosh et al. (2017) Avishek Ghosh, Sayak Ray Chowdhury, and Aditya Gopalan. 2017. Misspecified Linear Bandits. CoRR abs/1704.06880 (2017).
  • Gittins (1979) John C Gittins. 1979. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society. Series B (Methodological) (1979), 148–177.
  • Hariri et al. Negar Hariri, Bamshad Mobasher, and Robin Burke. Adapting to User Preference Changes in Interactive Recommendation.
  • Hartland et al. (2006) Cedric Hartland, Sylvain Gelly, Nicolas Baskiotis, Olivier Teytaud, and Michele Sebag. 2006. Multi-armed Bandit, Dynamic Environments and Meta-Bandits. (Nov. 2006). https://hal.archives-ouvertes.fr/hal-00113668
  • Langford and Zhang (2008) John Langford and Tong Zhang. 2008. The epoch-greedy algorithm for multi-armed bandits with side information. In NIPS. 817–824.
  • Li et al. (2010) Lihong Li, Wei Chu, John Langford, and Robert E Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of 19th WWW. ACM, 661–670.
  • Li et al. (2011) Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. 2011. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of 4th WSDM. ACM, 297–306.
  • Li et al. (2016) Shuai Li, Alexandros Karatzoglou, and Claudio Gentile. 2016. Collaborative Filtering Bandits. In Proceedings of the 39th International ACM SIGIR.
  • Li et al. (2010) Wei Li, Xuerui Wang, Ruofei Zhang, Ying Cui, Jianchang Mao, and Rong Jin. 2010. Exploitation and exploration in a performance based contextual advertising system. In Proceedings of 16th SIGKDD. ACM, 27–36.
  • Luo et al. (2017) Haipeng Luo, Alekh Agarwal, and John Langford. 2017. Efficient Contextual Bandits in Non-stationary Worlds. arXiv preprint arXiv:1708.01799 (2017).
  • Slivkins and Upfal (2008) Alex Slivkins and Eli Upfal. 2008. Adapting to a Changing Environment: the Brownian Restless Bandits. In COLT'08. 343–354.
  • Wang et al. (2017) Huazheng Wang, Qingyun Wu, and Hongning Wang. 2017. Factorization Bandits for Interactive Recommendation.. In AAAI. 2695–2702.
  • Wu et al. (2016) Qingyun Wu, Huazheng Wang, Quanquan Gu, and Hongning Wang. 2016. Contextual Bandits in a Collaborative Environment. In Proceedings of the 39th International ACM SIGIR. ACM, 529–538.
  • Yu and Mannor (2009) Jia Yuan Yu and Shie Mannor. 2009. Piecewise-stationary Bandit Problems with Side Observations. In Proceedings of the 26th ICML (ICML ’09). 1177–1184.
  • Yue and Joachims (2009) Yisong Yue and Thorsten Joachims. 2009. Interactively optimizing information retrieval systems as a dueling bandits problem. In Proceedings of 26th ICML. ACM, 1201–1208.

7. Appendix

7.1. Additional Theorems

If the training instances of a linear bandit model come from multiple distributions/environments, we separate the training instances into two sets, one containing the instances from the target stationary distribution and the other containing the rest. In this case, we provide the confidence bound for the reward estimation in Theorem 7.1.

Theorem 7.1 (LinUCB with contamination).

In LinUCB with a contaminated instance set , with probability at least , we have , where , , and .

Comparing with the confidence bound of standard LinUCB, we can see that when the reward deviation of the contaminated instances (the portion of arms that do not satisfy Eq (1) in Assumption 1) is small, the same confidence bound scaling can be achieved.

Theorem 7.2 (Chernoff Bound).

Let $X_1, \ldots, X_n$ be independent random variables taking values in $[0, 1]$ with $\mathbb{E}[X_i] = \mu$. Define $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$; then for all $\epsilon > 0$ we have
$$\mathbb{P}\left(\left|\bar{X} - \mu\right| \geq \epsilon\right) \leq 2\exp\left(-2n\epsilon^2\right).$$

7.2. Proof of Theorems and Lemmas

Proof sketch of Theorem 3.1 and Theorem 7.1.

The proof of Eq (7) in Theorem 3.1 and Theorem 7.1 are mainly based on the proof of Theorem 2 in (Abbasi-yadkori et al., 2011) and the concentration property of Gaussian noise. ∎

Proof of Lemma 3.3.

According to Chernoff Bound, we have , which concludes the proof. ∎

Proof of Lemma 3.4.

At time , which means the environment has already changed from to , we have,

(12)

According to Theorem 7.1, we have . Define as the upper bound of . If the change gap satisfies , we have .

Next, we will prove that can be achieved by a properly set . Similar as the proof in Step 2 of Theorem 3.2, where we bound , we have with a high probability that . When , and , can be achieved.

Eq (12) indicates that when the environment has changed for a slave model, with high probability the slave model will not be updated, which avoids possible contamination of its sufficient statistics. According to the concentration inequality in Theorem 7.2, with a probability at least , we have,

With simple rewriting, we have when , , which means that with a probability at least ,