
Thompson Sampling for Contextual Bandits with Linear Payoffs

09/15/2012
by   Shipra Agrawal, et al.

Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance compared to the state-of-the-art methods. However, many questions regarding its theoretical performance remained open. In this paper, we design and analyze a generalization of the Thompson Sampling algorithm for the stochastic contextual multi-armed bandit problem with linear payoff functions, when the contexts are provided by an adaptive adversary. This is among the most important and widely studied versions of the contextual bandits problem. We provide the first theoretical guarantees for the contextual version of Thompson Sampling. We prove a high probability regret bound of Õ(d^3/2√(T)) (or Õ(d√(T log N))), which is the best regret bound achieved by any computationally efficient algorithm available for this problem in the current literature, and is within a factor of √(d) (or √(log N)) of the information-theoretic lower bound for this problem.


1 Introduction

Multi-armed bandit (MAB) problems model the exploration/exploitation trade-off inherent in many sequential decision problems. There are many versions of multi-armed bandit problems; a particularly useful version is the contextual multi-armed bandit problem. In this problem, in each of T rounds, a learner is presented with the choice of taking one out of N actions, referred to as arms. Before making the choice of which arm to play, the learner sees d-dimensional feature vectors b_i, referred to as "context", associated with each arm i. The learner uses these feature vectors along with the feature vectors and rewards of the arms played by her in the past to make the choice of the arm to play in the current round. Over time, the learner's aim is to gather enough information about how the feature vectors and rewards relate to each other, so that she can predict, with some certainty, which arm is likely to give the best reward by looking at the feature vectors. The learner competes with a class of predictors, in which each predictor takes in the feature vectors and predicts which arm will give the best reward. If the learner can guarantee to do nearly as well as the predictions of the best predictor in hindsight (i.e., have low regret), then the learner is said to successfully compete with that class.

In the contextual bandits setting with linear payoff functions, the learner competes with the class of all "linear" predictors on the feature vectors. That is, a predictor is defined by a d-dimensional parameter μ̃ ∈ R^d, and the predictor ranks the arms according to b_i^T μ̃. We consider the stochastic contextual bandit problem under the linear realizability assumption, that is, we assume that there is an unknown underlying parameter μ ∈ R^d such that the expected reward for each arm i, given context b_i, is b_i^T μ. Under this realizability assumption, the linear predictor corresponding to μ is in fact the best predictor, and the learner's aim is to learn this underlying parameter. This realizability assumption is standard in the existing literature on contextual multi-armed bandits, e.g. (Auer, 2002; Filippi et al., 2010; Chu et al., 2011; Abbasi-Yadkori et al., 2011).

Thompson Sampling (TS) is one of the earliest heuristics for multi-armed bandit problems. The first version of this Bayesian heuristic is around 80 years old, dating to Thompson (1933). Since then, it has been rediscovered numerous times independently in the context of reinforcement learning, e.g., in Wyatt (1997); Ortega & Braun (2010); Strens (2000). It is a member of the family of randomized probability matching algorithms. The basic idea is to assume a simple prior distribution on the underlying parameters of the reward distribution of every arm, and at every time step, play an arm according to its posterior probability of being the best arm. The general structure of TS for the contextual bandits problem involves the following elements:

  1. a set of parameters μ̃;

  2. a prior distribution P(μ̃) on these parameters;

  3. past observations D consisting of (context b, reward r) pairs for the past time steps;

  4. a likelihood function P(r | b, μ̃), which gives the probability of reward r given a context b and a parameter μ̃;

  5. a posterior distribution P(μ̃ | D) ∝ P(D | μ̃) P(μ̃), where P(D | μ̃) is the likelihood function.

In each round, TS plays an arm according to its posterior probability of having the best parameter. A simple way to achieve this is to produce a sample of the parameter for each arm, using the posterior distributions, and play the arm that produces the best sample. In this paper, we design and analyze a natural generalization of Thompson Sampling (TS) for contextual bandits; this generalization fits the above general structure, and uses a Gaussian prior and a Gaussian likelihood function. We emphasize that although TS is a Bayesian approach, the description of the algorithm and our analysis apply to the prior-free stochastic MAB model, and our regret bounds will hold irrespective of whether or not the actual reward distribution matches the Gaussian likelihood function used to derive this Bayesian heuristic. Thus, our bounds for the TS algorithm are directly comparable to those for the UCB family of algorithms, which form a frequentist approach to the same problem. One could interpret the priors used by TS as a way of capturing the current knowledge about the arms.
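To make the general structure above concrete, the following is a minimal, self-contained sketch of Thompson Sampling for a basic (context-free) stochastic MAB with a Gaussian prior and a Gaussian likelihood per arm. The function name, the unit variances, and the example means are illustrative assumptions; this is not the contextual algorithm analyzed in this paper.

```python
import numpy as np

def thompson_sampling_basic(true_means, T, rng=np.random.default_rng(0)):
    """Illustrative Thompson Sampling for a basic N-armed bandit.

    Assumes a N(0, 1) prior on each arm's mean and unit-variance Gaussian
    rewards, so the posterior of arm i after n_i plays with reward sum f_i
    is N(f_i / (n_i + 1), 1 / (n_i + 1)).
    """
    n_arms = len(true_means)
    plays = np.zeros(n_arms)        # number of plays of each arm
    reward_sum = np.zeros(n_arms)   # sum of observed rewards per arm
    total_reward = 0.0
    for t in range(T):
        # 1. Sample a parameter for every arm from its current posterior.
        post_mean = reward_sum / (plays + 1.0)
        post_var = 1.0 / (plays + 1.0)
        samples = rng.normal(post_mean, np.sqrt(post_var))
        # 2. Play the arm whose sampled parameter is largest.
        arm = int(np.argmax(samples))
        reward = rng.normal(true_means[arm], 1.0)
        # 3. Update the posterior of the played arm with the new observation.
        plays[arm] += 1
        reward_sum[arm] += reward
        total_reward += reward
    return total_reward

# Example: three arms; the third has the highest mean reward.
print(thompson_sampling_basic(true_means=[0.1, 0.3, 0.5], T=5000))
```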

Recently, TS has attracted considerable attention. Several studies (e.g., Granmo (2010); Scott (2010); Graepel et al. (2010); Chapelle & Li (2011); May & Leslie (2011); Kaufmann et al. (2012)) have empirically demonstrated the efficacy of TS: Scott (2010) provides a detailed discussion of probability matching techniques in many general settings along with favorable empirical comparisons with other techniques. Chapelle & Li (2011) demonstrate that for the basic stochastic MAB problem, TS empirically achieves regret comparable to the lower bound of Lai & Robbins (1985); and in applications like display advertising and news article recommendation modeled by the contextual bandits problem, it is competitive with or better than other methods such as UCB. In their experiments, TS is also more robust to delayed or batched feedback than the other methods. TS has been used in an industrial-scale application for CTR prediction of search ads on search engines (Graepel et al., 2010). Kaufmann et al. (2012) do a thorough comparison of TS with the best known versions of UCB and show that TS has the lowest regret in the long run.

However, the theoretical understanding of TS is limited. Granmo (2010) and May et al. (2011) provided weak guarantees, namely, a bound of o(T) on the expected regret in time T. For the basic (i.e. without contexts) version of the stochastic MAB problem, some significant progress was made by Agrawal & Goyal (2012), Kaufmann et al. (2012) and, more recently, by Agrawal & Goyal (2013b), who provided optimal bounds on the expected regret. But many questions regarding the theoretical analysis of TS remained open, including high probability regret bounds, and regret bounds for the more general contextual bandits setting. In particular, the contextual MAB problem does not seem easily amenable to the techniques used so far for analyzing TS for the basic MAB problem. In Section 3.1, we describe some of these challenges. Some of these questions and difficulties were also formally raised as a COLT 2012 open problem (Chapelle & Li, 2012).

In this paper, we use novel martingale-based analysis techniques to demonstrate that TS (i.e., our Gaussian prior based generalization of TS for contextual bandits) achieves high probability, near-optimal regret bounds for stochastic contextual bandits with linear payoff functions. To our knowledge, ours are the first non-trivial regret bounds for TS for the contextual bandits problem. Additionally, our results are the first high probability regret bounds for TS, even in the case of the basic MAB problem. This essentially solves the COLT 2012 open problem of Chapelle & Li (2012) for contextual bandits with linear payoffs.

We provide a regret bound of Õ(d^3/2√(T)), or Õ(d√(T log N)) (whichever is smaller), on the regret of the Thompson Sampling algorithm. Moreover, the Thompson Sampling algorithm we propose is efficient to implement (it runs in time polynomial in d) as long as it is efficient to optimize a linear function over the set of arms (see Section 2.2, paragraph "Computational efficiency", for further discussion). Although the information-theoretic lower bound for this problem is Ω(d√(T)), an upper bound of Õ(d^3/2√(T)) is in fact the best achieved by any computationally efficient algorithm in the literature when the number of arms is large (see the related work in Section 2.4 for a detailed discussion). Whether there is a gap between what is achievable by computationally efficient algorithms and the information-theoretic lower bound for this problem is an intriguing open question.

Our version of Thompson Sampling algorithm for the contextual MAB problem, described formally in Section 2.2, uses Gaussian prior and Gaussian likelihood functions. Our techniques can be extended to the use of other prior distributions, satisfying certain conditions, as discussed in Section 4.

2 Problem setting and algorithm description

2.1 Problem setting

There are N arms. At time t, a context vector b_i(t) ∈ R^d is revealed for every arm i; d denotes the dimension of the context vectors. These context vectors are chosen by an adversary in an adaptive manner after observing the arms played and their rewards up to time t − 1, i.e. after observing the history H_{t−1},

H_{t−1} = { a(τ), r_{a(τ)}(τ), b_i(τ), i = 1, …, N, τ = 1, …, t − 1 },

where a(τ) denotes the arm played at time τ. Given b_i(t), the reward r_i(t) for arm i at time t is generated from an (unknown) distribution with mean b_i(t)^T μ, where μ ∈ R^d is a fixed but unknown parameter.

An algorithm for the contextual bandit problem needs to choose, at every time t, an arm a(t) to play, using the history H_{t−1} and the current contexts b_i(t), i = 1, …, N. Let a*(t) denote the optimal arm at time t, i.e. a*(t) = arg max_i b_i(t)^T μ. And let Δ_i(t) be the difference between the mean rewards of the optimal arm and of arm i at time t, i.e.,

Δ_i(t) = b_{a*(t)}(t)^T μ − b_i(t)^T μ.

Then, the regret at time t is defined as

regret(t) = Δ_{a(t)}(t).

The objective is to minimize the total regret R(T) = Σ_{t=1}^T regret(t) in time T. The time horizon T is finite but possibly unknown.

We assume that the noise η_i(t) = r_i(t) − b_i(t)^T μ is conditionally R-sub-Gaussian for a constant R ≥ 0; the standard form of this condition is recalled below. This assumption is satisfied whenever r_i(t) ∈ [b_i(t)^T μ − R, b_i(t)^T μ + R] (see Remark 1 in Appendix A.1 of Filippi et al. (2010)). We will also assume that ||b_i(t)|| ≤ 1, ||μ|| ≤ 1, and Δ_i(t) ≤ 1 for all i, t (the norms, unless otherwise indicated, are ℓ2-norms). These assumptions are required to make the regret bounds scale-free, and are standard in the literature on this problem. If instead ||μ|| ≤ c for a constant c, then our regret bounds would increase by a factor of c.

Remark 1.

An alternative definition of regret that appears in the literature measures, at each time t, the difference between the realized reward of the optimal arm and the realized reward of the played arm, rather than the difference between their means.

We can obtain the same regret bounds for this alternative definition of regret. The details are provided in the supplementary material in Appendix A.5.

2.2 Thompson Sampling algorithm

We use a Gaussian likelihood function and a Gaussian prior to design our version of the Thompson Sampling algorithm. More precisely, suppose that the likelihood of reward r_i(t) at time t, given context b_i(t) and parameter μ, were given by the pdf of the Gaussian distribution N(b_i(t)^T μ, v²). Here, v is a parameter of the algorithm. Let

B(t) = I_d + Σ_{τ=1}^{t−1} b_{a(τ)}(τ) b_{a(τ)}(τ)^T,

μ̂(t) = B(t)^{-1} ( Σ_{τ=1}^{t−1} b_{a(τ)}(τ) r_{a(τ)}(τ) )

(the empirical estimate of the mean at time t). Then, if the prior for μ at time t is given by N(μ̂(t), v² B(t)^{-1}), it is easy to compute the posterior distribution at time t + 1,

Pr(μ | r_{a(t)}(t)) ∝ Pr(r_{a(t)}(t) | μ) Pr(μ),

as N(μ̂(t+1), v² B(t+1)^{-1}) (details of this computation are in Appendix A.1). In our Thompson Sampling algorithm, at every time step t, we will simply generate a sample μ̃(t) from the distribution N(μ̂(t), v² B(t)^{-1}), and play the arm i that maximizes b_i(t)^T μ̃(t).
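The computation referred to above is the standard conjugate Gaussian update for Bayesian linear regression; a sketch, writing b = b_{a(t)}(t) for the played context and r = r_{a(t)}(t) for the observed reward, is:

```latex
\[
\Pr(\mu \mid r) \;\propto\;
\exp\!\Big(-\tfrac{1}{2v^{2}}\,(r - b^{\top}\mu)^{2}\Big)\,
\exp\!\Big(-\tfrac{1}{2v^{2}}\,(\mu-\hat\mu(t))^{\top} B(t)\,(\mu-\hat\mu(t))\Big)
\;\propto\;
\exp\!\Big(-\tfrac{1}{2v^{2}}\,(\mu-\hat\mu(t{+}1))^{\top} B(t{+}1)\,(\mu-\hat\mu(t{+}1))\Big),
\]
\[
B(t{+}1) = B(t) + b\,b^{\top}, \qquad
\hat\mu(t{+}1) = B(t{+}1)^{-1}\big(B(t)\,\hat\mu(t) + b\,r\big).
\]
```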

We emphasize that the Gaussian priors and the Gaussian likelihood model for rewards are only used above to design the Thompson Sampling algorithm for contextual bandits. Our analysis of the algorithm allows these models to be completely unrelated to the actual reward distribution. The assumptions on the actual reward distribution are only those mentioned in Section 2.1, i.e., the R-sub-Gaussian assumption.

  for all t = 1, 2, …, T do
     Sample μ̃(t) from the distribution N(μ̂(t), v² B(t)^{-1}).
     Play arm a(t) := arg max_i b_i(t)^T μ̃(t), and observe reward r_{a(t)}(t).
     Update B(t+1) = B(t) + b_{a(t)}(t) b_{a(t)}(t)^T and μ̂(t+1) = B(t+1)^{-1} ( Σ_{τ=1}^{t} b_{a(τ)}(τ) r_{a(τ)}(τ) ).
  end for
Algorithm 1 Thompson Sampling for Contextual bandits

Here μ̃(t) denotes the d-dimensional sample generated at time t from the distribution N(μ̂(t), v² B(t)^{-1}).
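A compact NumPy sketch of the loop above is given below. The environment interface (get_contexts, pull) and the way the parameter v is supplied are our own illustrative assumptions, not part of the paper.

```python
import numpy as np

def contextual_thompson_sampling(env, d, T, v, rng=np.random.default_rng(0)):
    """Sketch of Thompson Sampling with linear payoffs (Gaussian prior/likelihood).

    `env` is assumed to expose:
      env.get_contexts(t) -> array of shape (N, d) with context b_i(t) per arm
      env.pull(t, arm)    -> observed reward r_{arm}(t)
    """
    B = np.eye(d)          # B(t) = I_d + sum of b b^T over played contexts
    f = np.zeros(d)        # sum of b * r over played contexts
    mu_hat = np.zeros(d)   # empirical estimate B(t)^{-1} f
    rewards = []
    for t in range(1, T + 1):
        contexts = env.get_contexts(t)                     # shape (N, d)
        B_inv = np.linalg.inv(B)
        # Sample mu_tilde(t) from N(mu_hat(t), v^2 * B(t)^{-1}).
        mu_tilde = rng.multivariate_normal(mu_hat, (v ** 2) * B_inv)
        # Play the arm maximizing b_i(t)^T mu_tilde(t).
        arm = int(np.argmax(contexts @ mu_tilde))
        r = env.pull(t, arm)
        rewards.append(r)
        # Posterior / least-squares update with the played context.
        b = contexts[arm]
        B += np.outer(b, b)
        f += b * r
        mu_hat = np.linalg.inv(B) @ f
    return np.array(rewards)
```

In practice one would maintain B^{-1} incrementally (e.g., via the Sherman-Morrison formula) rather than re-inverting B at every step.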

Knowledge of the time horizon T:

The parameter v, which depends on the time horizon T, can be replaced by a time-dependent value v_t at time t if the time horizon T is not known. In fact, this is the version of Thompson Sampling that we will analyze. The analysis we provide can be applied as it is (with only notational changes) to the version using the fixed value of v for all time steps, to get the same regret upper bound.

Computational efficiency:

Every step of Thompson Sampling (in both algorithms) consists of generating a d-dimensional sample μ̃(t) from a multi-variate Gaussian distribution, and solving the problem arg max_i b_i(t)^T μ̃(t). Therefore, even if the number of arms N is large (or infinite), the above algorithms are efficient as long as the problem max_i b_i(t)^T μ̃(t) is efficiently solvable. This is the case, for example, when the set of arms at time t is given by a d-dimensional convex set K_t (every vector in K_t is a context vector, and thus corresponds to an arm). The problem to be solved at time step t is then max_{b ∈ K_t} b^T μ̃(t), where K_t ⊆ R^d is the convex set of arms at time t.
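For instance, when the arm set is the Euclidean unit ball or an axis-aligned box, the linear optimization step has a closed form. The helper functions below are hypothetical names used only for illustration.

```python
import numpy as np

def best_arm_unit_ball(mu_tilde):
    """argmax of b^T mu_tilde over the Euclidean unit ball {b : ||b|| <= 1}."""
    norm = np.linalg.norm(mu_tilde)
    return mu_tilde / norm if norm > 0 else mu_tilde

def best_arm_box(mu_tilde, lo, hi):
    """argmax of b^T mu_tilde over the box {b : lo <= b <= hi} (coordinate-wise)."""
    return np.where(mu_tilde >= 0, hi, lo)

mu_tilde = np.array([0.5, -1.0, 2.0])
print(best_arm_unit_ball(mu_tilde))                       # points in the direction of mu_tilde
print(best_arm_box(mu_tilde, lo=-np.ones(3), hi=np.ones(3)))
```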

2.3 Our Results

Theorem 1.

With probability 1 − δ, the total regret for the Thompson Sampling algorithm in time T is bounded as

R(T) = Õ(d^3/2√(T)),      (1)

or,

R(T) = Õ(d√(T log N)),      (2)

whichever is smaller, for any 0 < ε < 1, where ε is a parameter used by the algorithm.

Remark 2.

The regret bound in Equation (1) does not depend on N, and is applicable to the case of infinite arms, with only notational changes required in the analysis.

2.4 Related Work

The contextual bandit problem with linear payoffs is a widely studied problem in statistics and machine learning, often under different names, as mentioned by Chu et al. (2011): bandit problems with co-variates (Woodroofe, 1979; Sarkar, 1991), associative reinforcement learning (Kaelbling, 1994), associative bandit problems (Auer, 2002; Strehl et al., 2006), bandit problems with expert advice (Auer et al., 2002), and linear bandits (Dani et al., 2008; Abbasi-Yadkori et al., 2011; Bubeck et al., 2012). The name contextual bandits was coined in Langford & Zhang (2007).

A lower bound of Ω(d√(T)) for this problem was given by Dani et al. (2008), when the number of arms is allowed to be infinite. In particular, they prove their lower bound using an example where the set of arms corresponds to all vectors in the intersection of a d-dimensional sphere and a cube. They also provide an upper bound of Õ(d√(T)), although their setting is slightly restrictive in the sense that the context vector for every arm is fixed in advance and is not allowed to change with time. Abbasi-Yadkori et al. (2011) analyze a UCB-style algorithm and provide a regret upper bound of Õ(d√(T)).

For finite N, Chu et al. (2011) show a lower bound of Ω(√(dT)) for this problem. Auer (2002) and Chu et al. (2011) analyze SupLinUCB, a complicated algorithm that uses UCB as a subroutine, for this problem. Chu et al. (2011) achieve a regret bound of Õ(√(dT)) (with polylogarithmic dependence on N, T and 1/δ) with probability at least 1 − δ (Auer (2002) proves similar results). This regret bound is not applicable to the case of infinite arms, and assumes that the context vectors are generated by an oblivious adversary. Also, because of the hidden dependence on log N, this bound degrades when N is exponential in d. The state-of-the-art bounds for the linear bandits problem in the case of finite N are given by Bubeck et al. (2012). They provide an algorithm based on exponential weights, with regret of order √(dT log N) for any finite set of N actions. This also gives Õ(d√(T)) regret when N is exponential in d.

However, none of the above algorithms is efficient when N is large, in particular, when the arms are given by all points in a continuous set of dimension d. The algorithm of Bubeck et al. (2012) needs to maintain a distribution with support of size N, and those of Chu et al. (2011), Dani et al. (2008), and Abbasi-Yadkori et al. (2011) need to solve an NP-hard problem at every step, even when the set of arms is given by a d-dimensional polytope. In contrast, the Thompson Sampling algorithm we propose runs in time polynomial in d, as long as one can efficiently optimize a linear function over the set of arms (i.e., maximize b^T μ̃ for b in the set of arms). This can be done efficiently, for example, when the set of arms forms a convex set, and even for some combinatorial sets of arms. We pay for this efficiency in terms of regret: our regret bounds are Õ(d^3/2√(T)) when N is large or infinite, which is a factor of √(d) away from the information-theoretic lower bound. The only other efficient algorithm for this problem that we are aware of was provided by Dani et al. (2008) (Algorithm 3.2), which also achieves a regret bound of Õ(d^3/2√(T)). Thus, Thompson Sampling achieves the best regret upper bound achieved by an efficient algorithm in the literature. It is an open problem to find a computationally efficient algorithm, for N large or infinite, that achieves the information-theoretic lower bound of Ω(d√(T)) on the regret.

Our results demonstrate that the natural and efficient heuristic of Thompson Sampling can achieve theoretical bounds that are close to the best bounds. The main contribution of this paper is to provide new tools for the analysis of the Thompson Sampling algorithm for contextual bandits, which, despite being popular and empirically attractive, has eluded theoretical analysis. We believe the techniques used in this paper will provide useful insights into the workings of this Bayesian algorithm, and may be useful for further improvements and extensions.

3 Regret Analysis: Proof of Theorem 1

3.1 Challenges and proof outline

The contextual version of the multi-armed bandit problem presents new challenges for the analysis of TS algorithm, and the techniques used so far for analyzing the basic multi-armed bandit problem by Agrawal & Goyal (2012); Kaufmann et al. (2012) do not seem directly applicable. Let us describe some of these difficulties and our novel ideas to resolve them.

In the basic MAB problem there are N arms, with mean reward μ_i for arm i, and the regret for playing a suboptimal arm i is μ_{i*} − μ_i, where i* is the arm with the highest mean. Let us compare this to a d-dimensional contextual MAB problem, where every arm is associated with the unknown d-dimensional parameter μ, but in addition, at every time t, arm i is associated with a context b_i(t), so that its mean reward is b_i(t)^T μ. The best arm at time t is the arm a*(t) with the highest mean at time t, and the regret for playing arm i is b_{a*(t)}(t)^T μ − b_i(t)^T μ.

In general, the basis of regret analysis for stochastic MAB is to prove that the variances of the empirical estimates for all arms decrease fast enough, so that the regret incurred until the variances become small enough is small. In the basic MAB, the variance of the empirical mean of arm i is inversely proportional to the number of plays k_i(t) of arm i at time t. Thus, every time the suboptimal arm i is played, we know that even though a regret of μ_{i*} − μ_i is incurred, there is also an improvement of exactly 1 in the number of plays of that arm, and hence a corresponding decrease in the variance. The techniques for analyzing the basic MAB rely on this observation to precisely quantify the exploration-exploitation tradeoff. On the other hand, the variance of the empirical mean for the contextual case is proportional to b_i(t)^T B(t)^{-1} b_i(t), i.e., it involves the inverse of the matrix B(t). When a suboptimal arm i is played, if ||b_i(t)|| is small, the regret incurred could be much higher than the improvement in B(t).

In our proof, we overcome this difficulty by dividing the arms into two groups at any time: saturated and unsaturated arms, based on whether the standard deviation of the estimates for an arm is smaller or larger compared to the standard deviation for the optimal arm. The optimal arm is included in the group of unsaturated arms. We show that for the unsaturated arms, the regret on playing the arm can be bounded by a factor of the standard deviation, which improves every time the arm is played. This allows us to bound the total regret due to unsaturated arms. For the saturated arms, standard deviation is small, or in other words, the estimates of the means constructed so far are quite accurate in the direction of the current contexts of these arms, so that the algorithm is able to distinguish between them and the optimal arm. We utilize this observation to show that the probability of playing such arms is small, and at every time step an unsaturated arm will be played with some constant probability.



Below is a more technical outline of the proof of Theorem 1. At any time step t, we divide the arms into two groups:

  • saturated arms, defined as those with Δ_i(t) > g_t s_i(t),

  • unsaturated arms, defined as those with Δ_i(t) ≤ g_t s_i(t),

where s_i(t) = √(b_i(t)^T B(t)^{-1} b_i(t)), and g_t, v_t and ℓ are deterministic functions of t, defined later. Note that s_i(t) is (up to constant factors) the standard deviation of the estimate b_i(t)^T μ̂(t), and v_t s_i(t) is the standard deviation of the random variable θ_i(t) = b_i(t)^T μ̃(t).

We use concentration bounds for b_i(t)^T μ̂(t) and θ_i(t) to bound the regret at any time t in terms of g_t and the standard deviations s_i(t). Now, if an unsaturated arm is played at time t, then using the definition of unsaturated arms, the regret is at most g_t s_{a(t)}(t). This is useful because of the inequality Σ_{t=1}^T s_{a(t)}(t) = O(√(dT ln T)) (derived along the lines of Auer (2002)), which allows us to bound the total regret due to unsaturated arms.
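The growth rate in the inequality Σ_t s_{a(t)}(t) = O(√(dT ln T)) can be checked numerically; the quick simulation below uses arbitrary random unit-norm vectors as stand-ins for the played contexts and is only an illustration, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 20000
B = np.eye(d)
total_s = 0.0
for t in range(1, T + 1):
    b = rng.normal(size=d)
    b /= np.linalg.norm(b)                  # played context, ||b|| <= 1
    s = np.sqrt(b @ np.linalg.solve(B, b))  # s_{a(t)}(t) = sqrt(b^T B(t)^{-1} b)
    total_s += s
    B += np.outer(b, b)                     # B(t+1) = B(t) + b b^T

print(total_s, np.sqrt(d * T * np.log(T)))  # the sum stays within a small constant of sqrt(d T ln T)
```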

To bound the regret irrespective of whether a saturated or unsaturated arm is played at time t, we lower bound the probability of playing an unsaturated arm at any time t. More precisely, we define F_{t−1} as the union of the history H_{t−1} and the contexts b_i(t), i = 1, …, N at time t, and prove that for "most" (in a high probability sense) F_{t−1},

Pr(a(t) is an unsaturated arm | F_{t−1}) ≥ p − 1/t²,

where p is a constant that does not depend on t. This observation allows us to establish that the expected regret at any time step t is upper bounded in terms of the regret due to playing an unsaturated arm at that time, i.e. in terms of g_t s_{a(t)}(t). More precisely, we prove that for "most" F_{t−1}, the expected regret E[Δ_{a(t)}(t) | F_{t−1}] is at most of the order of (g_t/p) E[s_{a(t)}(t) | F_{t−1}], up to a small additive term.

We use these observations to construct a super-martingale difference process adapted to the filtration F_t (see Definition 8). Then, using the Azuma-Hoeffding inequality for super-martingales, along with the inequality Σ_{t=1}^T s_{a(t)}(t) = O(√(dT ln T)), we obtain the desired high probability regret bound.
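For reference, the Azuma-Hoeffding inequality for super-martingales used in this outline is the standard one: for a super-martingale (Y_t) adapted to a filtration (F_t) with bounded differences |Y_t − Y_{t−1}| ≤ c_t,

```latex
\[
\Pr\big( Y_T - Y_0 \ge a \big)
\;\le\; \exp\!\left( \frac{-a^{2}}{\,2\sum_{t=1}^{T} c_t^{2}\,} \right)
\qquad \text{for all } a \ge 0 .
\]
```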

3.2 Formal proof

As mentioned earlier, we will analyze the version of Algorithm 1 that uses the time-dependent parameter v_t instead of v at time t.

We start with introducing some notations. For quick reference, the notations introduced below also appear in a table of notations at the beginning of the supplementary material.

Definition 1.

For all i and t, define θ_i(t) = b_i(t)^T μ̃(t), and s_i(t) = √(b_i(t)^T B(t)^{-1} b_i(t)). By the definition of μ̃(t) in Algorithm 2, the marginal distribution of each θ_i(t) is Gaussian with mean b_i(t)^T μ̂(t) and standard deviation v_t s_i(t).
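The marginal distribution just stated is simply the image of a Gaussian under a linear map, and can be checked numerically. In the snippet below, the values of d, v, B, μ̂ and the context b are arbitrary stand-ins chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, v = 4, 0.7
A = rng.normal(size=(d, d))
B = np.eye(d) + A @ A.T                      # a symmetric positive-definite stand-in for B(t)
mu_hat = rng.normal(size=d)                  # stand-in for the empirical estimate
b = rng.normal(size=d)
b /= np.linalg.norm(b)                       # a context vector with ||b|| <= 1

samples = rng.multivariate_normal(mu_hat, (v ** 2) * np.linalg.inv(B), size=200_000)
theta = samples @ b                          # theta_i(t) = b^T mu_tilde(t)

s = np.sqrt(b @ np.linalg.solve(B, b))       # s_i(t) = sqrt(b^T B(t)^{-1} b)
print(theta.mean(), b @ mu_hat)              # empirical mean vs  b^T mu_hat
print(theta.std(), v * s)                    # empirical std  vs  v * s_i(t)
```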

Definition 2.

Recall that Δ_i(t) = b_{a*(t)}(t)^T μ − b_i(t)^T μ, the difference between the mean reward of the optimal arm and of arm i at time t.

Definition 3.

Define the quantities ℓ, v_t, g_t, and p; these are the deterministic functions of t (and of the problem parameters) used throughout the analysis.

Definition 4.

Define E^μ(t) and E^θ(t) as the events that b_i(t)^T μ̂(t) and θ_i(t), respectively, are concentrated around their respective means for every arm i. More precisely, E^μ(t) is the event that |b_i(t)^T μ̂(t) − b_i(t)^T μ| ≤ ℓ s_i(t) for all i, and E^θ(t) is the event that |θ_i(t) − b_i(t)^T μ̂(t)| is bounded by a suitable multiple of v_t s_i(t) for all i.

Definition 5.

An arm i is called saturated at time t if Δ_i(t) > g_t s_i(t), and unsaturated otherwise. Let C(t) denote the set of saturated arms at time t. Note that the optimal arm is always unsaturated at time t, i.e., a*(t) ∉ C(t), since Δ_{a*(t)}(t) = 0. An arm may keep shifting from saturated to unsaturated and vice-versa over time.

Definition 6.

Define the filtration F_{t−1} as the union of the history until time t − 1 and the contexts at time t, i.e., F_{t−1} = {H_{t−1}, b_i(t), i = 1, …, N}.

By definition, F_1 ⊆ F_2 ⊆ ⋯ ⊆ F_{T−1}. Observe that the following quantities are determined by the history H_{t−1} and the contexts at time t, and hence are included in F_{t−1},

  • μ̂(t),

  • s_i(t), for all i,

  • the identity of the optimal arm a*(t) and the set of saturated arms C(t),

  • whether E^μ(t) is true or not,

  • the distribution N(μ̂(t), v_t² B(t)^{-1}) of μ̃(t), and hence the joint distribution of θ_i(t) = b_i(t)^T μ̃(t), i = 1, …, N.

Lemma 1.

For all t, 0 < δ < 1, Pr(E^μ(t)) ≥ 1 − δ/t². And, for all possible filtrations F_{t−1}, Pr(E^θ(t) | F_{t−1}) ≥ 1 − 1/t².

Proof.

The complete proof of this lemma appears in Appendix A.3. The probability bound for E^μ(t) will be proven using a concentration inequality given by Abbasi-Yadkori et al. (2011), stated as Lemma 8 in Appendix A.2; the R-sub-Gaussian assumption on rewards will be utilized here. The probability bound for E^θ(t) will be proven using a concentration inequality for Gaussian random variables from Abramowitz & Stegun (1964), stated as Lemma 6 in Appendix A.2. ∎

The next lemma lower bounds the probability that the sample θ_{a*(t)}(t) for the optimal arm at time t will exceed its mean reward b_{a*(t)}(t)^T μ.

Lemma 2.

For any filtration F_{t−1} such that E^μ(t) is true,

Pr( θ_{a*(t)}(t) > b_{a*(t)}(t)^T μ | F_{t−1} ) ≥ p.

Proof.

The proof uses the anti-concentration of the Gaussian random variable θ_{a*(t)}(t), which has mean b_{a*(t)}(t)^T μ̂(t) and standard deviation v_t s_{a*(t)}(t), provided by Lemma 6 in Appendix A.2, and the concentration of b_{a*(t)}(t)^T μ̂(t) around b_{a*(t)}(t)^T μ provided by the event E^μ(t). The details of the proof are in Appendix A.4. ∎
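The concentration and anti-concentration facts for Gaussians used in this argument are of the standard (Mills-ratio) form: for a standard normal Z and any z > 0,

```latex
\[
\frac{z}{1+z^{2}}\,\frac{e^{-z^{2}/2}}{\sqrt{2\pi}}
\;\le\; \Pr(Z > z) \;\le\;
\frac{1}{z}\,\frac{e^{-z^{2}/2}}{\sqrt{2\pi}} .
\]
```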

The following lemma bounds the probability of playing saturated arms in terms of the probability of playing unsaturated arms.

Lemma 3.

For any filtration F_{t−1} such that E^μ(t) is true,

Pr( a(t) ∉ C(t) | F_{t−1} ) ≥ p − 1/t².

Proof.

The algorithm chooses the arm with the highest value of θ_i(t) to be played at time t. Therefore, if θ_{a*(t)}(t) is greater than θ_j(t) for all saturated arms, i.e., θ_{a*(t)}(t) > θ_j(t) for all j ∈ C(t), then one of the unsaturated arms (which include the optimal arm and other suboptimal unsaturated arms) must be played. Therefore,

Pr( a(t) ∉ C(t) | F_{t−1} ) ≥ Pr( θ_{a*(t)}(t) > θ_j(t), ∀ j ∈ C(t) | F_{t−1} ).      (3)

By definition, for all saturated arms, i.e. for all j ∈ C(t), Δ_j(t) > g_t s_j(t). Also, if both the events E^μ(t) and E^θ(t) are true, then, by the definitions of these events, for all j ∈ C(t), θ_j(t) ≤ b_j(t)^T μ + g_t s_j(t). Therefore, given an F_{t−1} such that E^μ(t) is true, either E^θ(t) is false, or else for all j ∈ C(t),

θ_j(t) ≤ b_j(t)^T μ + g_t s_j(t) < b_j(t)^T μ + Δ_j(t) = b_{a*(t)}(t)^T μ.

Hence, for any F_{t−1} such that E^μ(t) is true,

Pr( a(t) ∉ C(t) | F_{t−1} ) ≥ Pr( θ_{a*(t)}(t) > b_{a*(t)}(t)^T μ | F_{t−1} ) − Pr( E^θ(t) is false | F_{t−1} ) ≥ p − 1/t².

The last inequality uses Lemma 2 and Lemma 1. ∎

Lemma 4.

For any filtration F_{t−1} such that E^μ(t) is true,

E[ Δ_{a(t)}(t) | F_{t−1} ] ≤ ( 2 g_t/(p − 1/t²) + g_t ) E[ s_{a(t)}(t) | F_{t−1} ] + 1/t².

Proof.

Let ā(t) denote the unsaturated arm with the smallest s_i(t), i.e.

ā(t) = arg min_{i ∉ C(t)} s_i(t).

Note that since C(t) and s_i(t) for all i are fixed on fixing F_{t−1}, so is ā(t).

Now, using Lemma 3, for any F_{t−1} such that E^μ(t) is true,

E[ s_{a(t)}(t) | F_{t−1} ] ≥ E[ s_{a(t)}(t) | F_{t−1}, a(t) ∉ C(t) ] · Pr( a(t) ∉ C(t) | F_{t−1} ) ≥ s_{ā(t)}(t) ( p − 1/t² ).

Now, if events E^μ(t) and E^θ(t) are true, then for all i, by definition, |θ_i(t) − b_i(t)^T μ| ≤ g_t s_i(t). Using this observation along with the fact that θ_{a(t)}(t) ≥ θ_i(t) for all i,

Δ_{a(t)}(t) ≤ Δ_{ā(t)}(t) + g_t s_{ā(t)}(t) + g_t s_{a(t)}(t) ≤ 2 g_t s_{ā(t)}(t) + g_t s_{a(t)}(t).

Therefore, for any F_{t−1} such that E^μ(t) is true, either Δ_{a(t)}(t) ≤ 2 g_t s_{ā(t)}(t) + g_t s_{a(t)}(t), or E^θ(t) is false. Therefore,

E[ Δ_{a(t)}(t) | F_{t−1} ] ≤ E[ 2 g_t s_{ā(t)}(t) + g_t s_{a(t)}(t) | F_{t−1} ] + Pr( E^θ(t) is false | F_{t−1} ) ≤ 2 g_t s_{ā(t)}(t) + g_t E[ s_{a(t)}(t) | F_{t−1} ] + 1/t² ≤ ( 2 g_t/(p − 1/t²) + g_t ) E[ s_{a(t)}(t) | F_{t−1} ] + 1/t².

In the first inequality we used that Δ_i(t) ≤ 1 for all i. The second inequality used Lemma 1 to apply Pr( E^θ(t) is false | F_{t−1} ) ≤ 1/t². The third inequality used the inequality derived in the beginning of this proof.

Definition 7.

Recall that regret(t) was defined as regret(t) = Δ_{a(t)}(t). Define

regret′(t) = regret(t) · I( E^μ(t) ).

Next, we establish a super-martingale process that will form the basis of our proof of the high-probability regret bound.

Definition 8.

Let

X_t = regret′(t) − ( 2 g_t/(p − 1/t²) + g_t ) s_{a(t)}(t) − 1/t²,      Y_t = Σ_{w=1}^t X_w.

Lemma 5.

(Y_t; t = 0, …, T) is a super-martingale process with respect to the filtration F_t.

Proof.

See Definition 9 in Appendix A.2 for the definition of super-martingales. We need to prove that for all t ∈ {1, …, T}, and any possible F_{t−1}, E[ Y_t − Y_{t−1} | F_{t−1} ] ≤ 0, i.e.

E[ regret′(t) | F_{t−1} ] ≤ ( 2 g_t/(p − 1/t²) + g_t ) E[ s_{a(t)}(t) | F_{t−1} ] + 1/t².

Note that whether E^μ(t) is true or not is completely determined by F_{t−1}. If F_{t−1} is such that E^μ(t) is not true, then regret′(t) = regret(t) · I( E^μ(t) ) = 0, and the above inequality holds trivially. And, for F_{t−1} such that E^μ(t) holds, the inequality follows from Lemma 4. ∎

Now, we are ready to prove Theorem 1.

Proof of Theorem 1

Note that X_t is bounded, |X_t| ≤ O(g_t/p). Thus, we can apply the Azuma-Hoeffding inequality (see Lemma 7 in Appendix A.2), to obtain that with probability at least 1 − δ/2,

Y_T = Σ_{t=1}^T X_t ≤ O( (g_T/p) √(T ln(1/δ)) ).

Note that p is a constant. Also, by definition, regret′(t) = regret(t) · I( E^μ(t) ). Therefore, from the above equation, with probability at least 1 − δ/2,

Σ_{t=1}^T regret′(t) ≤ Σ_{t=1}^T ( 2 g_t/(p − 1/t²) + g_t ) s_{a(t)}(t) + Σ_{t=1}^T 1/t² + O( (g_T/p) √(T ln(1/δ)) ).

Now, we can use Σ_{t=1}^T s_{a(t)}(t) ≤ O(√(dT ln T)), which can be derived along the lines of Lemma 3 of Chu et al. (2011) using Lemma 11 of Auer (2002) (see Appendix A.5 for details). Also, by definition