Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits

02/04/2014 ∙ by Alekh Agarwal, et al.

We present a new algorithm for the contextual bandit learning problem, where the learner repeatedly takes one of K actions in response to the observed context, and observes the reward only for that chosen action. Our method assumes access to an oracle for solving fully supervised cost-sensitive classification problems and achieves the statistically optimal regret guarantee with only Õ(√(KT/log N)) oracle calls across all T rounds, where N is the number of policies in the policy class we compete against. By doing so, we obtain the most practical contextual bandit learning algorithm amongst approaches that work for general policy classes. We further conduct a proof-of-concept experiment which demonstrates the excellent computational and prediction performance of (an online variant of) our algorithm relative to several baselines.


1 Introduction

In the contextual bandit problem, an agent collects rewards for actions taken over a sequence of rounds; in each round, the agent chooses an action to take on the basis of (i) context (or features) for the current round, as well as (ii) feedback, in the form of rewards, obtained in previous rounds. The feedback is incomplete: in any given round, the agent observes the reward only for the chosen action; the agent does not observe the reward for other actions. Contextual bandit problems are found in many important applications such as online recommendation and clinical trials, and represent a natural half-way point between supervised learning and reinforcement learning. The use of features to encode context is inherited from supervised machine learning, while exploration is necessary for good performance as in reinforcement learning.
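
To make the interaction protocol concrete, the following minimal Python sketch simulates the loop described above; the environment object and the learner interface are hypothetical stand-ins introduced purely for illustration and are not defined in the paper.

```python
import random

def run_contextual_bandit(env, learner, T):
    """Generic contextual bandit loop: only the chosen action's reward is observed."""
    total_reward = 0.0
    for t in range(T):
        x = env.draw_context()          # (i) context (features) for the current round
        a, p = learner.choose(x)        # learner picks an action, with probability p
        r = env.reward(x, a)            # (ii) bandit feedback: reward of the chosen action only
        learner.update(x, a, r, p)      # learn from the partial feedback
        total_reward += r
    return total_reward

class UniformLearner:
    """Trivial baseline used only to exercise the loop: explore uniformly at random."""
    def __init__(self, K):
        self.K = K
    def choose(self, x):
        return random.randrange(self.K), 1.0 / self.K
    def update(self, x, a, r, p):
        pass  # ignores all feedback
```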

The choice of exploration distribution on actions is important. The strongest known results (Auer et al., 2002; McMahan and Streeter, 2009; Beygelzimer et al., 2011) provide algorithms that carefully control the exploration distribution to achieve an optimal regret of O(√(KT log(N/δ))) after T rounds, with probability at least 1 − δ, relative to a set Π of policies mapping contexts to actions (where K is the number of actions and N = |Π|). The regret is the difference between the cumulative reward of the best policy in Π and the cumulative reward collected by the algorithm. Because the bound has a mild logarithmic dependence on N, the algorithm can compete with very large policy classes that are likely to yield high rewards, in which case the algorithm also earns high rewards. However, the computational complexity of the above algorithms is linear in N, making them tractable for only simple policy classes.

A running time sublinear in N is possible for policy classes that can be efficiently searched. In this work, we use the abstraction of an optimization oracle to capture this property: given a set of context/reward vector pairs, the oracle returns a policy in Π with maximum total reward. Using such an oracle in an i.i.d. setting (formally defined in Section 2.1), it is possible to create ε-greedy (Sutton and Barto, 1998) or epoch-greedy (Langford and Zhang, 2007) algorithms that need only a single call to the oracle per round. However, these algorithms have suboptimal regret bounds, scaling as T^{2/3}, because they randomize uniformly over actions when they choose to explore.

The RandomizedUCB algorithm of Dudík et al. (2011a) achieves the optimal regret bound (up to logarithmic factors) in the i.i.d. setting, and runs in polynomial time, but with a large polynomial number of calls to the optimization oracle per round. Naively this would amount to an even larger total number of oracle calls over all T rounds, although a doubling trick from our analysis can be adapted to substantially reduce the total number of oracle calls needed over all T rounds in the RandomizedUCB algorithm. This is a fascinating result because it shows that the oracle can provide an exponential speed-up over previous algorithms with optimal regret bounds. However, the running time of this algorithm is still prohibitive for most natural problems owing to this polynomial scaling with T.

In this work, we prove the following (throughout this paper, the Õ(·) notation suppresses dependence on factors logarithmic in T, K, N, and 1/δ):

Theorem 1.

There is an algorithm for the i.i.d. contextual bandit problem with an optimal regret bound requiring Õ(√(KT/log N)) calls to the optimization oracle over T rounds, with probability at least 1 − δ.

Concretely, we make Õ(√(KT/log N)) calls to the oracle with a small additional bookkeeping cost (discussed in Section 3.5), vastly improving over the complexity of RandomizedUCB. The major components of the new algorithm are (i) a new coordinate descent procedure for computing a very sparse distribution over policies which can be efficiently sampled from, and (ii) a new epoch structure which allows the distribution over policies to be updated very infrequently. We consider variants of the epoch structure that make different computational trade-offs: on one extreme we concentrate the entire computational burden on O(log T) rounds, with many oracle calls on each of those rounds, while on the other we spread our computation over roughly √T rounds, with only a few oracle calls for each of these rounds. We stress that in either case, the total number of calls to the oracle is sublinear in T. Finally, we develop a more efficient online variant, and conduct a proof-of-concept experiment showing low computational complexity and high reward relative to several baselines.

Motivation and related work.

The EXP4 family of algorithms (Auer et al., 2002; McMahan and Streeter, 2009; Beygelzimer et al., 2011) solves the contextual bandit problem with optimal regret by updating weights (multiplicatively) over all policies in every round. Except for a few special cases (Helmbold and Schapire, 1997; Beygelzimer et al., 2011), the running time of such measure-based algorithms is generally linear in the number of policies.

In contrast, the RandomizedUCB algorithm of Dudík et al. (2011a) is based on a natural abstraction from supervised learning: the ability to efficiently find a function in a rich function class that minimizes the loss on a training set. This abstraction is encapsulated in the notion of an optimization oracle, which is also useful for ε-greedy (Sutton and Barto, 1998) and epoch-greedy (Langford and Zhang, 2007) algorithms. However, these latter algorithms have only suboptimal regret bounds.

Another class of approaches based on Bayesian updating is Thompson sampling (Thompson, 1933; Li, 2013), which often enjoys strong theoretical guarantees in expectation over the prior and good empirical performance (Chapelle and Li, 2011). Such algorithms, as well as the closely related upper-confidence bound algorithms (Auer, 2002; Chu et al., 2011), are computationally tractable in cases where the posterior distribution over policies can be efficiently maintained or approximated. In our experiments, we compare to a strong baseline algorithm that uses this approach (Chu et al., 2011).

To circumvent the running time barrier, we restrict attention to algorithms that only access the policy class via the optimization oracle. Specifically, we use a cost-sensitive classification oracle, and a key challenge is to design good supervised learning problems for querying this oracle. The RandomizedUCB algorithm of Dudík et al. (2011a) uses a similar oracle to construct a distribution over policies that solves a certain convex program. However, the number of oracle calls in their work is prohibitively large, and the statistical analysis is also rather complex. (The paper of Dudík et al. (2011a) is colloquially referred to, by its authors, as the "monster paper"; see Langford, 2014.)

Main contributions.

In this work, we present a new and simple algorithm for solving a similar convex program to that used by RandomizedUCB. The new algorithm is based on coordinate descent: in each iteration, the algorithm calls the optimization oracle to obtain a policy; the output is a sparse distribution over these policies. The number of iterations required to compute the distribution is small: at most Õ(√(Kt/log N)) in any round t. In fact, we present a more general scheme based on epochs and warm start in which the total number of calls to the oracle is, with high probability, just Õ(√(KT/log N)) over all T rounds; we prove that this is nearly optimal for a certain class of optimization-based algorithms. The algorithm is natural and simple to implement, and we provide an arguably simpler analysis than that for RandomizedUCB. Finally, we report proof-of-concept experimental results using a variant algorithm showing strong empirical performance.

2 Preliminaries

In this section, we recall the i.i.d. contextual bandit setting and some basic techniques used in previous works (Auer et al., 2002; Beygelzimer et al., 2011; Dudík et al., 2011a).

2.1 Learning Setting

Let A be a finite set of K actions, X be a space of possible contexts (e.g., a feature space), and Π be a finite set of N policies that map contexts x ∈ X to actions a ∈ A. (Extension to VC classes is simple using standard arguments.) Let Δ^Π denote the set of non-negative weight vectors over policies with total weight at most one, and let ℝ_+^A denote the set of non-negative reward vectors.

Let D be a probability distribution over X × ℝ_+^A, the joint space of contexts and reward vectors; we assume that the rewards of actions drawn from D always lie in the interval [0, 1]. Let D_X denote the marginal distribution of D over X.

In the i.i.d. contextual bandit setting, the context/reward vector pairs (x_t, r_t) for the rounds t = 1, 2, … are drawn independently from D. In round t, the agent first observes the context x_t, then (randomly) chooses an action a_t ∈ A, and finally receives the reward r_t(a_t) ∈ [0, 1] for the chosen action. The (observable) record of interaction resulting from round t is the quadruple (x_t, a_t, r_t(a_t), p_t(a_t)); here, p_t(a_t) is the probability with which the agent chose action a_t ∈ A. We let H_t denote the history (set) of interaction records in the first t rounds. We use the shorthand notation Ê_{x∼H_t} to denote expectation when a context x is chosen from the t contexts in H_t uniformly at random.

Let R(π) := E_{(x,r)∼D}[r(π(x))] denote the expected (instantaneous) reward of a policy π ∈ Π, and let π⋆ := arg max_{π∈Π} R(π) be a policy that maximizes the expected reward (the optimal policy). Let Reg(π) := R(π⋆) − R(π) denote the expected (instantaneous) regret of a policy π relative to the optimal policy. Finally, the (empirical cumulative) regret of the agent after T rounds (defined relative to π⋆ rather than to the empirical reward maximizer; in the i.i.d. setting the two notions differ by at most Õ(√T) with probability at least 1 − δ) is defined as

Σ_{t=1}^{T} ( r_t(π⋆(x_t)) − r_t(a_t) ).

2.2 Inverse Propensity Scoring

An unbiased estimate of a policy's reward may be obtained from the history of interaction records H_t using inverse propensity scoring (IPS; also called inverse probability weighting): the expected reward of policy π is estimated as

R̂_t(π) := (1/t) Σ_{i=1}^{t} r_i(a_i) · 1{π(x_i) = a_i} / p_i(a_i).   (1)

This technique can be viewed as mapping the history of interaction records to context/reward vector pairs (x_i, r̂_i), where r̂_i is a fictitious reward vector that assigns to the chosen action a_i the scaled reward r_i(a_i)/p_i(a_i) (possibly greater than one), and assigns zero reward to all other actions. This transformation is detailed in Algorithm 3 (in Appendix A); we may equivalently define R̂_t(π) as the average of r̂_i(π(x_i)) over i = 1, …, t. It is easy to verify that E[r̂_i(a)] = r_i(a) for every action a, as p_i(a) is indeed the agent's probability (conditioned on the history) of picking action a. This implies that R̂_t(π) is an unbiased estimator of R(π) for any history.
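
As a concrete illustration, here is a minimal Python sketch of the IPS estimate in Eq. (1); the record format (x, a, r(a), p(a)) follows Section 2.1, while the function names and the assumption that a policy is a plain function from contexts to action indices are ours.

```python
def ips_value(policy, history):
    """Inverse propensity scoring estimate of a policy's expected reward (Eq. (1)).

    history: list of interaction records (x, a, r_a, p_a), where r_a is the observed
    reward of the chosen action a and p_a the probability with which a was chosen.
    """
    if not history:
        return 0.0
    total = 0.0
    for (x, a, r_a, p_a) in history:
        if policy(x) == a:
            total += r_a / p_a          # scaled reward when the policy agrees with the log
        # records where policy(x) != a contribute zero
    return total / len(history)

def fictitious_rewards(record, K):
    """The equivalent view: r_a / p_a on the chosen action, zero on all others."""
    x, a, r_a, p_a = record
    rhat = [0.0] * K
    rhat[a] = r_a / p_a
    return rhat
```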

Let π_t := arg max_{π∈Π} R̂_t(π) denote a policy that maximizes the expected reward estimate based on inverse propensity scoring with history H_t (π_0 can be arbitrary), and let R̂eg_t(π) := R̂_t(π_t) − R̂_t(π) denote the estimated regret of π relative to π_t. Note that R̂eg_t(π) is generally not an unbiased estimate of Reg(π), because π_t is not always π⋆.

2.3 Optimization Oracle

One natural mode for accessing the set of policies Π is enumeration, but this is impractical for large policy classes. In this work, we instead only access Π via an optimization oracle which corresponds to a cost-sensitive learner. Following Dudík et al. (2011a), we call this oracle the arg max oracle, or AMO. (Cost-sensitive learners typically expect a cost c(a) instead of a reward r(a), in which case we use c(a) := 1 − r(a).)

Definition 1.

For a set of policies Π, the arg max oracle (AMO) is an algorithm which, for any sequence of context and reward vectors (x_1, r_1), (x_2, r_2), …, (x_t, r_t) ∈ X × ℝ_+^A, returns

arg max_{π∈Π} Σ_{i=1}^{t} r_i(π(x_i)).
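
For a small, explicitly enumerable policy class, the oracle of Definition 1 is a plain arg max; the brute-force sketch below is ours and is meant only to pin down the interface (in practice the oracle is implemented by a cost-sensitive learner rather than by enumeration).

```python
def amo(policies, data):
    """Arg max oracle (AMO): return the policy with maximum total reward.

    policies: iterable of functions mapping a context to an action index.
    data: list of (context, reward_vector) pairs with non-negative rewards.
    """
    def total_reward(pi):
        return sum(r[pi(x)] for (x, r) in data)
    return max(policies, key=total_reward)
```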

2.4 Projections and Smoothing

In each round, our algorithm chooses an action by randomly drawing a policy π from a distribution Q over Π, and then picking the action π(x) recommended by π on the current context x. This is equivalent to drawing an action according to Q(a | x) := Σ_{π∈Π: π(x)=a} Q(π), for a ∈ A. To keep the variance of the reward estimates from Section 2.2 in check, it is desirable to prevent the probability of any action from being too small. Thus, as in previous work, we also use a smoothed projection Q^μ(· | x) for μ ∈ [0, 1/K], defined by Q^μ(a | x) := (1 − Kμ) Q(a | x) + μ. Every action has probability at least μ under Q^μ(· | x).

For technical reasons, our algorithm maintains non-negative weights Q over policies that sum to at most one, but not necessarily exactly one; hence, we put any remaining mass on a default policy π̄ to obtain a legitimate probability distribution Q̃ over policies. We then pick an action from the smoothed projection Q̃^μ(· | x) of Q̃ as above. This sampling procedure, Sample, is detailed in Algorithm 4 (in Appendix A).
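
A short Python sketch of the sampling step just described: leftover mass goes to the default policy, and the induced action distribution is smoothed so every action has probability at least μ. The mixing form (1 − Kμ)·Q(a|x) + μ follows the text above; the data structures (a dict from policies to weights) are an illustrative choice of ours.

```python
import random

def smoothed_action_distribution(weights, default_policy, x, K, mu):
    """Return the smoothed, projected distribution Q^mu(. | x) over K actions.

    weights: dict {policy: non-negative weight}, summing to at most 1.
    Remaining mass is assigned to default_policy; then each action's probability
    is (1 - K*mu) * Q(a|x) + mu, so every action has probability at least mu.
    """
    q = [0.0] * K
    total = 0.0
    for pi, w in weights.items():
        q[pi(x)] += w
        total += w
    q[default_policy(x)] += max(0.0, 1.0 - total)       # complete Q to a distribution
    return [(1.0 - K * mu) * qa + mu for qa in q]

def sample_action(weights, default_policy, x, K, mu):
    """Draw an action and return it together with its probability (cf. Algorithm 4)."""
    probs = smoothed_action_distribution(weights, default_policy, x, K, mu)
    a = random.choices(range(K), weights=probs, k=1)[0]
    return a, probs[a]
```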

3 Algorithm and Main Results

Our algorithm, ILOVETOCONBANDITS, is an epoch-based variant of the RandomizedUCB algorithm of Dudík et al. (2011a) and is given in Algorithm 1. Like RandomizedUCB, ILOVETOCONBANDITS solves an optimization problem (OP) to obtain a distribution over policies to sample from (Step 7), but does so only on an epoch schedule, i.e., only on certain pre-specified rounds τ_1 < τ_2 < ⋯. The only requirement on the epoch schedule is that the length of each epoch be suitably bounded relative to the number of rounds already played; the schedules we use below satisfy this requirement.

The crucial step here is solving (OP). Before stating the main result, let us get some intuition about this problem. The first constraint, Eq. (2), requires the average estimated regret of the distribution Q over policies to be small, since b_π is a rescaled version of the estimated regret of policy π. This constraint skews our distribution to put more mass on "good policies" (as judged by our current information), and can be seen as the exploitation component of our algorithm. The second set of constraints, Eq. (3), requires the distribution Q to place sufficient mass on the actions chosen by each policy π, in expectation over contexts. This can be thought of as the exploration constraint, since it requires the distribution to be sufficiently diverse for most contexts. As we will see later, the left-hand side of each such constraint is a bound on the variance of our reward estimates for policy π, and the constraint requires the variance to be controlled at the level of the estimated regret of π. That is, we require the reward estimates to be more accurate for good policies than for bad ones, allowing for much more adaptive exploration than the uniform exploration of ε-greedy style algorithms.

This problem is very similar to the one in Dudík et al. (2011a), and our coordinate descent algorithm in Section 3.1 gives a constructive proof that the problem is feasible. As in Dudík et al. (2011a), we have the following regret bound:

Theorem 2.

Assume the optimization problem (OP) can be solved whenever required in Algorithm 1. With probability at least 1 − δ, the regret of Algorithm 1 (ILOVETOCONBANDITS) after T rounds is Õ(√(KT log(N/δ))).

0:  Epoch schedule 0 = τ_0 < τ_1 < τ_2 < ⋯, allowed failure probability δ ∈ (0, 1).
1:  Initial weights Q_0 := 0, initial epoch m := 1. Define the minimum probabilities μ_m for all epochs m ≥ 0.
2:  for round t = 1, 2, … do
3:     Observe context x_t ∈ X.
4:     Draw (a_t, p_t(a_t)) using Sample (Algorithm 4) with weights Q_{m−1} and minimum probability μ_{m−1}.
5:     Select action a_t and observe reward r_t(a_t) ∈ [0, 1].
6:     if t = τ_m then
7:        Let Q_m be a solution to (OP) with history H_t and minimum probability μ_m.
8:        m := m + 1.
9:     end if
10:  end for
Algorithm 1 Importance-weighted LOw-Variance Epoch-Timed Oracleized CONtextual BANDITS algorithm (ILOVETOCONBANDITS)

Optimization Problem (OP). Given a history H_t and minimum probability μ_m, define b_π := R̂eg_t(π) / (ψ μ_m) for each π ∈ Π (with ψ a universal constant), and find Q ∈ Δ^Π such that

Σ_{π∈Π} Q(π) b_π ≤ 2K,   (2)
∀π ∈ Π:  Ê_{x∼H_t}[ 1 / Q^{μ_m}(π(x) | x) ] ≤ 2K + b_π.   (3)

3.1 Solving (OP) via Coordinate Descent

We now present a coordinate descent algorithm to solve (OP). The pseudocode is given in Algorithm 2. Our analysis, as well as the algorithm itself, is based on a potential function which we use to measure progress. The algorithm can be viewed as a form of coordinate descent applied to this same potential function. The main idea of our analysis is to show that this function decreases substantially on every iteration of the algorithm; since the function is nonnegative, this gives an upper bound on the total number of iterations, as expressed in the following theorem.

Theorem 3.

Algorithm 2 (initialized with Q^init := 0) halts in at most Õ(1/μ_m) iterations, and outputs a solution Q to (OP).

0:  History , minimum probability , initial weights .
1:  Set .
2:  loop
3:      Define, for all ,
4:     if  then
5:        Replace by , where
(4)
6:     end if
7:     if there is a policy for which  then
8:         Add the (positive) quantity
to and leave all other weights unchanged.
9:     else
10:         Halt and output the current set of weights .
11:     end if
12:  end loop
Algorithm 2 Coordinate Descent Algorithm
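
The Python skeleton below mirrors the control flow of Algorithm 2: rescale the weights when the test in Step 4 fires, otherwise use a single oracle call to look for a policy whose variance constraint is violated and increase its weight. The callbacks regret_constraint_violated, rescale_factor, find_violating_policy, and step_size stand in for the paper's exact formulas (Eqs. (2)-(4) and Step 8), whose constants are not reproduced here; this is a structural sketch rather than the algorithm's precise arithmetic.

```python
def coordinate_descent(weights, regret_constraint_violated, rescale_factor,
                       find_violating_policy, step_size, max_iters=10**6):
    """Structural sketch of Algorithm 2 (coordinate descent on the potential function).

    weights: dict {policy: weight}, modified in place and returned.
    regret_constraint_violated(weights) -> bool        # test in Step 4
    rescale_factor(weights) -> float in (0, 1]         # the constant of Eq. (4)
    find_violating_policy(weights) -> policy or None   # one AMO call, checks Step 7
    step_size(weights, policy) -> positive float       # the quantity added in Step 8
    """
    for _ in range(max_iters):
        if regret_constraint_violated(weights):
            c = rescale_factor(weights)
            for pi in weights:
                weights[pi] *= c                        # Steps 4-5: shrink all weights
        pi = find_violating_policy(weights)             # Step 7: one oracle call
        if pi is None:
            return weights                              # Step 10: all constraints hold, halt
        weights[pi] = weights.get(pi, 0.0) + step_size(weights, pi)   # Step 8
    raise RuntimeError("exceeded max_iters without satisfying (OP)")
```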

3.2 Using an Optimization Oracle

We now show how to implement Algorithm 2 via AMO (cf. Section 2.3).

Lemma 1.

Algorithm 2 can be implemented using one call to AMO before the loop is started, and one call to AMO for each iteration of the loop thereafter.

Proof.

At the very beginning, before the loop is started, we compute the best empirical policy so far, π_t, by calling AMO on the sequence of historical contexts and estimated reward vectors, i.e., on (x_i, r̂_i) for i = 1, …, t.

Next, we show that each iteration of the loop in Algorithm 2 can be implemented via one call to AMO. Going over the pseudocode, first note that the operations involving Q in Step 4 can be performed efficiently, since Q has sparse support. Note also that the quantities defined in Step 3 need not actually be computed for all policies π ∈ Π, as long as we can identify a policy for which the condition checked in Step 7 holds. We can identify such a policy using one call to AMO as follows.

First, note that for any policy , we have

and

Now consider the sequence of historical contexts and reward vectors (x_i, r̃_i), for i = 1, …, t, where for any action a we define

(5)

It is easy to check that

Since this term is a constant independent of π, we have

and hence, calling AMO once on the sequence (x_i, r̃_i) for i = 1, …, t, we obtain a policy maximizing the quantity above, and thereby identify a policy for which the condition in Step 7 holds whenever one exists. ∎

3.3 Epoch Schedule

Recalling the setting of μ_m in Algorithm 1, Theorem 3 shows that Algorithm 2 solves (OP) with Õ(√(Kt/log N)) calls to AMO in round t. Thus, if we use the trivial epoch schedule τ_m = m (i.e., run Algorithm 2 in every round), then we get a total of Õ(√(KT³/log N)) calls to AMO over all T rounds. This number can be dramatically reduced using a more carefully chosen epoch schedule.

Lemma 2.

For the epoch schedule τ_m = 2^m, the total number of calls to AMO is Õ(√(KT/log N)).

Proof.

The epoch schedule τ_m = 2^m satisfies the epoch-length requirement. With this epoch schedule, Algorithm 2 is run only O(log T) times over T rounds, and since the epochs grow geometrically, the calls made in the final epoch dominate, leading to Õ(√(KT/log N)) total calls to AMO over the entire period. ∎
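
The counting behind Lemma 2 can be sketched directly in Python under the assumption (a stand-in for Theorem 3, ignoring constants and logarithmic factors) that each run of the coordinate descent routine at round τ makes about √(Kτ/log N) oracle calls; with τ_m = 2^m the resulting geometric sum is dominated by its last term, which is of order √(KT/log N).

```python
import math

def doubling_epochs(T):
    """Epoch schedule tau_m = 2^m, truncated at horizon T."""
    epochs, m = [], 1
    while 2 ** m <= T:
        epochs.append(2 ** m)
        m += 1
    return epochs

def estimated_amo_calls(T, K, N, c=1.0):
    """Rough count of AMO calls under the doubling schedule.

    Assumes each call to Algorithm 2 at round tau costs about c * sqrt(K * tau / log N)
    oracle calls; the geometric sum is within a constant factor of its final term.
    """
    return sum(c * math.sqrt(K * tau / math.log(N)) for tau in doubling_epochs(T))
```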

3.4 Warm Start

We now present a different technique to reduce the number of calls to AMO. This is based on the observation that, practically speaking, it seems terribly wasteful, at the start of a new epoch, to throw out the results of all of the preceding computations and to begin yet again from nothing. Instead, intuitively, we expect computations to be more moderate if we begin where we left off last, i.e., a "warm-start" approach. Here, when Algorithm 2 is called at the end of epoch m, we initialize it with Q_{m−1} (the previously computed weights) rather than 0.

We can combine warm start with a different epoch schedule to guarantee Õ(√(KT/log N)) total calls to AMO, spread across roughly √T calls to Algorithm 2.

Lemma 3.

Consider an epoch schedule in which τ_m grows on the order of m² (such a schedule satisfies the epoch-length requirement). With high probability, the warm-start variant of Algorithm 1 makes Õ(√(KT/log N)) calls to AMO over T rounds, across O(√T) calls to Algorithm 2.

3.5 Computational Complexity

So far, we have only considered computational complexity in terms of the number of oracle calls. However, the reduction also involves the creation of cost-sensitive classification examples, which must be accounted for in the net computational cost. As observed in the proof of Lemma 1 (specifically Eq. (5)), this requires computing the probabilities Q^μ(a | x_i) for all past contexts x_i whenever the oracle has to be invoked at round t. According to Lemma 3, the support of the distribution Q at time t can contain at most Õ(√(Kt/log N)) policies (the same as the number of calls to AMO). This would suggest a cost of Õ(t·√(Kt/log N)) for querying the oracle at time t, resulting in an overall computational cost that scales quite poorly with T.

We can, however, do better with some natural bookkeeping. Observe that at the start of round t, the conditional distributions Q(· | x_i) for the past contexts x_i can be represented as a table of size K × t, where rows and columns correspond to actions and contexts, respectively. Upon receiving the new example in round t, the corresponding column can be added to this table in time proportional to |supp(Q)| (where supp(Q) denotes the support of Q), using the projection operation described in Section 2.4. Hence the net cost of these updates over all T rounds stays modest. Furthermore, the cost-sensitive examples needed for AMO can now be obtained by simple table lookups, since the action probabilities are directly available; this involves one lookup per table entry when the oracle is invoked at time t, and again results in an overall cost governed by the table size. Finally, we have to update the table when the distribution Q is updated in Algorithm 2. If we find ourselves in the rescaling Step 4, we can simply store the scaling constant. When we enter Step 8 of the algorithm, we can do a linear scan over the table, rescaling and incrementing the entries. This also results in a cost proportional to the table size when the update happens at time t. Overall, we find that the computational complexity of our algorithm, modulo the oracle running time, is dominated by these table operations.
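
A Python sketch of this bookkeeping; the actions-by-contexts table, the global scale factor absorbing the rescaling of Step 4, and the smoothing formula follow the description above, while the class and method names are our own.

```python
class ActionProbTable:
    """Maintains Q(a | x_i) for all past contexts x_i, with incremental updates."""

    def __init__(self, K):
        self.K = K
        self.columns = []     # columns[i][a] holds the (unscaled) mass Q(a | x_i)
        self.scale = 1.0      # global factor absorbing the rescalings of Step 4

    def add_context(self, x, support):
        """New round: append a column using only the sparse support {policy: weight}."""
        col = [0.0] * self.K
        for pi, w in support.items():
            col[pi(x)] += w / self.scale     # store unscaled so `scale` applies uniformly
        self.columns.append(col)

    def rescale(self, c):
        """Step 4 of Algorithm 2: multiply all weights by c, in O(1) time."""
        self.scale *= c

    def add_weight(self, pi, delta, contexts):
        """Step 8 of Algorithm 2: policy pi's weight grows by delta; linear scan over columns."""
        for i, x in enumerate(contexts):
            self.columns[i][pi(x)] += delta / self.scale

    def prob(self, i, a, mu):
        """Smoothed probability Q^mu(a | x_i) for the i-th stored context."""
        return (1.0 - self.K * mu) * self.scale * self.columns[i][a] + mu
```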

3.6 A Lower Bound on the Support Size

An attractive feature of the coordinate descent algorithm, Algorithm 2, is that the number of oracle calls is directly related to the number of policies in the support of the returned distribution. Specifically, for the doubling schedule of Section 3.3, Theorem 3 implies that we never have non-zero weights for more than Õ(√(Kτ_m/log N)) policies in epoch m. Similarly, the total number of oracle calls for the warm-start approach in Section 3.4 bounds the total number of policies which ever have non-zero weight over all T rounds. The support size of the distributions Q_m in Algorithm 1 is crucial to the computational complexity of sampling an action (Step 4 of Algorithm 1).

In this section, we demonstrate a lower bound showing that it is not possible to construct substantially sparser distributions that also satisfy the low-variance constraint (3) in the optimization problem (OP). To formally define the lower bound, fix an epoch schedule and consider the set of all non-negative weight vectors over policies that satisfy the low-variance constraints (3) at the relevant epochs.

(The distributions computed by Algorithm 1 belong to this set.) Recall that supp(Q) denotes the support of Q (the set of policies to which Q assigns non-zero weight). We have the following lower bound on |supp(Q)|.

Theorem 4.

For any epoch schedule and any sufficiently large T, there exist a distribution D over X × ℝ_+^A and a policy class Π such that, with probability at least 1 − δ, the support of any weight vector in the above set must be large, matching the upper bounds of Lemmas 2 and 3.

The proof of the theorem is deferred to Appendix E. In the context of our problem, this lower bound shows that the bounds in Lemma 2 and Lemma 3 are unimprovable, since the number of calls to AMO is at least the size of the support of the computed distribution, given our mode of access to Π.

4 Regret Analysis

In this section, we outline the regret analysis for our algorithm ILOVETOCONBANDITS, with details deferred to Appendix B and Appendix C.

The deviations of the policy reward estimates R̂_t(π) are controlled by (a bound on) the variance of each term in Eq. (1): essentially the left-hand side of Eq. (3) from (OP), except with the empirical expectation over contexts replaced by the true expectation. Resolving this discrepancy is handled using deviation bounds, so that Eq. (3) also holds with the true expectation, with somewhat worse constants on the right-hand side.

The rest of the analysis, which deviates from that of RandomizedUCB, compares the expected regret Reg(π) of any policy π with its estimated regret R̂eg_t(π) using the variance constraints Eq. (3):

Lemma 4 (Informally).

With high probability, for each epoch m, each round t in epoch m, and each policy π ∈ Π, the expected regret Reg(π) and the estimated regret R̂eg_t(π) are close: each is bounded by a constant multiple of the other, up to an additive term of order Kμ_m.

This lemma can easily be combined with the constraint Eq. (2) from (OP): since the weights Q_{m−1} used in any round of epoch m satisfy Eq. (2), we obtain a bound on the (conditionally) expected regret of the algorithm in that round using the above lemma: with high probability,

Summing these terms up over all rounds and applying martingale concentration gives the final regret bound in Theorem 2.

5 Analysis of the Optimization Algorithm

In this section, we give a sketch of the analysis of our main optimization algorithm for computing weights on each epoch as in Algorithm 2. As mentioned in Section 3.1, this analysis is based on a potential function.

Since our attention for now is on a single epoch m, here and in what follows, when clear from context, we drop the epoch index from our notation and write simply μ, b_π, and so on. Let U denote the uniform distribution over the action set A. We define the following potential function for use on epoch m, up to positive scaling constants:

Φ(Q) := Ê_{x∼H}[ RE(U ∥ Q^μ(· | x)) ] + Σ_{π∈Π} Q(π) b_π.   (6)

The function Φ in Eq. (6) is defined for all non-negative weight vectors Q over Π. Here, RE(p ∥ q) denotes the unnormalized relative entropy between two nonnegative vectors p and q over the action space (or any set) A: RE(p ∥ q) := Σ_{a∈A} [ p_a ln(p_a/q_a) + q_a − p_a ]; this number is always nonnegative. Also, Q^μ(· | x) denotes the "distribution" (which might not sum to 1) over A induced by Q^μ for context x, as given in Section 2.4. Thus, ignoring constants, this potential function is a combination of two terms: the first measures how far from uniform the distributions induced by Q are, and the second is an estimate of the expected regret under Q, since b_π is proportional to the empirical regret of π. Making Φ small thus encourages Q to choose actions as uniformly as possible while also incurring low regret, which is exactly the aim of our algorithm. The constants that appear in this definition are for later mathematical convenience.
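
For concreteness, a direct Python translation of the unnormalized relative entropy just defined (with the usual convention that a term with p_a = 0 contributes only q_a):

```python
import math

def unnormalized_re(p, q):
    """Unnormalized relative entropy RE(p || q) between nonnegative vectors p and q.

    RE(p || q) = sum_a [ p_a * ln(p_a / q_a) + q_a - p_a ], which is always >= 0.
    Assumes q_a > 0 wherever p_a > 0 (true here, since smoothed probabilities are >= mu).
    """
    total = 0.0
    for pa, qa in zip(p, q):
        if pa > 0.0:
            total += pa * math.log(pa / qa)
        total += qa - pa
    return total
```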

For further intuition, note that, by straightforward calculus, the partial derivative of Φ with respect to Q(π) is roughly proportional to the amount by which the variance constraint for π given in Eq. (3) is violated (up to a slight mismatch of constants). This shows that if this constraint is not satisfied, then the partial derivative is likely to be negative, meaning that Φ can be decreased by increasing Q(π). Thus, the weight vector Q that minimizes Φ satisfies the variance constraint for every policy π. It turns out that this minimizing Q also satisfies the low-regret constraint in Eq. (2), and also must sum to at most 1; in other words, it provides a complete solution to our optimization problem. Algorithm 2 does not fully minimize Φ, but it is based roughly on coordinate descent: in each iteration one of the weights (coordinate directions) is increased, namely one whose corresponding partial derivative is large and negative.

To analyze the algorithm, we first argue that it is correct in the sense of satisfying the required constraints, provided that it halts.

Lemma 5.

If Algorithm 2 halts and outputs a weight vector Q, then the constraints Eq. (3) and Eq. (2) must hold, and furthermore the sum of the weights Q(π) is at most 1.

The proof is rather straightforward: following Step 4, Eq. (2) must hold, and the weights must also sum to at most 1. And if the algorithm halts, then the condition checked in Step 7 fails for every π ∈ Π, which is equivalent to Eq. (3).

What remains is the more challenging task of bounding the number of iterations until the algorithm does halt. We do this by showing that significant progress is made in reducing Φ on every iteration. To begin, we show that scaling as in Step 4 cannot cause Φ to increase.

Lemma 6.

Let Q be a weight vector for which the test in Step 4 succeeds, and let c be as defined in Eq. (4). Then Φ(cQ) ≤ Φ(Q).

Proof sketch.

We consider Φ(cQ) as a function of the scaling c, and argue that its derivative (with respect to c) at the value of c given in the lemma statement is always nonnegative. Therefore, by convexity, it is nondecreasing for all values exceeding this c. Since c ≤ 1, this proves the lemma. ∎

Next, we show that substantial progress will be made in reducing Φ each time that Step 8 is executed.

Lemma 7.

Let Q denote a set of weights and suppose, for some policy π, that the condition checked in Step 7 holds. Let Q′ be a new set of weights which is an exact copy of Q, except that Q′(π) = Q(π) + α, where α is the (positive) quantity added in Step 8. Then the potential decreases by at least the amount given in the following bound:

(7)

Proof sketch.

We first compute exactly the change in potential for a general increment of Q(π). Next, we apply a second-order Taylor approximation, which is maximized by the increment α used in the algorithm. The Taylor approximation, for this α, yields a lower bound on the decrease, which can be further simplified using the fact that the smoothed action probabilities are always at least μ, together with our assumption that the condition in Step 7 holds for π. This gives the bound stated in the lemma. ∎

So Step 4 does not cause Φ to increase, and Step 8 causes Φ to decrease by at least the amount given in Lemma 7. This immediately implies Theorem 3: for Q^init = 0, the initial potential is bounded, and the potential is never negative, so the number of times Step 8 can be executed is bounded as required.

5.1 Epoching and Warm Start

As shown in Section 3.2, the bound on the number of iterations of the algorithm from Theorem 3 also gives a bound on the number of times the oracle is called. To reduce the number of oracle calls, one approach is the "doubling trick" of Section 3.3, which ensures that the total combined number of iterations of Algorithm 2 in the first T rounds is only Õ(√(KT/log N)). This means that the average number of calls to the arg-max oracle is only Õ(√(K/(T log N))) per round; that is, the oracle is called far less than once per round, and in fact at a vanishingly low rate.

We now turn to the warm-start approach of Section 3.4, where in each epoch m we initialize the coordinate descent algorithm with Q^init := Q_{m−1}, i.e., the weights computed in the previous epoch. To analyze this, we bound how much the potential can increase from the end of epoch m − 1 to the very start of epoch m. Combining this with our earlier results on how quickly Algorithm 2 drives down the potential, we obtain an overall bound on the total number of updates across T rounds.

Lemma 8.

Let be the largest integer for which . With probability at least , for all , the total epoch-to-epoch increase in potential is

where is the largest integer for which .

Proof sketch.

The potential function, as written in Eq. (6), naturally breaks into two pieces whose epoch-to-epoch changes can be bounded separately. Changes affecting the relative entropy term on the left can be bounded, regardless of , by taking advantage of the manner in which these distributions are smoothed. For the other term on the right, it turns out that these epoch-to-epoch changes are related to statistical quantities which can be bounded with high probability. Specifically, the total change in this term is related first to how the estimated reward of the empirically best policy compares to the expected reward of the optimal policy; and second, to how the reward received by our algorithm compares to that of the optimal reward. From our regret analysis, we are able to show that both of these quantities will be small with high probability. ∎

This lemma, along with Lemma 7, can be used to establish Lemma 3. We only provide an intuitive sketch here, with the details deferred to the appendix. As we observe in Lemma 8, the total amount by which the potential can increase across T rounds is bounded. On the other hand, Lemma 7 shows that each time Q is updated by Algorithm 2, the potential decreases by at least a fixed amount (using our choice of μ_m). Therefore, the total number of updates of the algorithm over all T rounds is at most Õ(√(KT/log N)). For instance, if we use an epoch schedule with τ_m of order m², then the weight vector is only updated on about √T rounds, and on each of those rounds Algorithm 2 requires Õ(√(K/log N)) iterations on average, giving the claim in Lemma 3.

6 Experimental Evaluation

Algorithm   | ε-greedy | Explore-first | Bagging | LinUCB            | Online Cover | Supervised
P.V. Loss   |          |               |         |                   |              |
Searched    | ε        | first         | bags    | dim, minibatch-10 | cover        | nothing
Seconds     |          |               |         |                   |              |

Table 1: Progressive validation loss, best hyperparameter values, and running times of various algorithms on RCV1.

In this section we evaluate a variant of Algorithm 1 against several baselines. While Algorithm 1 is significantly more efficient than many previous approaches, its overall computational complexity is still superlinear in T, in addition to the total cost of the oracle calls, as discussed in Section 3.5. This is markedly larger than the complexity of an ordinary supervised learning problem, where it is typically possible to perform an update of (amortized) constant cost upon receiving a fresh example using online algorithms.

A natural solution is to use an online oracle that is stateful and accepts examples one by one. An online cost-sensitive classification (CSC) oracle takes as input a weighted example and returns a predicted class (corresponding to one of the K actions in our setting). Since the oracle is stateful, it remembers and uses examples from all previous calls when answering queries, thereby reducing the complexity of each oracle invocation to that of a single online update, as in supervised learning. Using several such oracles, we can efficiently track a distribution over good policies and sample from it. We detail this approach (which we call Online Cover) in the full version of the paper. The algorithm maintains a uniform distribution over a fixed number of policies, where this number (the cover size) is a parameter of the algorithm. Upon receiving a fresh example, it updates all of these policies with suitable CSC examples (Eq. (5)). The specific CSC oracle we use is a reduction to squared-loss regression (Algorithms 4 and 5 of Beygelzimer and Langford (2009)) which is amenable to online updates. Our implementation is included in Vowpal Wabbit (http://hunch.net/~vw); it lives in the file cbify.cc and is enabled using --cover.
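
Below is a purely schematic Python sketch of the Online Cover idea described in this paragraph: a fixed-size set of online learners, uniform sampling among them with a small amount of additional uniform exploration, and IPS-style cost-sensitive updates. The class names, the per-action squared-loss regressors, and the exploration and update constants are our simplifications; the implemented algorithm (in cbify.cc) uses the more careful exploration bonuses described in the full version of the paper.

```python
import random

class OnlineCSCRegressor:
    """Hypothetical online cost-sensitive oracle: one linear squared-loss regressor per action."""
    def __init__(self, K, dim, lr=0.1):
        self.w = [[0.0] * dim for _ in range(K)]
        self.lr = lr
    def predict(self, x):
        costs = [sum(wi * xi for wi, xi in zip(wa, x)) for wa in self.w]
        return min(range(len(costs)), key=lambda a: costs[a])      # lowest predicted cost
    def update(self, x, costs):
        for a, c in enumerate(costs):                              # one regression step per action
            pred = sum(wi * xi for wi, xi in zip(self.w[a], x))
            g = pred - c
            self.w[a] = [wi - self.lr * g * xi for wi, xi in zip(self.w[a], x)]

class OnlineCoverSketch:
    """Keep a cover of online policies; act by sampling one uniformly, plus mu-uniform smoothing."""
    def __init__(self, cover_size, K, dim, mu=0.05):
        self.policies = [OnlineCSCRegressor(K, dim) for _ in range(cover_size)]
        self.K = K
        self.mu = min(mu, 1.0 / K)
    def choose(self, x):
        if random.random() < self.K * self.mu:
            a = random.randrange(self.K)                           # small uniform exploration
        else:
            a = random.choice(self.policies).predict(x)
        agree = sum(pi.predict(x) == a for pi in self.policies) / len(self.policies)
        p = self.mu + (1.0 - self.K * self.mu) * agree             # probability of the chosen action
        return a, p
    def update(self, x, a, r, p):
        costs = [0.0] * self.K
        costs[a] = (1.0 - r) / p        # IPS cost for the chosen action (real code uses doubly robust)
        for pi in self.policies:        # real Online Cover adds per-policy exploration bonuses
            pi.update(x, costs)
```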

Due to the lack of public datasets for contextual bandit problems, we use a simple supervised-to-contextual-bandit transformation (Dudík et al., 2011b) on the CCAT document classification problem in RCV1 (Lewis et al., 2004), a large dataset with TF-IDF features; a sketch of this transformation appears after the list of baselines below. We treated the class labels as actions, and one minus the 0/1 loss as the reward. Our evaluation criterion is progressive validation (Blum et al., 1999) on the 0/1 loss. We compare several baseline algorithms to Online Cover; all algorithms take advantage of linear representations, which are known to work well on this dataset. For each algorithm, we report the result for the best parameter settings (shown in Table 1).

  1. ε-greedy (Sutton and Barto, 1998) explores randomly with probability ε and otherwise exploits.

  2. Explore-first is a variant that begins with uniform exploration, then switches to an exploit-only phase.

  3. A less common but powerful baseline is based on bagging: multiple predictors (policies) are trained with examples sampled with replacement. Given a context, these predictors yield a distribution over actions from which we can sample.

  4. LinUCB (Auer, 2002; Chu et al., 2011) has been quite effective in past evaluations (Li et al., 2010; Chapelle and Li, 2011). It is impractical to run "as is" due to high-dimensional matrix inversions, so we report results for this algorithm after reducing the dimensionality via random projections. Even so, the algorithm required hours of computation (the linear algebra routines are based on the Intel MKL package). An alternative is to use a diagonal approximation to the covariance, which runs substantially faster (1 hour) but gives a worse error of 0.137.

  5. Finally, our algorithm achieves the best loss among the contextual bandit approaches. Somewhat surprisingly, the minimum occurs for us with a cover set of size 1: apparently, for this problem, the small decaying amount of uniform random sampling imposed by the algorithm is adequate exploration. Prediction performance is similar with a larger cover set.
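
As referenced above, here is a minimal sketch of the supervised-to-contextual-bandit transformation and the progressive validation loop, in the spirit of Dudík et al. (2011b); the function and interface names are ours.

```python
def bandit_simulation(examples, learner, K):
    """Run a contextual bandit learner on a supervised multiclass dataset.

    examples: iterable of (features, true_label) pairs with labels in {0, ..., K-1}.
    The learner sees only the context and the reward of the action it chose
    (1 if the action equals the true label, 0 otherwise). Returns the progressive
    validation 0/1 loss of the actions actually chosen.
    """
    mistakes, n = 0, 0
    for x, y in examples:
        a, p = learner.choose(x)
        r = 1.0 if a == y else 0.0
        learner.update(x, a, r, p)
        mistakes += int(a != y)
        n += 1
    return mistakes / max(n, 1)
```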

All baselines except for LinUCB are implemented as simple modifications of Vowpal Wabbit. All reported results use default parameters where not otherwise specified. The contextual bandit learning algorithms all use a doubly robust reward estimator (Dudík et al., 2011b) instead of the importance-weighted estimator used in our analysis.

Because RCV1 is actually a fully supervised dataset, we can also apply a fully supervised online multiclass algorithm to it. We use a simple one-against-all implementation to reduce this to binary classification, yielding an error rate that is competitive with the best previously reported results. This is effectively a lower bound on the loss we can hope to achieve with algorithms using only partial information. Our algorithm is less than 2.3 times slower than this fully supervised baseline and nearly achieves its loss. Hence, on this dataset, very little further algorithmic improvement is possible.

7 Conclusions

In this paper we have presented the first practical algorithm, to our knowledge, that attains the statistically optimal regret guarantee and is computationally efficient in the setting of general policy classes. A key feature of the algorithm is that the total number of oracle calls over all T rounds is sublinear in T, a dramatic improvement over previous work in this setting. We believe that the online variant of the approach, which we implemented in our experiments, has the right practical flavor for a scalable solution to the contextual bandit problem. In future work, it would be interesting to directly analyze the Online Cover algorithm.

Acknowledgements

We thank Dean Foster and Matus Telgarsky for helpful discussions. Part of this work was completed while DH and RES were visiting Microsoft Research.

References

  • Auer (2002) Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3:397–422, 2002.
  • Auer et al. (2002) Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal of Computing, 32(1):48–77, 2002.
  • Beygelzimer and Langford (2009) Alina Beygelzimer and John Langford. The offset tree for learning with partial labels. In KDD, 2009.
  • Beygelzimer et al. (2011) Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert E. Schapire. Contextual bandit algorithms with supervised learning guarantees. In AISTATS, 2011.
  • Blum et al. (1999) Avrim Blum, Adam Kalai, and John Langford. Beating the holdout: Bounds for k-fold and progressive cross-validation. In COLT, 1999.
  • Chapelle and Li (2011) Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In NIPS, 2011.
  • Chu et al. (2011) Wei Chu, Lihong Li, Lev Reyzin, and Robert E. Schapire. Contextual bandits with linear payoff functions. In AISTATS, 2011.
  • Dudík et al. (2011a) Miroslav Dudík, Daniel Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, and Tong Zhang. Efficient optimal learning for contextual bandits. In UAI, 2011a.
  • Dudík et al. (2011b) Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. In ICML, 2011b.
  • Helmbold and Schapire (1997) David P. Helmbold and Robert E. Schapire. Predicting nearly as well as the best pruning of a decision tree. Machine Learning, 27(1):51–68, 1997.
  • Langford (2014) John Langford. Interactive machine learning, January 2014. URL http://hunch.net/~jl/projects/interactive/index.html.
  • Langford and Zhang (2007) John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In NIPS, 2007.
  • Lewis et al. (2004) David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 2004.
  • Li (2013) Lihong Li. Generalized Thompson sampling for contextual bandits. CoRR, abs/1310.7163, 2013.
  • Li et al. (2010) Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In WWW, 2010.
  • McMahan and Streeter (2009) H. Brendan McMahan and Matthew Streeter. Tighter bounds for multi-armed bandits with expert advice. In COLT, 2009.
  • Sutton and Barto (1998) Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
  • Thompson (1933) William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3–4):285–294, 1933.

Appendix A Omitted Algorithm Details

Algorithm 3 and Algorithm 4 give the details of the inverse propensity scoring transformation (IPS) and the action sampling procedure (Sample).

0:  History H.
0:  Data set S of context/reward vector pairs.
1:  Initialize data set S := ∅.
2:  for each (x, a, r(a), p(a)) ∈ H do
3:     Create the fictitious reward vector r̂ with r̂(a) := r(a)/p(a) and r̂(a′) := 0 for all a′ ≠ a.
4:     S := S ∪ {(x, r̂)}.
5:  end for
6:  return S.
Algorithm 3 IPS
0:  Context x ∈ X, weights Q over policies, default policy π̄, minimum probability μ.
0:  Selected action ā ∈ A and its probability p(ā).
1:  Let Q̃ be Q with the leftover mass 1 − Σ_{π∈Π} Q(π) added to the default policy π̄ (so that Σ_{π∈Π} Q̃(π) = 1).
2:  Randomly draw an action ā using the smoothed distribution Q̃^μ(· | x).
3:  Let p(ā) := Q̃^μ(ā | x).
4:  return (ā, p(ā)).
Algorithm 4 Sample

Appendix B Deviation Inequalities

B.1 Freedman's Inequality

The following form of Freedman’s inequality for martingales is from Beygelzimer et al. (2011).

Lemma 9.

Let X_1, X_2, …, X_n be a sequence of real-valued random variables. Assume that, for all i, X_i ≤ R and E[X_i | X_1, …, X_{i−1}] = 0. Define S := Σ_{i=1}^n X_i and V := Σ_{i=1}^n E[X_i² | X_1, …, X_{i−1}]. For any δ ∈ (0, 1) and any λ ∈ [0, 1/R], with probability at least 1 − δ,

S ≤ (e − 2) λ V + ln(1/δ) / λ.

B.2 Variance Bounds

Fix the epoch schedule 0 = τ_0 < τ_1 < τ_2 < ⋯.

Define the following for any probability distribution P over Π, any μ ∈ [0, 1/K], and any π ∈ Π:

V(P, π, μ) := E_{x∼D_X}[ 1 / P^μ(π(x) | x) ],   (8)
V̂_m(P, π, μ) := Ê_{x∼H_{τ_m}}[ 1 / P^μ(π(x) | x) ].   (9)

The proof of the following lemma is essentially the same as that of Theorem 6 from Dudík et al. (2011a).

Lemma 10.

Fix any μ_m ∈ [0, 1/K] for m ≥ 1. For any δ ∈ (0, 1), with probability at least 1 − δ, the empirical variance quantities in Eq. (9) are close to their population counterparts in Eq. (8)

for all probability distributions P over Π, all m ≥ 1, and all π ∈ Π. In particular, if