Contextual Semibandits via Supervised Learning Oracles

02/20/2015 · by Akshay Krishnamurthy, et al. · University of Massachusetts Amherst; Microsoft

We study an online decision making problem where on each round a learner chooses a list of items based on some side information, receives a scalar feedback value for each individual item, and a reward that is linearly related to this feedback. These problems, known as contextual semibandits, arise in crowdsourcing, recommendation, and many other domains. This paper reduces contextual semibandits to supervised learning, allowing us to leverage powerful supervised learning methods in this partial-feedback setting. Our first reduction applies when the mapping from feedback to reward is known and leads to a computationally efficient algorithm with near-optimal regret. We show that this algorithm outperforms state-of-the-art approaches on real-world learning-to-rank datasets, demonstrating the advantage of oracle-based algorithms. Our second reduction applies to the previously unstudied setting when the linear mapping from feedback to reward is unknown. Our regret guarantees are superior to prior techniques that ignore the feedback.


1 Introduction

Decision making with partial feedback, motivated by applications including personalized medicine (Robins, 1989) and content recommendation (Li et al., 2010), is receiving increasing attention from the machine learning community. These problems are formally modeled as learning from bandit feedback, where a learner repeatedly takes an action and observes a reward for the action, with the goal of maximizing reward. While bandit learning captures many problems of interest, several applications have additional structure: the action is combinatorial in nature and more detailed feedback is provided. For example, in internet applications, we often recommend sets of items and record information about the user's interaction with each individual item (e.g., click). This additional feedback is unhelpful unless it relates to the overall reward (e.g., number of clicks), and, as in previous work, we assume a linear relationship. This interaction is known as the semibandit feedback model.

Typical bandit and semibandit algorithms achieve reward that is competitive with the single best fixed action, i.e., the best medical treatment or the most popular news article for everyone. This is often inadequate for recommendation applications: while the most popular articles may get some clicks, personalizing content to the users is much more effective. A better strategy is therefore to leverage contextual information to learn a rich policy for selecting actions, and we model this as contextual semibandits. In this setting, the learner repeatedly observes a context (user features), chooses a composite action (list of articles), which is an ordered tuple of simple actions, and receives reward for the composite action (number of clicks), but also feedback about each simple action (click). The goal of the learner is to find a policy for mapping contexts to composite actions that achieves high reward.

We typically consider policies in a large but constrained class, for example, linear learners or tree ensembles. Such a class enables us to learn an expressive policy, but introduces a computational challenge of finding a good policy without direct enumeration. We build on the supervised learning literature, which has developed fast algorithms for such policy classes, including logistic regression and SVMs for linear classifiers and boosting for tree ensembles. We access the policy class exclusively through a supervised learning algorithm, viewed as an oracle.

In this paper, we develop and evaluate oracle-based algorithms for the contextual semibandits problem. We make the following contributions:

  • In the more common setting where the linear function relating the semibandit feedback to the reward is known, we develop a new algorithm, called VCEE, that extends the oracle-based contextual bandit algorithm of Agarwal et al. (2014). We show that VCEE enjoys a regret bound whose rate interpolates between two regimes depending on the combinatorial structure of the problem, when there are T rounds of interaction, K simple actions, N policies, and composite actions of length L. (Throughout the paper, the Õ(·) notation suppresses factors polylogarithmic in K, L, and T. We analyze finite policy classes, but our work extends to infinite classes by standard discretization arguments.) VCEE can handle structured action spaces and accesses the policy class only through calls to the supervised learning oracle.

  • We empirically evaluate this algorithm on two large-scale learning-to-rank datasets and compare with other contextual semibandit approaches. These experiments comprehensively demonstrate that effective exploration over a rich policy class can lead to significantly better performance than existing approaches. To our knowledge, this is the first thorough experimental evaluation of not only oracle-based semibandit methods, but of oracle-based contextual bandits as well.

  • When the linear function relating the feedback to the reward is unknown, we develop a new algorithm called EELS. Our algorithm first learns the linear function by uniform exploration and then, adaptively, switches to act according to an empirically optimal policy. We prove a T^{2/3}-type regret bound by analyzing when to switch. We are not aware of other computationally efficient procedures with a matching or better regret bound for this setting.

Algorithm Regret Oracle Calls Weights
VCEE (Thm. 1) known
ε-Greedy (Thm. A) known
Kale et al. (2010) not oracle-based known
EELS (Thm. 2) unknown
Agarwal et al. (2014) unknown
Swaminathan et al. (2016) unknown
Table 1: Comparison of contextual semibandit algorithms for arbitrary policy classes, assuming all rankings are valid composite actions. The reward is the semibandit feedback weighted according to the weight vector w*. For known weights, we consider uniform weights; for unknown weights, we assume a bound on the norm of w*.

See Table 1 for a comparison of our results with existing applicable bounds.

Related work.

There is a growing body of work on combinatorial bandit optimization (Cesa-Bianchi and Lugosi, 2012; Audibert et al., 2014), with considerable attention on semibandit feedback (György et al., 2007; Kale et al., 2010; Chen et al., 2013; Qin et al., 2014; Kveton et al., 2015). The majority of this research focuses on the non-contextual setting with a known relationship between semibandit feedback and reward, and a typical algorithm here achieves sublinear regret against the best fixed composite action. To our knowledge, only the work of Kale et al. (2010) and Qin et al. (2014) considers the contextual setting, again with a known relationship. The former generalizes the Exp4 algorithm (Auer et al., 2002) to semibandits and achieves a comparable regret bound (Kale et al. (2010) consider the favorable setting, where our bounds match when uniform exploration is valid), but requires explicit enumeration of the policies. The latter generalizes the LinUCB algorithm of Chu et al. (2011) to semibandits, assuming that the simple-action feedback is linearly related to the context. This differs from our setting: we make no assumptions about the simple-action feedback. In our experiments, we compare VCEE against this LinUCB-style algorithm and demonstrate substantial improvements.

We are not aware of attempts to learn a relationship between the overall reward and the feedback on simple actions as we do with EELS. While EELS uses least squares, as in LinUCB-style approaches, it does so without assumptions on the semibandit feedback. Crucially, the covariates for its least squares problem are observed after predicting a composite action and not before, unlike in LinUCB.

Supervised learning oracles have been used as a computational primitive in many settings, including active learning (Hsu, 2010), contextual bandits (Rakhlin and Sridharan, 2016; Syrgkanis et al., 2016; Agarwal et al., 2014; Dudík et al., 2011), and structured prediction (Daumé III et al., 2009).

2 Preliminaries

Let X be a space of contexts and A a set of K simple actions. Let Π be a finite set of N policies mapping contexts to composite actions. Composite actions, also called rankings, are tuples of L distinct simple actions. In general, there are K!/(K−L)! possible rankings, but they might not all be valid in all contexts. The set of valid rankings for a context is defined implicitly through the policy class, as the set of rankings chosen by at least one policy on that context.

We consider distributions over policies and, more generally, non-negative weight vectors over policies summing to at most 1, which we call subdistributions. We write 1(·) for the 0/1 indicator, equal to 1 if its argument is true and 0 otherwise.

In stochastic contextual semibandits, there is an unknown distribution over triples (x, y, ξ), where x is a context, y is the vector of reward features with one entry y(a) for each simple action a, and ξ is zero-mean reward noise. Given a ranking A and the feature vector y, we write y(A) for the vector of features of the actions in A. The learner plays a T-round game. In each round, nature draws a triple from this distribution and reveals the context x. The learner selects a valid ranking A and receives reward w*·y(A) + ξ, where w* is a possibly unknown but fixed weight vector. The learner is shown the reward and the vector of reward features y(A) for the chosen simple actions, jointly referred to as semibandit feedback.
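To make this protocol concrete, the following is a minimal Python sketch of the interaction loop described above; the problem sizes, the feature generator, and the weight vector are toy placeholders rather than anything used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    K, L, T = 10, 3, 5           # simple actions, ranking length, rounds (toy sizes)
    w_star = np.ones(L)          # fixed (possibly unknown) position weights

    def draw_context_and_features():
        """Nature draws a context and a reward-feature vector with one entry per simple action."""
        x = rng.normal(size=8)                 # context features
        y = rng.uniform(size=K)                # reward features, indexed by simple action
        xi = 0.1 * rng.normal()                # reward noise
        return x, y, xi

    for t in range(T):
        x, y, xi = draw_context_and_features()
        ranking = rng.choice(K, size=L, replace=False)   # learner picks L distinct simple actions
        feedback = y[ranking]                            # semibandit feedback: feature of each chosen action
        reward = w_star @ feedback + xi                  # reward: weighted sum of feedback plus noise
        # the learner observes (reward, feedback) but never y for unchosen actions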

The goal is to achieve cumulative reward competitive with every policy in Π. For a policy π, let R(π) denote its expected reward, and let π* ∈ Π be the maximizer of expected reward. We measure the performance of an algorithm via its cumulative empirical regret,

    Regret = Σ_{t=1}^{T} [ w*·y_t(π*(x_t)) − w*·y_t(A_t) ],    (1)

where A_t is the ranking played in round t. The performance of a policy π is measured by its expected regret, R(π*) − R(π).

In personalized search, a learning system repeatedly responds to queries with rankings of search items. This is a contextual semibandit problem where the query and user features form the context, the simple actions are search items, and the composite actions are their lists. The semibandit feedback is whether the user clicked on each item, while the reward may be the click-based discounted cumulative gain (DCG), which is a weighted sum of clicks, with position-dependent weights. We want to map contexts to rankings to maximize DCG and achieve a low regret.
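As a concrete instance of such a position-weighted reward, the snippet below computes a click-based DCG assuming the common 1/log2(1 + position) discount; the exact position weights used in a given application may differ.

    import math

    def click_dcg(clicks):
        """Click-based DCG: a position-weighted sum of per-item click feedback.

        clicks[j] is the semibandit feedback (0/1 click) for the item shown in
        position j (0-indexed); the 1/log2(j + 2) discount is the usual DCG choice.
        """
        return sum(c / math.log2(j + 2) for j, c in enumerate(clicks))

    print(click_dcg([1, 0, 1]))  # clicks in positions 1 and 3 of the ranking

With the weight vector set to these position discounts, the reward in the protocol above is exactly this click-based DCG.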

We assume that our algorithms have access to a supervised learning oracle, also called an argmax oracle, denoted AMO, that can find a policy with the maximum empirical reward on any appropriate dataset. Specifically, given a dataset {(x_i, y_i, w_i)} of contexts x_i, reward feature vectors y_i with rewards for all simple actions, and weight vectors w_i, the oracle computes

    AMO({(x_i, y_i, w_i)}) = argmax_{π∈Π} Σ_i Σ_{j=1}^{L} w_i(j) · y_i(π_j(x_i)),    (2)

where π_j(x_i) is the jth simple action that policy π chooses on context x_i. The oracle is supervised as it assumes known features for all simple actions, whereas we only observe them for chosen actions. This oracle is the structured generalization of the one considered in contextual bandits (Agarwal et al., 2014; Dudík et al., 2011) and can be implemented by any structured prediction approach such as CRFs (Lafferty et al., 2001) or SEARN (Daumé III et al., 2009).
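For illustration, here is a brute-force sketch of the AMO interface over a small, explicitly enumerated policy class; the policy representation and data layout are hypothetical, and a practical implementation would instead call a supervised learner as described in Appendix C.

    import numpy as np

    def argmax_oracle(policies, dataset):
        """Brute-force AMO: return the policy with the largest empirical weighted reward.

        policies is a list of functions x -> ranking (tuple of simple actions).
        dataset is a list of (x, y, w) with y[a] a reward feature for every simple
        action a and w[j] a weight for position j, as in Eq. (2).
        """
        def empirical_reward(pi):
            return sum(sum(w[j] * y[a] for j, a in enumerate(pi(x))) for x, y, w in dataset)
        return max(policies, key=empirical_reward)

    # toy example: two policies, each ranking 4 items by one block of context features
    K, L = 4, 2
    rng = np.random.default_rng(1)
    policies = [
        (lambda x, f=f: tuple(np.argsort(-x[f * K:(f + 1) * K])[:L])) for f in range(2)
    ]
    dataset = [(rng.normal(size=2 * K), rng.uniform(size=K), np.ones(L)) for _ in range(20)]
    best = argmax_oracle(policies, dataset)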

Our algorithms choose composite actions by sampling from a distribution, which allows us to use importance weighting to construct unbiased estimates of the reward features. If on round t a composite action A_t is chosen, and q_t(a) denotes the marginal probability that simple action a is included in the chosen ranking, we construct the importance-weighted feature vector ŷ_t with components ŷ_t(a) = y_t(a) · 1{a ∈ A_t} / q_t(a), which are unbiased estimators of y_t(a). For a policy π, we then define empirical estimates of its reward and regret, respectively, by evaluating π on the observed contexts with the importance-weighted features ŷ_t in place of y_t.

By construction, the empirical reward estimate is an unbiased estimate of the expected reward R(π), but the empirical regret is not an unbiased estimate of the expected regret. We use Ê to denote empirical expectation over contexts appearing in the history of interaction.
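A minimal sketch of this importance-weighting construction, assuming the per-action inclusion probabilities of the sampling distribution are available; the function names are illustrative only.

    import numpy as np

    def importance_weighted_features(chosen, feedback, marginals, K):
        """Unbiased estimate of the full reward-feature vector from semibandit feedback.

        chosen are the simple actions in the played ranking, feedback[i] is the
        observed reward feature of chosen[i], and marginals[a] is the probability
        that the sampling distribution includes simple action a. Unchosen actions
        get estimate 0, so every coordinate is unbiased in expectation.
        """
        y_hat = np.zeros(K)
        for a, v in zip(chosen, feedback):
            y_hat[a] = v / marginals[a]
        return y_hat

    def empirical_policy_reward(pi, history, weights):
        """Average importance-weighted reward of the rankings pi would have chosen."""
        return np.mean([weights @ y_hat[list(pi(x))] for x, y_hat in history])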

Finally, we introduce projections and smoothing of distributions. For any γ ∈ [0, 1] and any subdistribution Q over policies, the smoothed and projected conditional subdistribution is

    Q^γ(A | x) = (1 − γ) Σ_{π∈Π} Q(π) · 1{π(x) = A} + γ · μ_x(A),    (3)

where μ_x is a uniform distribution over a certain subset of valid rankings for context x, designed to ensure that the probability of choosing each valid simple action is large. By mixing μ_x into our action selection, we limit the variance of the reward feature estimates ŷ_t. The lower bound on the simple-action probabilities under μ_x appears in our analysis as p_min, the largest number such that each simple action valid for x is included in a ranking drawn from μ_x with probability at least p_min, for all contexts x. Note that when there are no restrictions on the action space, we can take μ_x to be the uniform distribution over all rankings and verify that p_min = L/K. In the worst case, p_min ≥ 1/K, since we can always find one valid ranking for each valid simple action and let μ_x be the uniform distribution over this set of at most K rankings. Such a ranking can be found efficiently by a call to AMO for each simple action a, with a dataset consisting of a single point whose reward features place all value on a.
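The following sketch illustrates Eq. (3)-style action selection: with probability γ we draw from the exploration distribution μ_x, and otherwise from the projected policy distribution. The interfaces (policy_dist, mu_rankings) are hypothetical placeholders.

    import numpy as np

    def sample_ranking(policy_dist, mu_rankings, gamma, x, rng):
        """Sample a ranking from a smoothed distribution in the spirit of Eq. (3).

        With probability gamma, play a ranking drawn uniformly from mu_rankings(x),
        the exploration set that covers every valid simple action; otherwise draw a
        policy from policy_dist (a list of (policy, prob) pairs) and play its ranking.
        """
        if rng.random() < gamma:
            candidates = mu_rankings(x)
            return candidates[rng.integers(len(candidates))]
        policies, probs = zip(*policy_dist)
        idx = rng.choice(len(policies), p=np.asarray(probs) / np.sum(probs))
        return policies[idx](x)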

3 Semibandits with known weights

We begin with the setting where the weights are known, and present an efficient oracle-based algorithm (VCEE, see Algorithm 1) that generalizes the algorithm of Agarwal et al. (2014).

The algorithm, before each round t, constructs a subdistribution Q_t over policies, which is used to form a distribution by placing the missing mass on the maximizer of empirical reward. The composite action for the current context is chosen according to the smoothed distribution (see Eq. (3)). The subdistribution is any solution to the feasibility problem (OP), which balances exploration and exploitation via the constraints in Eqs. (4) and (5). Eq. (4) ensures that the distribution has low empirical regret. Simultaneously, Eq. (5) ensures that the variance of the reward estimates remains sufficiently small for each policy π, which helps control the deviation between empirical and expected regret, and implies that the distribution has low expected regret. For each π, the variance constraint is based on the empirical regret of π, guaranteeing sufficient exploration amongst all good policies.

0:  Allowed failure probability .
1:  , the all-zeros vector. . Define: .
2:  for round  do
3:     Let and .
4:     Observe , play (see Eq. (3)), and observe and .
5:     Define for each .
6:     Obtain by solving OP with and .
7:  end for

 

Semi-bandit Optimization Problem (OP)

With history and , define and . Find such that:

(4)
(5)
Algorithm 1 VCEE (Variance-Constrained Explore-Exploit) Algorithm

OP can be solved efficiently using AMO and a coordinate descent procedure obtained by modifying the algorithm of Agarwal et al. (2014). While the full algorithm and analysis are deferred to Appendix E, several key differences between VCEE and the algorithm of Agarwal et al. (2014) are worth highlighting. One crucial modification is that the variance constraint in Eq. (5) involves the marginal probabilities of the simple actions rather than the probabilities of the composite actions, as would be the most obvious adaptation to our setting. This change, based on using the reward estimates for simple actions, leads to substantially lower variance of reward estimates for all policies and, consequently, an improved regret bound. Another important modification is the new mixing distribution μ_x and the associated quantity p_min. For structured composite action spaces, uniform exploration over the valid composite actions may not provide sufficient coverage of each simple action and may lead to a dependence on the size of the composite action space, which is exponentially worse than when μ_x is used.

The regret guarantee for Algorithm 1 is the following:

For any δ ∈ (0, 1), with probability at least 1 − δ, VCEE achieves the regret bound stated in Table 1, which scales as √T in the number of rounds. Moreover, VCEE can be efficiently implemented with a polynomial number of calls to the supervised learning oracle AMO.

In Table 1, we compare this result to other applicable regret bounds in the most common setting, where the weights are uniform and all rankings are valid. VCEE enjoys the best regret bound amongst oracle-based approaches, representing an exponentially better dependence on the ranking length L than the purely bandit-feedback variant (Agarwal et al., 2014) and a polynomially better dependence on T than an ε-greedy scheme (see Theorem A in Appendix A). This improvement over ε-greedy is also verified by our experiments. Additionally, our bound matches that of Kale et al. (2010), who consider the harder adversarial setting but give an algorithm that requires an exponentially worse running time, as it enumerates the policy class, and cannot be efficiently implemented with an oracle.

Other results address the non-contextual setting, where the optimal bounds for both stochastic (Kveton et al., 2015) and adversarial (Audibert et al., 2014) semibandits scale as √(KLT). Thus, our bound may be optimal when all rankings are valid. However, these results apply even without requiring all rankings to be valid, so they can improve on our bound in structured action spaces. This discrepancy may not be fundamental, but it seems unavoidable with some degree of uniform exploration, as in all existing contextual bandit algorithms. A promising avenue to resolve this gap is to extend the work of Neu (2015), which gives high-probability bounds in the noncontextual setting without uniform exploration.

To summarize, our regret bound is similar to existing results on combinatorial (semi)bandits but represents a significant improvement over existing computationally efficient approaches.

4 Semibandits with unknown weights

We now consider a generalization of the contextual semibandit problem with a new challenge: the weight vector w* is unknown. This setting is substantially more difficult than the previous one, as it is no longer clear how to use the semibandit feedback to optimize the overall reward. Our result shows that the semibandit feedback can still be used effectively, even when the transformation is unknown. Throughout, we assume that the true weight vector w* has bounded Euclidean norm.

One restriction required by our analysis is the ability to play any ranking. Thus, all rankings must be valid in all contexts, which is a natural restriction in domains such as information retrieval and recommendation. We write μ for the uniform distribution over all rankings.

0:  Allowed failure probability . Assume .
1:  Set
2:  for  do
3:     Observe , play ( is uniform over all rankings), observe and .
4:  end for
5:  Let .
6:  .
7:  Set .
8:  Set .
9:  while  do
10:     . Observe , play , observe and .
11:     Set .
12:  end while
13:  Estimate weights (Least Squares).
14:  Optimize policy using importance weighted features.
15:  For every remaining round: observe , play .
Algorithm 2 EELS (Explore-Exploit Least Squares)

We propose an algorithm that explores first and then, adaptively, switches to exploitation. In the exploration phase, we play rankings uniformly at random, with the goal of accumulating enough information to learn the weight vector for effective policy optimization. Exploration lasts for a variable length of time governed by two parameters. The first controls the minimum number of rounds of the exploration phase, similar to ε-greedy-style schemes (Langford and Zhang, 2008). The adaptivity is implemented by the second parameter, which imposes a lower bound on the eigenvalues of the second-moment matrix of reward features observed during exploration. As a result, we only transition to the exploitation phase after this matrix has suitably large eigenvalues. Since we make no assumptions about the reward features, there is no a priori bound on how many rounds this may take. This is a departure from previous explore-first schemes, and it captures the difficulty of learning the weights when we observe the regression features only after taking an action.

After the exploration phase, we perform least-squares regression of the observed rewards onto the observed reward features to learn an estimate ŵ of the weight vector. We then use ŵ and the importance-weighted reward features from the exploration phase to find a policy π̂ with maximum empirical reward. The remaining rounds comprise the exploitation phase, where we play according to π̂.

The remaining question is how to set the eigenvalue threshold, which governs the length of the exploration phase. The ideal setting uses an unknown parameter of the uniform exploration distribution over simple actions. We form an unbiased estimator of this parameter and derive an upper bound from it. While the optimal threshold depends on the unknown parameter, the upper bound suffices.
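A schematic implementation of this adaptive exploration phase, assuming a hypothetical environment interface; the eigenvalue threshold and the minimum round count stand in for the two parameters discussed above.

    import numpy as np

    def explore_until_well_conditioned(env, L, lam_min, t_min, rng):
        """Uniform exploration that stops once the empirical second-moment matrix of
        observed reward features has minimum eigenvalue at least lam_min (and at
        least t_min rounds have elapsed), then fits the weights by least squares.

        env.play(ranking) is assumed to return (feedback_vector, reward).
        """
        M = np.zeros((L, L))          # running second-moment matrix of feedback vectors
        feats, rewards = [], []
        t = 0
        while t < t_min or np.linalg.eigvalsh(M).min() < lam_min:
            env.observe_context()
            ranking = env.random_ranking(rng)          # play uniformly at random
            feedback, reward = env.play(ranking)       # feedback has one entry per position
            M += np.outer(feedback, feedback)
            feats.append(feedback)
            rewards.append(reward)
            t += 1
        # least-squares estimate of the unknown weight vector
        w_hat, *_ = np.linalg.lstsq(np.asarray(feats), np.asarray(rewards), rcond=None)
        return w_hat, t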

For this algorithm, we prove the following regret bound. For any δ ∈ (0, 1), with probability at least 1 − δ, EELS has regret scaling as T^{2/3} (up to problem-dependent factors). EELS can be implemented efficiently with one call to the optimization oracle. The theorem shows that we can achieve sublinear regret with no dependence on the size of the composite action space, even when the weights are unknown. The only applicable alternatives from the literature are displayed in Table 1. First, oracle-based contextual bandits (Agarwal et al., 2014) achieve a better T-dependence, but both the regret and the number of oracle calls grow exponentially with L. Second, the deviation bound of Swaminathan et al. (2016), which exploits the reward structure but not the semibandit feedback, leads to an algorithm with regret that is polynomially worse in its dependence on the ranking length and the scale of the weights (see Appendix B). This observation is consistent with non-contextual results, which show that the value of semibandit information is only in polynomial factors (Audibert et al., 2014).

Of course, EELS has a sub-optimal dependence on T, although this is the best we are aware of for a computationally efficient algorithm in this setting. It is an interesting open question to achieve √T regret with unknown weights.

5 Proof sketches

We next sketch the arguments for our theorems. Full proofs are deferred to the appendices.

Proof of Theorem 1: The result generalizes Agarwal et al. (2014), and the proof structure is similar. For the regret bound, we use Eq. (5) to control the deviation of the empirical reward estimates that make up the empirical regret. A careful inductive argument leads to the following bounds:

Here is a universal constant and is defined in the pseudocode. Eq. (4) guarantees low empirical regret when playing according to , and the above inequalities also ensure small population regret. The cumulative regret is bounded by , which grows at the rate given in Theorem 1. The number of oracle calls is bounded by the analysis of the number of iterations of coordinate descent used to solve OP, via a potential argument similar to Agarwal et al. (2014).

Proof of Theorem 2: We analyze the exploration and exploitation phases individually, and then optimize and to balance these terms. For the exploration phase, the expected per-round regret can be bounded by either or , but the number of rounds depends on the minimum eigenvalue , with defined in Steps 8 and 11. However, the expected per-round 2nd-moment matrix, , has all eigenvalues at least . Thus, after rounds, we expect , so exploration lasts about rounds, yielding roughly

Now our choice of produces a benign dependence on and yields a bound.

For the exploitation phase, we bound the error between the empirical reward estimates and the true reward. Since the weight estimate ŵ is fixed in this phase, we obtain a bound with two terms: the first captures the error from using the importance-weighted feature vector, while the second uses a bound on the error of the least-squares weight estimate from the analysis of linear regression (under the eigenvalue condition that ends the exploration phase).

This high-level argument ignores several important details. First, we must show that using the upper bound instead of the optimal, unknown parameter setting does not affect the regret. Secondly, since the termination condition for the exploration phase depends on a random variable, we must derive a high-probability bound on the number of exploration rounds to control the regret. Obtaining this bound requires a careful application of the matrix Bernstein inequality to certify that the second-moment matrix has large eigenvalues.

6 Experimental Results

Our experiments compare VCEE with existing alternatives. As VCEE generalizes the algorithm of Agarwal et al. (2014), our experiments also provide insights into oracle-based contextual bandit approaches and this is the first detailed empirical study of such algorithms. The weight vector in our datasets was known, so we do not evaluate EELS. This section contains a high-level description of our experimental setup, with details on our implementation, baseline algorithms, and policy classes deferred to Appendix C. Software is available at http://github.com/akshaykr/oracle_cb.

Data: We used two large-scale learning-to-rank datasets: MSLR (MSLR) and all folds of the Yahoo! Learning-to-Rank dataset (Chapelle and Chang, 2011). Both datasets have over 30k unique queries, each with a varying number of documents that are annotated with a relevance score in {0, 1, 2, 3, 4}. Each query-document pair has a feature vector that we use to define our policy class. For each dataset, we fix the number K of candidate documents per query and the ranking length L. The goal is to maximize the sum of relevances of the shown documents, and the individual relevances are the semibandit feedback. All algorithms make a single pass over the queries.

Algorithms: We compare VCEE, implemented with an epoch schedule for solving OP on a sparse set of rounds (justified by Agarwal et al. (2014)), with two baselines. First is the ε-Greedy approach (Langford and Zhang, 2008), with a constant but tuned ε. This algorithm explores uniformly with probability ε and follows the empirically best policy otherwise. The empirically best policy is updated on the same schedule.

We also compare against a semibandit version of LinUCB (Qin et al., 2014). This algorithm models the semibandit feedback as linearly related to the query-document features and learns this relationship while selecting composite actions using an upper-confidence-bound strategy. Specifically, the algorithm maintains a weight vector formed by solving a ridge regression problem with the semibandit feedback as regression targets. At each round, the algorithm scores every document by its predicted feedback plus an exploration bonus derived from the feature second-moment matrix and scaled by a tuning parameter α, and chooses the documents with the highest scores. For computational reasons, we only update the regression statistics every 100 rounds.

Oracle implementation: LinUCB only works with a linear policy class. VCEE and ε-Greedy work with arbitrary classes. Here, we consider three: linear functions and depth-2 and depth-5 gradient boosted regression trees (abbreviated Lin, GB2, and GB5). Both GB classes use 50 trees. Precise details of how we instantiate the supervised learning oracle can be found in Appendix C.

Figure 1: Average reward as a function of the number of interactions for VCEE, ε-Greedy, and LinUCB on the MSLR (left) and Yahoo! (right) learning-to-rank datasets.

Parameter tuning: Each algorithm has a parameter governing the explore-exploit tradeoff. For VCEE, we tune the scale of the smoothing parameter; in ε-Greedy, we tune ε; and in LinUCB, we tune α. We ran each algorithm for 10 repetitions, for each of ten logarithmically spaced parameter values.

Results: In Figure 1, we plot the average reward (cumulative reward up to round t divided by t) on both datasets. For each t, we use the parameter that achieves the best average reward across the 10 repetitions at that t. Thus, for each t, we are showing the performance of each algorithm tuned to maximize reward over t rounds. We found VCEE to be fairly stable to parameter tuning, so for VC-GB5 we use a single parameter value for all t on both datasets. To simplify the plot, we show confidence bands at twice the standard error only for LinUCB and VC-GB5.

Qualitatively, both datasets reveal similar phenomena. First, when using the same policy class, VCEE consistently outperforms ε-Greedy. This agrees with our theory, as VCEE achieves √T-type regret, while a tuned ε-Greedy achieves at best a T^{2/3} rate.

Secondly, if we use a rich policy class, VCEE can significantly improve on LinUCB, the empirical state-of-the-art and one of few practical alternatives to ε-Greedy. Of course, since ε-Greedy does not outperform LinUCB, the tailored exploration of VCEE is critical. Thus, the combination of these two properties is key to improved performance on these datasets. VCEE is the only contextual semibandit algorithm we are aware of that performs adaptive exploration and is agnostic to the policy representation. Note that LinUCB is quite effective and outperforms VCEE with a linear class. One possible explanation for this behavior is that LinUCB, by directly modeling the reward, searches the policy space more effectively than VCEE, which uses an approximate oracle implementation.

7 Discussion

This paper develops oracle-based algorithms for contextual semibandits, both with known and unknown weights. In both cases, our algorithms achieve the best known regret bounds for computationally efficient procedures. Our empirical evaluation of VCEE clearly demonstrates the advantage of sophisticated oracle-based approaches over both parametric approaches and naive exploration. To our knowledge, this is the first detailed empirical evaluation of oracle-based contextual bandit or semibandit learning. We close with some promising directions for future work:

  • With known weights, can we obtain the same regret as in the unstructured case even with structured action spaces? This may require a new contextual bandit algorithm that does not use uniform smoothing.

  • With unknown weights, can we achieve a √T dependence while exploiting semibandit feedback?

Acknowledgements

This work was carried out while AK was at Microsoft Research.

Appendix A Analysis of ε-Greedy with Known Weights

We analyze the ε-greedy algorithm (Algorithm 3) in the known-weights setting when all rankings are valid. This algorithm differs from the one we use in our experiments in that it is an explore-first variant, exploring for the first several rounds and then exploiting for the remainder. In our experiments, we use a variant where at each round we explore with probability ε and exploit with probability 1 − ε. This latter version has the same regret bound, via an argument similar to that of Langford and Zhang (2008).

0:  Allowed failure probability .
  Set .
  Let be the uniform distribution over all rankings.
  For , observe , play , observe and .
  Optimize policy using importance-weighted features.
  For every remaining round: observe , play .
Algorithm 3 ε-Greedy for Contextual Semibandits with Known Weights
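For concreteness, here is a sketch of the explore-first structure of Algorithm 3, using the hypothetical environment and oracle interfaces from the earlier sketches; the experimental variant instead explores with probability ε on every round.

    def explore_first_greedy(env, argmax_oracle, policies, n_explore, w, rng):
        """Explore-first ε-greedy: uniform exploration for n_explore rounds, then
        exploit the empirically best policy found by a single oracle call.

        env and argmax_oracle follow the interfaces sketched earlier; w is the
        known weight vector.
        """
        history = []
        for _ in range(n_explore):
            x = env.observe_context()
            ranking = env.random_ranking(rng)
            feedback, _ = env.play(ranking)                     # semibandit feedback
            y_hat = env.importance_weighted(ranking, feedback)  # 1/q importance weighting
            history.append((x, y_hat, w))
        pi_hat = argmax_oracle(policies, history)               # single AMO call
        while env.rounds_remaining():
            x = env.observe_context()
            env.play(pi_hat(x))                                 # exploit for the rest
        return pi_hat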

For any , when , with probability at least , the regret of Algorithm 3 is at most .

Proof.

The proof relies on a uniform deviation bound similar to Lemma F.5, which we use in the analysis of EELS. We first prove that for any δ ∈ (0, 1), with probability at least 1 − δ, for all policies π, we have

(6)

This deviation bound is a consequence of Bernstein's inequality. The quantity on the left-hand side is an average of terms, all with expectation zero, because the importance-weighted reward estimate is unbiased. The range of each term is bounded via the Cauchy-Schwarz inequality, because under uniform exploration the coordinates of the importance-weighted feature vectors and of the weight vector are bounded, and both are K-dimensional. The variance is bounded by the second moment, which we control using the fact that each simple action is included with probability L/K under uniform exploration. Plugging these bounds into Bernstein's inequality gives the deviation bound of Eq. (6).

Now we can prove the theorem. Eq. (6) ensures that after collecting samples, the expected reward of the empirical reward maximizer is close to , the best achievable reward. The difference between these two is at most twice the right-hand side of the deviation bound. If we perform uniform exploration for rounds, we are ensured that with probability at least the regret is at most

Regret

For our setting of , the bound is

Under the assumption on , the second term is lower order, which proves the result. ∎

Appendix B Comparisons for EELS

In this section we do a detailed comparison of our Theorem 2 to the paper of Swaminathan et al. (2016), which is the most directly applicable result. We use notation consistent with our paper.

Swaminathan et al. (2016) focus on off-policy evaluation in a more challenging setting where no semibandit feedback is provided. Specifically, in their setting, in each round, the learner observes a context , chooses a composite action (as we do here) and receives reward . They assume that the reward decomposes linearly across the action-position pairs as

With this assumption, and when exploration is done uniformly, they provide off-policy reward estimation bounds of the form

This bound holds for any policy with probability at least for any . (See Theorem 3 and the following discussion in Swaminathan et al. (2016).) Note that this assumption generalizes our unknown weights setting, since we can always define .

To do an appropriate comparison, we first need to adjust the scaling of the rewards. While Swaminathan et al. (2016) assume that rewards are bounded in , we only assume bounded ’s and bounded noise. Consequently, we need to adjust their bound to incorporate this scaling. If the rewards are scaled to lie in , their bound becomes

This deviation bound can be turned into a low-regret algorithm by exploring for the first rounds, finding an empirically best policy, and using that policy for the remaining rounds. Optimizing the bound in leads to a -style regret bound: The approach of Swaminathan et al. (2016) with rewards in leads to an algorithm with regret bound

This algorithm can be applied as is to our setting, so it is worth comparing it to EELS. According to Theorem 2, EELS has a regret bound

The dependence on , and match between the two algorithms, so we are left with and the scale factors . This comparison is somewhat subtle and we use two different arguments. The first finds a conservative value for in Fact B in terms of and . This is the regret bound one would obtain by using the approach of Swaminathan et al. (2016) in our precise setting, ignoring the semibandit feedback, but with known weight-vector bound . The second comparison finds a conservative value of in terms of and .

For the first comparison, recall that our setting makes no assumptions on the scale of the reward, except that the noise is bounded in , so our setting never admits . If we begin with a setting of , we need to conservatively set , which gives the dependence

The EELS bound is never worse than the bound in Fact B according to this comparison. At , the two bounds are of the same order, which is . For , the EELS bound is at most , while for the first term in the EELS bound is at most the first term in the Swaminathan et al. (2016) bound. In both cases, the EELS bound is superior. Finally when , the second term dominates our bound, so EELS demonstrates an improvement.

For the second comparison, since our setting has the noise bounded in , assume that and that the total reward is scaled in as in Fact B. If we want to allow any , the tightest setting of is between and (depending on the structure of the positive and negative coordinates of ). For simplicity, assume is a bound on . Since the EELS bound depends on , a bound on the Euclidean norm of , we use to obtain a conservative setting of . This gives the dependence

Since , the EELS bound is superior whenever . Moreover, if , i.e., at least positions are relevant, the second term dominates our bound, and we improve by a factor of . The EELS bound is inferior when , which corresponds to a high-sparsity case since is also a bound on in this comparison.

Appendix C Implementation Details

C.1 Implementation of VCEE

VCEE is implemented as stated in Algorithm 1 with some modifications, primarily to account for an imperfect oracle. OP is solved using the coordinate descent procedure described in Appendix E.

In our implementation we ignore the logarithmic factor in the smoothing-parameter schedule; instead, we tune a constant multiplier on the schedule, which can compensate for the absence of this factor. This additionally means that we ignore the failure probability parameter δ. Otherwise, all other parameters and constants are set as described in Algorithm 1 and OP.

As mentioned in Section 6, we implement AMO via a reduction to squared-loss regression. There are many possibilities for this reduction. In our case, we specify a squared-loss regression problem via a dataset D of tuples (x, A, v, h), where x is a context, A is any list of actions, v assigns a value to each action, and h assigns an importance weight to each action. Since in our experiments the weights are uniform, we do not need to pass along the weight vectors described in Eq. (2).

Given such a dataset D, we minimize a weighted squared-loss objective over a regression class F,

    min_{f∈F} Σ_{(x,A,v,h)∈D} Σ_{a∈A} h(a) · ( f(φ(x,a)) − v(a) )²,    (7)

where φ(x,a) is the feature vector associated with the given query-document pair. Note that we only include terms corresponding to the simple actions in A for each example. The learned regression function is associated with the greedy policy that chooses the best valid ranking according to the sum of rewards of individual actions as predicted by the regressor on the current context.

We access this oracle with two different kinds of datasets. When we access AMO to find the empirically best policy, we only use the history of the interaction. In this case, we only regress onto the chosen actions in the history and let h encode their importance weights. More formally, suppose that at round t we observe context x_t, choose composite action A_t, and receive feedback for its actions. We create a single example (x_t, A_t, v, h) in which v records the observed feedback for each chosen action and h records that action's importance weight (the inverse of its selection probability). Observe that when this sample is passed into Eq. (7), it leads to a different objective than if we regressed directly onto the importance-weighted reward features.

We also create datasets to verify the variance constraint within OP. For this, we use the AMO in a more direct way by setting A to be the list of all actions, letting v be the importance-weighted feature vector, and setting h to be identically 1.

We use this particular implementation because leaving the importance weights inside the square loss term introduces additional variance, which we would like to avoid.
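A sketch of this reduction, assuming a hypothetical featurize(x, a) map for query-document pairs; it flattens the per-action examples and passes the importance weights as sample weights so that they stay outside the squared term, as in Eq. (7).

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def fit_regression_oracle(dataset, featurize):
        """Weighted squared-loss reduction: one regression example per (context, action) pair.

        dataset is a list of (context, actions, values, weights) tuples and
        featurize(x, a) returns the feature vector of a query-document pair.
        """
        X, targets, sample_w = [], [], []
        for x, actions, values, weights in dataset:
            for a in actions:
                X.append(featurize(x, a))
                targets.append(values[a])
                sample_w.append(weights[a])
        reg = LinearRegression()
        reg.fit(np.asarray(X), np.asarray(targets), sample_weight=np.asarray(sample_w))
        return reg

    def greedy_policy(reg, featurize, L):
        """Policy induced by the regressor: rank actions by predicted per-action reward."""
        def pi(x, actions):
            scores = reg.predict(np.asarray([featurize(x, a) for a in actions]))
            return [actions[i] for i in np.argsort(-scores)[:L]]
        return pi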

The imperfect oracle introduces one issue that needs to be corrected. Since the oracle is not guaranteed to find the maximizing policy on every dataset, in the t-th round of the algorithm we may encounter a policy whose empirical reward exceeds that of the current leader (i.e., a policy with negative empirical regret), which can cause the coordinate descent procedure to loop indefinitely. Of course, if we ever find such a policy, it means that we have found a better policy, so we simply switch the leader. We found that with this intuitive change, the coordinate descent procedure always terminates in a few iterations.

C.2 Implementation of ε-Greedy

Recall that we run a variant of ε-Greedy where at each round we explore with probability ε and exploit with probability 1 − ε, which is slightly different from the explore-first algorithm analyzed in Appendix A.

For ε-Greedy, we also use the oracle defined in Eq. (7). This algorithm only accesses the oracle to find the empirically best policy, and we do this in the same way as VCEE does, i.e., we only regress onto actions that were actually selected, with importance weights encoded via h. We use all of the data, including the data from exploitation rounds, with importance weighting.

C.3 Implementation of LinUCB

The semibandit version of LinUCB uses ridge regression to predict the semibandit feedback from the query-document features. If the feature vectors are d-dimensional, we start with M_1 = I (the d×d identity) and b_1 = 0, the all-zeros vector. At round t, we receive the query-document feature vectors x_{t,a} for the current query and score each document a by

    ŵ_t·x_{t,a} + α √( x_{t,a}ᵀ M_t⁻¹ x_{t,a} ),   where ŵ_t = M_t⁻¹ b_t.

Since in our experiments the weights are uniform and all rankings are valid, the order of the documents is irrelevant and the best ranking consists of the top L simple actions with the largest values of the above "regularized score". Here M_t is the (regularized) feature second-moment matrix and α is a parameter of the algorithm that we tune.

After selecting a ranking, we collect the semibandit feedback y_t(a) for each chosen document a. The standard implementation would perform the update

    M_{t+1} = M_t + Σ_{a chosen} x_{t,a} x_{t,a}ᵀ,   b_{t+1} = b_t + Σ_{a chosen} y_t(a) x_{t,a},

which is the standard online ridge regression update. For computational reasons, we only update M and b every 100 rounds, using all of the data collected so far; in the intervening rounds, M and b are left unchanged.
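A compact sketch of this baseline; the class name and interfaces are ours, and the batched updating described above is folded into an ordinary per-round update for brevity.

    import numpy as np

    class SemibanditLinUCB:
        """Ridge-regression UCB over query-document features, as described above.

        Scores each document by w_hat·x + alpha * sqrt(x' M^{-1} x) and plays the
        top-L documents; M and b accumulate the standard online ridge statistics.
        """
        def __init__(self, d, L, alpha):
            self.M = np.eye(d)          # regularized feature second-moment matrix
            self.b = np.zeros(d)
            self.L, self.alpha = L, alpha

        def select(self, doc_features):
            M_inv = np.linalg.inv(self.M)
            w_hat = M_inv @ self.b
            scores = doc_features @ w_hat + self.alpha * np.sqrt(
                np.einsum('id,dk,ik->i', doc_features, M_inv, doc_features))
            return np.argsort(-scores)[:self.L]      # top-L documents by regularized score

        def update(self, doc_features, feedback):
            # online ridge update; in the experiments this is only applied in batches
            for x, y in zip(doc_features, feedback):
                self.M += np.outer(x, x)
                self.b += y * x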

C.4 Policy Classes

As the AMO for both VCEE and ε-Greedy, we use the default implementations of regression with various function classes in scikit-learn version 0.17. We instantiate scikit-learn model objects and use the fit() and predict() routines. The model objects we use are:

  • sklearn.linear_model.LinearRegression()

  • sklearn.ensemble.GradientBoostingRegressor(n_estimators=50,max_depth=2)

  • sklearn.ensemble.GradientBoostingRegressor(n_estimators=50,max_depth=5)

All three objects accommodate weighted least-squares objectives as required by Eq. (7).
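All three estimators accept per-example weights through the sample_weight argument of fit(), which is one way to pass in the weighted objective of Eq. (7); a minimal illustration with synthetic data:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # minimal illustration: fit() accepts per-example importance weights directly
    X = np.random.rand(100, 5)
    y = np.random.rand(100)
    w = np.random.rand(100)           # importance weights, as in Eq. (7)
    model = GradientBoostingRegressor(n_estimators=50, max_depth=2)
    model.fit(X, y, sample_weight=w)
    preds = model.predict(X)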

Appendix D Proof of Regret Bound in Theorem 1

The proof hinges on two uniform deviation bounds, and then a careful inductive analysis of the regret using the OP. We only need our two deviation bounds to hold for the rounds in which . Let . These rounds then start at

Note that since and . From the definition of , we have for all :

(8)

The first deviation bound shows that the variance estimates used in Eq. (5) are suitable estimators for the true variance of the distribution. To state this deviation bound, we need some definitions: