Modeling and Simultaneously Removing Bias via Adversarial Neural Networks

04/18/2018 ∙ by John Moore, et al. ∙ Microsoft

In real world systems, the predictions of deployed Machine Learned models affect the training data available to build subsequent models. This introduces a bias in the training data that needs to be addressed. Existing solutions to this problem attempt to resolve the problem by either casting this in the reinforcement learning framework or by quantifying the bias and re-weighting the loss functions. In this work, we develop a novel Adversarial Neural Network (ANN) model, an alternative approach which creates a representation of the data that is invariant to the bias. We take the Paid Search auction as our working example and ad display position features as the confounding features for this setting. We show the success of this approach empirically on both synthetic data as well as real world paid search auction data from a major search engine.







1. Introduction

A central task in an online advertising system is estimating the potential click-through rate (CTR) of an ad given a query, or

PClick. Using this PClick estimate and an advertiser's bid, we run an auction to determine where ads are placed on the page. These ad impressions and their corresponding features are then used to train new PClick models (potentially in an online fashion (McMahan et al., 2013)). Online advertising therefore suffers from a feedback loop: previously shown ads dominate the training set, and ads higher on the page comprise the majority of the positive samples (clicks). This bias makes it difficult to estimate a good PClick across all ads, queries and positions, because features correlated with high click-through rate are overrepresented in the training data.
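The actual auction mechanism is proprietary; as a hedged, textbook-style simplification (the `rank_ads` helper and the bid-times-PClick score are our illustration, not the paper's mechanism), the allocation step can be sketched as ranking candidates by expected revenue per impression:

```python
# Hypothetical simplification: rank candidate ads by bid * PClick.
# The real auction phase is more complex than this sketch.
def rank_ads(candidates):
    """candidates: list of (ad_id, bid, pclick) tuples.
    Returns ads ordered by expected revenue per impression."""
    return sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

slate = rank_ads([("a", 2.0, 0.05), ("b", 1.0, 0.20), ("c", 3.0, 0.01)])
# "b" ranks first: 1.0 * 0.20 = 0.20 beats 2.0 * 0.05 and 3.0 * 0.01
```

The ads at the top of the resulting slate are the ones shown, clicked, and fed back into training, which is exactly how the feedback loop described above arises.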

Figure 1. The Adversarial Neural Network representation (best viewed in color). The green area is optimized via the noisy loss, which predicts the click variable y and includes a regularizer pushing the Bias network's position-CTR predictions toward noise. Conversely, the orange parameters are optimized with respect to the Bias network's loss.

We hypothesize that the position of an ad on a page (e.g., mainline, sidebar or bottom) can summarize a large portion of the PClick bias. In effect, we aim to learn a PClick representation that is invariant to the position in which an ad is shown; that is, all potential ads retain a single relative ranking given a position on the page. Although we can easily enforce this on the position feature itself by using a linear function, the intrinsic bias of the other features relative to position is not easily removed.

To learn this position-invariant PClick model, we turn to adversarial neural networks (ANNs). ANNs are models with competing loss functions that are optimized in tandem (e.g., (Goodfellow et al., 2014)); recent work (Abadi and Andersen, 2016; Louppe et al., 2016) has used them to hide or encrypt data. Our ANN representation consists of four networks: the Base, Prediction, Bias, and Bypass networks (Figure 1). The final PClick prediction used online is a linear combination of the outputs of the Bypass and Prediction networks. During training, however, these predictors compete with the Bias network adversary: the Bias network attempts to predict the position CTR using only the low-rank vector produced by the Base network. Correspondingly, the Prediction, Bypass and Base networks optimize an augmented loss function that penalizes the Bias network. The result is that the Base network's output vector is largely uncorrelated with position before being passed into the Prediction network.

Other approaches to overcoming position/display bias in online advertising exist, such as multi-armed bandit methods that generate less biased training data (Thompson, 1933; Chapelle et al., 2014) and covariate shift corrections (Shimodaira, 2000). However, each of these requires sufficiently large samples from an exploration set to produce better estimates. In practice it is difficult to obtain enough exploration data, as exploration typically has a significant revenue cost. Our ANN approach requires no exploration and can be applied to an existing dataset.

To test the efficacy of the model, we show evaluations on real-world data and synthetic experiments. We generate two sets of synthetic data to mimic the feedback loop present in an online Ads system, and show that systematic and user position biases are handled by the ANN to produce more accurate estimates.

We also demonstrate that there is a tradeoff between removing bias and optimizing for CTR. In evaluations, we show that by leveraging this tradeoff the ANN architecture recovers more accurate estimates on unbiased data in both synthetic and real-world settings.

Our main contributions are the following:

  • A novel ANN representation for removing position bias in online advertising.

  • A differentiable squared-covariance loss that enables adversarial optimization over bias components.

  • A bypass structure that models position separately and linearly.

  • Detailed synthetic data generation evaluations that demonstrate the feedback problem present in online Ads systems.

2. Position Bias in Paid Search

The feedback problem is common in ML applications. To demonstrate it, we focus on click-through rate (PClick) prediction in paid search advertising. A standard ad selection stack consists of a selection system, a model phase, and an auction phase (Hillard et al., 2011). The selection system determines the subset of ads that are passed to the model. The model attempts to estimate the click probability over the full distribution D, the entire Ads, Queries, and Positions space; specifically, we estimate P(Click | Ad, Query, Position). In the auction phase, advertisers bid for keywords that are matched against queries, and ads and their positions are finally selected given PClicks and advertiser bids. In this work we are mainly concerned with the model phase, i.e., the PClick model.

It is difficult to estimate PClick over all of D for two reasons. First, an online Ads system samples from a small, biased part of D. A machine learning model estimates PClick using a variety of features across ad and query. Many of the rich features are counting features, which aggregate click information over an ad and query's past (e.g., the percentage of clicks that this Ad/Query combination yielded historically). Query-Ad pairs that are frequently presented in the Ads stack have rich, informative feature values, whereas Query-Ad pairs that have been seen rarely or never do not. It is therefore naturally hard for a model to promote the ranking of a Query-Ad pair it has not shown online before, and the feedback loop continues.

Second, a feedback loop forms between the training data and the PClick model: the new training data (the ads subsequently shown online) is determined by the previous model's rankings, and the new PClick model is trained on that data. The resulting Feedback Loop (FL) data is thus biased toward past models and training data.

The position click-through rate is the probability that an ad is clicked given only its position on the page, calculated by averaging the CTRs of ads shown online in a given position. Ads in higher-ranked positions typically yield higher CTRs. Prior work has attempted to model or explain why position bias exists (Craswell et al., 2008). In our setting, we hypothesize that the position CTRs of past ads summarize much of the bias present in an online ads machine learning system, since ads with higher position CTRs are more likely to have an overrepresentation of features correlated with high PClicks.
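Computing the position CTR described above is a simple group-by average over logged impressions; a minimal sketch (the field layout is our assumption, not the paper's log format):

```python
from collections import defaultdict

def position_ctr(impressions):
    """impressions: iterable of (position, clicked) pairs.
    Returns the average CTR observed at each position."""
    clicks, shows = defaultdict(int), defaultdict(int)
    for position, clicked in impressions:
        shows[position] += 1
        clicks[position] += int(clicked)
    return {p: clicks[p] / shows[p] for p in shows}

ctr = position_ctr([(1, True), (1, False), (1, True), (2, False), (2, True)])
# position 1: 2 clicks / 3 shows; position 2: 1 click / 2 shows
```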

In the ideal scenario, a PClick model would be trained only on a large amount of randomly and uniformly sampled (RUS) data from D. A central goal of an online Ads stack, though, is ad revenue, and in practice it is not possible to obtain a substantially large randomly sampled dataset, since it is costly to show many randomly paired ads and queries online.

3. Background

3.1. Online Advertising

These issues with biased FL training data could be mitigated by framing the problem in terms of multi-armed bandits (Thompson, 1933). The central issue in the multi-armed bandit problem is finding a reasonable exploration-exploitation tradeoff.

In the Paid Search Advertising context, pulling an arm corresponds to selecting an ad to display (Chapelle et al., 2014). Exploration practically means allowing ads with lower click probability estimates to sometimes appear online over ads with the highest estimates leading to a potential loss of short-term revenue. Exploitation is preferring ads with the highest estimates typically resulting in immediate ad revenue gains.

Bandit algorithms have seen success in the display advertising literature and related areas such as news recommendation (Tang et al., 2014; Li et al., 2010). Thompson sampling, which draws an arm according to its probability of being optimal, is a popular method in this literature and is preferred for its performance and simplicity (Thompson, 1933; Chapelle and Li, 2011; Chapelle et al., 2014).

These methods work best under the assumption that enough ads could be explored. In an online machine learning system, this is increasingly not the case as medium-term and even short-term revenue losses are not acceptable. A small sample of exploration data can be obtained, but it is generally too costly to obtain enough exploration data to have a substantial impact on the training set. Therefore, mostly biased FL data is still used for training a model, and these issues still remain.

Another approach to tackling this problem is answering the counterfactual question (Bottou et al., 2013). Bottou et al. show how to utilize counterfactual methodology from the causal inference literature. Their methodology does not directly optimize performance on data sampled from D; rather, it estimates how different PClick models would have performed in the past online. The authors develop importance sampling techniques that estimate counterfactual expectations of interest with confidence interval bounds.

Covariate shift is a related setting in which P(Y | X) is assumed to remain the same across the training and testing distributions (Y are labels and X are features), while P(X) shifts from training to testing. Similar to the counterfactual literature, there is work that rebalances the loss function to reflect the test distribution by weighting each instance by P_test(x) / P_train(x) (Shimodaira, 2000). However, estimating these weights is difficult whenever the test set does not have sufficient samples, and the RUS dataset in our setting is not large enough to represent the entire distribution D.
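The reweighting idea can be sketched as importance-weighted training, where each example's loss is scaled by the density ratio P_test(x)/P_train(x). In this sketch the ratios are supplied directly; estimating them reliably is exactly the difficulty the text notes, and `weighted_log_loss` is our illustration, not Shimodaira's estimator:

```python
import math

def weighted_log_loss(examples):
    """examples: list of (p_hat, y, weight) triples, where weight
    approximates p_test(x) / p_train(x) for that example.
    Returns the importance-weighted binary cross entropy."""
    total_w = sum(w for _, _, w in examples)
    loss = 0.0
    for p_hat, y, w in examples:
        # Each example's log loss is scaled by its density ratio.
        loss += -w * (y * math.log(p_hat) + (1 - y) * math.log(1 - p_hat))
    return loss / total_w

# An example that is rare in training but common in testing gets weight > 1,
# so errors on it count more toward the training objective.
loss = weighted_log_loss([(0.9, 1, 1.0), (0.2, 0, 3.0)])
```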

3.2. Adversarial Networks

Adversarial networks have recently become popular, especially as part of generative models in the context of Generative Adversarial Networks (GANs). In GANs, the goal is to build a generative model that creates realistic examples from some domain by simultaneously optimizing two loss functions, one for a generator and one for a discriminator network (Goodfellow et al., 2014).

Adversarial networks are used for other purposes as well. (Abadi and Andersen, 2016) proposed using adversarial networks to provide a level of encryption for data: the goal is to hide information from an adversary while still delivering it to a designated receiver. Neural cryptographic systems do so by optimizing two loss functions in an adversarial fashion: the first can be seen as encrypting the data, while the second adversarially attempts to decrypt it. An absolute covariance term can be defined as part of the encryption loss function.

In addition to encrypting data, adversarial optimization has been proposed for dealing with nuisance variables, i.e., variables that should not be correlated with the output prediction distribution (Louppe et al., 2016). This method uses a similar architecture and optimization technique to GANs; however, instead of generating data, the first network is penalized if its predictions can be used to predict the nuisance variable, and, similar to the discriminator, the second network attempts to predict the nuisance variable. This work is distinct from ours for two reasons. We are not merely interested in decorrelating predictions from position bias; we want a partial representation of the features that is decorrelated from the bias while the bias is still modeled. Furthermore, our training distribution, derived from an online Ads stack, is a biased sample from D.

4. Method Description

We develop an Adversarial Neural Network (ANN) architecture that produces accurate PClick predictions given a biased Feedback Loop training set. We assume a continuous-valued feature, z, that summarizes this bias; in the Ads context we define z as the position CTR. The input features, X, are typically weakly correlated with z.

4.1. Network Architecture

The ANN representation consists of a Base, a Prediction, a Bias, and a Bypass network, shown in Figure 1, with parameters θ_base, θ_pred, θ_bias, and θ_byp, respectively. The first component, the Base and Prediction networks, is optimized to be independent of z, while the second component, the Bypass network, depends only on z. By decomposing the model in this way, the ANN can learn from the data even when the bias exists.

The Bypass structure directly uses z by incorporating its last hidden state as a linear term in the final sigmoid prediction function of Equation 1. The final hidden states used for predicting y consist of a linear combination of activations from both the Prediction and the Bypass network:

    ŷ = σ(W_pred · h_pred + W_byp · h_byp + c)        (1)

where h_pred are the final hidden activations of the Prediction network, W_pred are the weights applied to them, h_byp and W_byp are defined analogously for the Bypass network, and c is a standard linear offset bias term.

This linear bypass strategy allows the ANN to model z separately and preserves the relative ranking of ads across values of z (e.g., an ad will have a higher click prediction if it has a higher position CTR), while still directly incorporating z.
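A minimal numpy sketch of this combination (the layer sizes, variable names, and concrete values below are ours, chosen only to illustrate the linear form of the bypass term):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def predict(h_pred, w_pred, h_byp, w_byp, c):
    """Equation 1: sigmoid of a linear combination of the Prediction
    network's final activations h_pred and the Bypass network's final
    activations h_byp (driven only by the position CTR z), plus offset c."""
    return sigmoid(h_pred @ w_pred + h_byp @ w_byp + c)

h_pred = np.array([0.3, -0.1])   # final activations of the Prediction network
h_byp = np.array([0.5])          # final activation of the Bypass network
p = predict(h_pred, np.array([1.0, 2.0]), h_byp, np.array([0.4]), -0.2)
```

Because the bypass term enters the logit linearly, changing z only shifts every ad's logit by the same amount for a fixed position, which is how the relative ranking of ads is preserved.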

Given X, the Base network outputs a set of hidden activations, h_base, that are fed as inputs to both the Prediction and Bias networks, as illustrated in Figure 1. h_base is used to predict y well while predicting z poorly.

4.2. Loss Functions

To produce the desired hidden activations, we minimize two competing loss functions: the bias loss, L_bias, and the noisy loss, L_noisy.

    L_bias = (1/m) Σ_i (ẑ_i − z_i)²        (2)

The bias loss function is defined in Equation 2, where ẑ_i is the Bias network's prediction for instance i of an m-instance minibatch. It measures how well the Bias network can predict z given h_base. In Figure 1, only the Bias network (orange) and θ_bias are optimized with respect to this loss function, while all other parameters are held constant.

Equation 3 describes the noisy loss function, which optimizes over θ_base, θ_pred, and θ_byp while keeping θ_bias constant. This loss contains a prediction term, L_pred, which can be defined in various ways; in this work we define L_pred as binary cross entropy.

    L_noisy = (1 − λ) · L_pred + λ · Cov²(ẑ, z)        (3)

Cov²(ẑ, z) is a function of the sample covariance and is computed from the means of the ẑ's and z's in a given minibatch:

    Cov²(ẑ, z) = ( (1/m) Σ_i (ẑ_i − mean(ẑ)) (z_i − mean(z)) )²        (4)

Cov²(ẑ, z) represents the distance the Bias network's predictions are from predicting noise. The squared covariance is 0 when ẑ is neither positively nor negatively correlated with z; when there is high correlation, the model is heavily penalized as long as λ is sufficiently large.

The resulting objective function therefore penalizes the model both for poor click predictions and for the Bias network's ability to recover z (where an ad was placed on the page). λ controls how much each term is emphasized relative to the other.
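The two loss terms can be sketched in a few lines of numpy (function names are ours, and the exact constants of the paper's formulation may differ from this reconstruction):

```python
import numpy as np

def sq_cov(z_hat, z):
    """Squared sample covariance between the Bias network's predictions
    z_hat and the true position CTRs z, over one minibatch."""
    return np.mean((z_hat - z_hat.mean()) * (z - z.mean())) ** 2

def noisy_loss(p_hat, y, z_hat, z, lam):
    """Binary cross entropy on clicks plus the squared-covariance
    penalty, traded off by lambda (Equation 3 as reconstructed)."""
    eps = 1e-12  # numerical guard for log
    bce = -np.mean(y * np.log(p_hat + eps) + (1 - y) * np.log(1 - p_hat + eps))
    return (1 - lam) * bce + lam * sq_cov(z_hat, z)

z = np.array([0.46, 0.41, 0.46, 0.41])
# If z_hat carries no information about z (here: constant), the penalty is 0:
assert sq_cov(np.array([0.2, 0.2, 0.2, 0.2]), z) == 0.0
```

Minimizing `noisy_loss` thus pushes h_base toward activations from which z cannot be linearly recovered, while still predicting clicks.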

4.3. Learning

In practice, the covariance is calculated within each minibatch individually, with means computed per minibatch. The two loss functions, L_bias and L_noisy, are optimized alternately via stochastic gradient descent on the same minibatch (lines 4-5 of Algorithm 1).

4.4. Online Inference

To predict in an online setting or on a test set, we discard the Bias network and use the other three networks to produce predictions ŷ. In the context of an online Ads system, we set z to the Position 1 CTR for data not seen online in the past; this value is fed into the Bypass network.

1:   Create Base, Prediction, Bias, and Bypass networks with parameters θ_base, θ_pred, θ_bias, θ_byp
2:   Split X, y, z into minibatches
3:   repeat
4:       Optimize L_noisy with respect to θ_base, θ_pred, θ_byp
5:       Optimize L_bias with respect to θ_bias
6:   until converged
7:   return DNN
Algorithm 1 Train(X, y, z, λ)
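The alternation in Algorithm 1 can be sketched as pure control flow (the optimizer steps are stand-ins that only record which parameter group would be updated; the gradient math is omitted):

```python
# Control-flow sketch of Algorithm 1: on every minibatch, one step on
# L_noisy (Base/Prediction/Bypass params) is followed by one step on
# L_bias (Bias params), using the SAME minibatch for both.
def train(minibatches, epochs):
    log = []
    for _ in range(epochs):
        for batch in minibatches:
            log.append(("noisy", batch))  # line 4: update θ_base, θ_pred, θ_byp
            log.append(("bias", batch))   # line 5: update θ_bias
    return log

log = train(minibatches=[0, 1], epochs=1)
```

The key point is that the adversary sees exactly the activations the main networks just produced, so neither side ever optimizes against a stale opponent.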

5. Synthetic Evaluations on System Level Bias

Figure 2. Training data generated at each step of showing ads online.
Figure 3. Top 2 instances ranked by are selected from 10,000 candidates sampled without replacement
Position 1 on the last day: 0.464
Position 2 on the last day: 0.414
Position 1 on the second-to-last day: 0.454
Position 2 on the second-to-last day: 0.396
Position 1 over all days: 0.408
Position 2 over all days: 0.378
Table 1. Position CTRs after the system-level bias synthetic evaluation
Average Position CTR on FL (last 2 days): 0.432
MSE on FL using the average: 0.000782
Table 2. A naive approach that predicts only the average CTR. This forms an upper bound on MSE for FL data.
AUC: 0.775
Log Loss: 0.277
Table 3. Training and testing using logistic regression on HeldOut data derived from D. This forms an upper bound on AUC for the HeldOut set.

We generate synthetic data to illustrate the natural feedback loop, or system-level bias, present in an online advertising stack. We first generate click labels y according to a Bernoulli distribution, where y = 1 represents a clicked ad. Feature vectors X are then generated from two different but overlapping normal distributions, conditioned on the label.

This process forms a complete distribution D, and 100k samples are taken from it to form a large Reservoir dataset. We then represent the feedback loop by simulating an iterative process where previously top-ranked instances (ads) are used as training data to rank the next ads. Figures 2 and 3 show this feedback loop process, and Algorithm 2 details the simulation.

On each day i, K/2 candidate sets of 10,000 instances are drawn at random without replacement from the underlying Reservoir, where K = 500. The model trained on day i ranks each candidate set, and the top 2 ads of each set are shown to the user on day i+1. The labels revealed on day i+1 then form the next iteration of available training data.

We repeat this process for T = 100 iterations. At each iteration we record the average position CTR for each of the top 2 positions: Position 1 refers to the top-ranked ads and Position 2 to the second-ranked ads. We treat these position CTRs as the continuous bias term z. To start the process, we sample K instances from the Reservoir as the initial training data. In an online Ads system, multiple days of training data are typically used to reduce systematic bias; in the following evaluations we utilize the last two days of available training data (i.e., each model trains on the previous two days). Each model is a logistic regression classifier with L2 regularization. We set the perturbation parameter of Algorithm 2 to r = 0 to illustrate a purely system-level feedback loop bias. We form testing data separate from this feedback loop process, the HeldOut RUS evaluation set, by sampling 100k samples from D.

CTRs for each position on the last two days, and overall CTRs calculated over all days, are shown in Table 1. All 4 CTR values are equally likely, since each is associated with 250 training examples; a naive approach should therefore predict the average of these CTR values. This forms an upper bound on how well an adversarial Bias network can predict z. In Table 2 we record the average CTR over the last two days of data (4 values) and the MSE obtained by predicting this average.

1:   Draw 100k click labels y according to a Bernoulli distribution
2:   Reservoir = the 100k labels with 10 features drawn according to y
3:   HeldOut = draw a separate Reservoir with 100k samples
4:   D_0 = draw K samples from Reservoir; i = 0
5:   for (i = 0; i < T; i++) do
6:       Train M_i on D_{i-1}, D_i
7:       Set D_{i+1} to have 0 samples
8:       for (k = 0; k < K/2; k++) do
9:           candidates = draw 10,000 samples from Reservoir
10:          Retrieve top 2 candidates (Position 1 and 2) using M_i
11:          if Position 2 ad has Click == 1 then
12:              Set Position 2 ad Click = 0 with probability r
13:          end if
14:          Add results to D_{i+1}
15:      end for
16:  end for
17:  Calculate z, the position CTRs
18:  return D_0, ..., D_T, z, HeldOut
Algorithm 2 SyntheticFeedback(K, T, r)
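The core loop of Algorithm 2 can be sketched as follows. This is a simplified illustration (small candidate sets, a fixed scoring function instead of daily retraining, and no label perturbation), meant only to show how the model's own rankings determine the next day's training data:

```python
import random

def synthetic_feedback(reservoir, model, days, k):
    """Simplified sketch of Algorithm 2's loop: each day, draw candidate
    sets, keep the model's top-2 ranked instances (Positions 1 and 2),
    and collect their revealed labels as the next day's training data.
    'model' scores a single instance."""
    day_data = []
    for _ in range(days):
        shown = []
        for _ in range(k // 2):
            candidates = random.sample(reservoir, 10)  # 10,000 in the paper
            candidates.sort(key=model, reverse=True)
            shown.extend(candidates[:2])  # Position 1 and Position 2
        day_data.append(shown)
        # In the full algorithm, the model would now be retrained on the
        # last two days of day_data before the next iteration.
    return day_data

random.seed(0)
reservoir = [(x,) for x in range(100)]
data = synthetic_feedback(reservoir, model=lambda inst: inst[0], days=3, k=4)
# With a model that prefers large x, the shown ads skew toward high x,
# so the next model only ever trains on that biased slice.
```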

5.1. Setup

We seek a model that is trained on FL data (i.e., the last two days of the synthetic generation process) yet generalizes to D, represented by our RUS HeldOut data. We train a set of ANNs on this FL data with different values of λ, setting each instance's z to its position CTR. The hyperbolic tangent function is used for all hidden activations except the last layers: the output activation of the Prediction network is a sigmoid, and the output activation of the Bias network is linear. The Bypass network consists of 1 hidden layer with 1 node, while the Base, Prediction, and Bias networks each consist of 1 hidden layer with 10 nodes. We perform stochastic gradient descent with minibatch size 100 and learning rate 0.01, training for 100 epochs (passes) over the FL data. After this main training process, we allow the Bias network alone to train for another 100 epochs; ideally, this lets the Bias network do its best to predict z given the h_base produced by the Base network.

For comparison, we perform the same evaluations for an ANN with λ = 0. This model can be seen as a completely independent vanilla neural network optimizing only the prediction loss, while a separate Bias network observes h_base and optimizes L_bias without changing the Base network. We run 10 trials of each model with different weight initializations and report the averaged area under the ROC curve (AUC) for click prediction and the averaged mean squared error (MSE) of the Bias network's predictions of z.

5.2. Main Synthetic Results

(a) AUC on the FL training set
(b) MSE on the FL training set
(c) AUC on the RUS testing set
(d) MSE on the RUS testing set
(e) Averaged differences in prediction between using Position 1 CTR vs Position 2 CTR in the linear bypass for RUS data
Figure 4. Training the ANN model using FL data with two days or 1000 samples. Both the ANN with bypass (byp) and a variant of the ANN without bypass (no_byp) results are reported.

To evaluate on an unbiased sample from D, we use the Position 1 CTR from the last day, 0.464, as the Bypass input z. Table 3 shows the AUC and log loss obtained on the HeldOut data drawn from D by training a logistic regression model directly on that dataset. This ideal situation forms an upper bound on AUC.

In addition to the ANN architecture with a Bypass network, we show performance for a variant of the ANN without the Bypass network. Figure 4 shows AUCs and MSEs on the FL and RUS datasets at the end of training. The x-axis is a reverse log scale varying λ from 0 to 0.99999. As λ increases, the MSE mostly increases at the expense of FL AUC. The FL MSE of the ANN with the Bypass network reaches 0.00078 (Table 2), matching the naive method that predicts only the average CTR.

We note that as λ approaches 1, the L_pred term in L_noisy becomes diluted, so there should be an intermediate range of λ values that is optimal in terms of AUC on the RUS set; this is seen empirically in Figure 4(c).

The ANN model yields as much as a 12.6% AUC gain over the λ = 0 model on the RUS set, and is only 10% below a model trained directly on the RUS set.

5.3. Bypass vs non Bypass Results

The results in Figure 4 show slight improvements for the Bypass over the non-Bypass ANN, both in terms of AUC and in higher MSEs, on the RUS dataset. We also analyze how the predictions change when the Bypass network is given different position CTRs: we feed the last day's Position 1 CTR as input to the Bypass network along with the features X to produce one set of predictions, and do the same with the last day's Position 2 CTR to create a second set.

We compute the average absolute difference between the two sets of predictions over all trials for each λ value. Figure 4(e) illustrates these results: as the MSE increases, so does the difference in predictions. We therefore hypothesize that the Bypass network increasingly explains the position CTR in the ANN representation. The non-Bypass network, by contrast, produces the same estimate regardless of the position CTR.

6. Synthetic Evaluations on User Level Bias

Another factor contributing to position bias may be a user-level bias: users may avoid clicking on ads below Position 1 regardless of relevance and interest. We simulate an additional user-level bias toward position by perturbing the labels of Position 2 ads in the previous synthetic evaluations: Algorithm 2 switches the observed click label of a Position 2 ad from 1 to 0 with probability r. The synthetic data generation process of Section 5 is the special case r = 0. We perform the same experiments as in Section 5.2 on a new FL dataset generated with r = 25%.
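The label perturbation is a one-line coin flip per impression; a small sketch (the `(position, clicked)` record layout is ours):

```python
import random

def perturb_position2(shown, r, rng=random):
    """User-level bias: a clicked Position 2 ad keeps its click only with
    probability 1 - r. 'shown' is a list of (position, clicked) pairs."""
    out = []
    for position, clicked in shown:
        if position == 2 and clicked and rng.random() < r:
            clicked = False  # the user ignored the lower ad despite relevance
        out.append((position, clicked))
    return out

# r = 0 reproduces Section 5's data exactly; r = 0.25 drops a quarter of
# Position 2 clicks in expectation.
unchanged = perturb_position2([(1, True), (2, True)], r=0.0)
```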

6.1. Results

(a) AUC on the FL training set
(b) MSE on the FL training set
(c) AUC on the RUS testing set
(d) MSE on the RUS testing set
Figure 5. Training the ANN model using FL data with r=25% to simulate a user level position bias. We test on FL and RUS datasets using the vanilla ANN with bypass.

AUCs and MSEs are reported in Figure 5 for both the FL and RUS HeldOut datasets, with results similar to Figure 4. Though the highest AUC on the RUS data for r = 25% is lower than the highest AUC for r = 0%, there is as much as a 19% increase between the best λ model's AUC and the λ = 0 model's AUC. These results empirically indicate that when more bias is added to the data, the ANN representation with an appropriate λ yields higher gains relative to a λ = 0 ANN.

7. Real World Data Evaluation

In real-world machine learned systems, a significant AUC loss on FL datasets is undesirable, since samples from this space are often shown online. However, a model that shows large gains on RUS data for slight losses on FL data is preferred over one that shows no RUS gains, since the former is more likely to perform well online.

In these evaluations we show AUC gains on RUS data for acceptable AUC losses on FL data, using datasets from a major search engine's online Ads stack. The first is an FL dataset consisting of 500 million samples; the second is an RUS dataset with 100k samples.

7.1. Setup

The hyperbolic tangent function is used for all of the hidden activations except the last layers. The Base Network is composed of 2 layers with 300 and 150 nodes, respectively. The Prediction Network has one hidden layer with 300 nodes, while the Bias Network is defined similarly. The output activation of the Prediction Network is a sigmoid whereas the output activation of the Bias Network is linear. We train for 15 epochs on minibatches of size 3072 and evaluate on our RUS dataset.

7.2. Main evaluations

We train on our FL dataset for 15 epochs, then test on the RUS and FL datasets, as illustrated in Figure 6. We try varying levels of λ and compare performance to the model with λ = 0. The position CTR, z, is used as input to the position Bypass network.

Figure 6 shows the AUC percentage differences between each λ model and the λ = 0 model, for a region of λ that produces high gains on RUS data while keeping the error on the FL dataset low. The FL losses are acceptable in the Ads domain in exchange for higher RUS gains. We see as much as a 0.19% gain on the RUS data with low cost on FL (-0.03%) over the λ = 0 model. These results indicate that the ANN model generalizes better to D.

Figure 6. Absolute AUC percentage difference from the λ = 0 model when training on FL data and testing on FL and RUS data.

8. Conclusion

In this work, we described an Adversarial Neural Network architecture that creates a PClick representation of the data with controllable levels of invariance to confounding features. To show the efficacy of the ANN, we presented evaluations on synthetic and real-world data with as many as 500 million training samples. To the best of our knowledge, the ANN model is the first of its kind to explicitly remove and model feedback loop bias simultaneously.

To do so, we define an adversarial Bias network that attempts to predict the confounding term z, while the Base, Prediction, and Bypass networks attempt to model PClick. A differentiable squared-covariance loss is used by the Prediction, Bypass, and Base networks to interfere with the Bias network's predictions, while the Bypass network still models the confounding feature linearly and separately. Our approach is advantageous over previously proposed methods because it does not require spending time and revenue to generate online exploration data; it can be used at any point in the natural feedback loop present in an online Ads stack.


  • Abadi and Andersen (2016) Martín Abadi and David G. Andersen. 2016. Learning to Protect Communications with Adversarial Neural Cryptography. CoRR abs/1610.06918 (2016).
  • Bottou et al. (2013) Léon Bottou, Jonas Peters, Joaquin Quiñonero Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. 2013. Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising. Journal of Machine Learning Research 14 (2013), 3207–3260.
  • Chapelle and Li (2011) Olivier Chapelle and Lihong Li. 2011. An Empirical Evaluation of Thompson Sampling. In Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 2249–2257.
  • Chapelle et al. (2014) Olivier Chapelle, Eren Manavoglu, and Romer Rosales. 2014. Simple and Scalable Response Prediction for Display Advertising. ACM Trans. Intell. Syst. Technol. 5, 4, Article 61 (Dec. 2014), 34 pages.
  • Craswell et al. (2008) Nick Craswell, Onno Zoeter, Michael Taylor, and Bill Ramsey. 2008. An Experimental Comparison of Click Position-bias Models. In Proceedings of the 2008 International Conference on Web Search and Data Mining (WSDM ’08). ACM, New York, NY, USA, 87–94.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 2672–2680.
  • Hillard et al. (2011) Dustin Hillard, Eren Manavoglu, Hema Raghavan, Chris Leggetter, Erick Cantú-Paz, and Rukmini Iyer. 2011. The Sum of Its Parts: Reducing Sparsity in Click Estimation with Query Segments. Inf. Retr. 14, 3 (June 2011), 315–336.
  • Li et al. (2010) Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A Contextual-bandit Approach to Personalized News Article Recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW ’10). ACM, New York, NY, USA, 661–670.
  • Louppe et al. (2016) G. Louppe, M. Kagan, and K. Cranmer. 2016. Learning to Pivot with Adversarial Networks. ArXiv e-prints (Nov. 2016). arXiv:stat.ML/1611.01046
  • McMahan et al. (2013) H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, and Jeremy Kubica. 2013. Ad Click Prediction: A View from the Trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’13). ACM, New York, NY, USA, 1222–1230.
  • Shimodaira (2000) Hidetoshi Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference 90, 2 (2000), 227–244.
  • Tang et al. (2014) Liang Tang, Yexi Jiang, Lei Li, and Tao Li. 2014. Ensemble Contextual Bandits for Personalized Recommendation. In Proceedings of the 8th ACM Conference on Recommender Systems (RecSys ’14). ACM, New York, NY, USA, 73–80.
  • Thompson (1933) William R. Thompson. 1933. On the Likelihood that One Unknown Probability Exceeds Another in View of the Evidence of Two Samples. Biometrika 25, 3/4 (1933), 285–294.