Robust Counterfactual Inferences using Feature Learning and their Applications

08/22/2018
by Abhimanyu Mitra, et al.
WALMART LABS

In a wide variety of applications, including personalization, we want to measure the difference in outcome due to an intervention, and thus have to deal with counterfactual inference. The feedback from a customer in any of these situations is only 'bandit feedback', that is, partial feedback based on whether we chose to intervene or not. Typically, randomized experiments are carried out to understand whether an intervention is overall better than no intervention. Here we present a feature learning algorithm that learns from a randomized experiment where the intervention in consideration is most effective and where it is least effective, rather than focusing only on the overall impact, thus adding context to our learning mechanism and extracting more information. From the randomized experiment, we learn the feature representations which divide the population into subpopulations where we observe a statistically significant difference in average customer feedback between those who were subjected to the intervention and those who were not, with a level of significance $\ell$, where $\ell$ is a configurable parameter in our model. We use this information to derive the value of the intervention in consideration for each instance in the population. With experiments, we show that this additional learning lets the context of each instance be leveraged in future interventions to decide whether to intervene or not.


1. Introduction

One of the most common forms of data related to a Web service is customer feedback, available in some form of interaction log recorded while customers interact with the Web service. However, these interaction logs typically contain only partial information, also known as "bandit feedback", as the feedback is contingent upon the prediction made by the system about the best way to present the service or which service to present to the customer; see Swaminathan and Joachims [2015]. For example, in personalization, the prediction is about adapting the content to the customer, which most often amounts to picking appropriate content from a content pool based on customer features like the customer's past browse and purchase activity. In many situations, we want to know how an alternate system of making predictions would have performed, which brings us to the realm of counterfactual inference. For example, in personalization, an alternate system could be a different method of picking appropriate content from the content pool based on customer features.

The problem of counterfactual inference has a rich literature, with some of the earliest works dating back to the 1970s and some of the latest appearing in the last few years; see for example Lewis [1973], Rubin [1974], Rosenbaum and Rubin [1983], Rubin [2005], Bang and Robins [2005], van der Laan and Petersen [2007], Hill [2011], Dudík et al. [2011], Austin [2011], Chernozhukov et al. [2013], Bottou et al. [2013], Swaminathan and Joachims [2015], Johansson et al. [2016]. However, most of this literature is focused on observational studies rather than controlled experiments, and on the challenge of eliminating selection bias from the inference. Randomized experiments are known to eliminate this selection bias, but conducting a randomized experiment is costly and might even be impossible in certain observational settings. Some recent research has been devoted to clever experimental designs which could reduce the cost of the randomized experiment; see Kohavi et al. [2009], Tang et al. [2010], Bottou et al. [2013], Johnson et al. [2017]. In our research, we focus on none of the above problems and accept as our base a randomized experiment framework, where we choose one system as the incumbent and make interventions in that system in randomly chosen situations by overwriting the incumbent system with the predictions of the new system. Thus our base framework already incurs the cost associated with a randomized experiment, and therefore, for us, a simple comparison of the average customer feedback (click-through rates, sales revenue per impression, etc.) for the interventions with that for the predictions where the incumbent system is not intervened would reveal which system is performing better, free of any selection bias. Therefore, we also do not need the sophisticated measurement techniques for counterfactual inference that observational studies require to eliminate selection bias. However, even without selection bias, we still might get an inconclusive result due to noise, when the difference in average customer feedback between those subjected to the intervention and those who were not is not statistically significantly different from $0$, with a level of significance $\ell$. Note that the noise is a result of the difference in feedback from different instances, mostly due to different preferences as well as different inclinations to provide feedback (some customers are more likely to click than others irrespective of their level of satisfaction with the service). So, we ask the question: what if we group together some of the instances with very similar preferences and very similar inclinations to provide feedback, so that we could get a conclusive result for that group? While a conclusive result for a particular group does not necessarily mean a conclusive global result (the global result could still be inconclusive due to noise), we can utilize this information in future system design. In other words, having incurred the cost of a randomized experiment, we ask if we can extract more information from the experiment than a global comparison between the two systems. More specifically, we ask: can the context of an instance indicate, in a way that is robust against the inherent noise in customer feedback, which system is more suitable, rather than picking one system to be used globally (the system which performs better overall)?

We propose a more personalized approach to learning a system's performance. While one system, say system A, might be better overall than another system, say system B, there might be instances where it is better to predict using system B. The context of an instance might guide us in predicting whether the instance will prefer predictions from system A or from system B. For example, in personalization, we may not yet have found a method of personalizing content that decides appropriate content from a content pool based on user features and is universally better than all other methods of personalization. A more realistic scenario is one where we have a pool of methods, each of which is best for some considerably large subpopulation, but none of which universally dominates all the others. In such a case, a personalization system which lets the methods divide and conquer will perform much better than a personalization system which chooses only one of them. A personalization system that lets the individual personalization methods divide and conquer works in two layers: in the first layer of personalization, based on user features (context), the system decides on a method; then, using the chosen method and the user features, it picks appropriate content from the content pool for the user. The relative success of this two-layered prediction method, compared to picking the overall better-performing system to be used globally, depends on whether there is enough disagreement about the system preference among the instances. This property of the two-layered prediction is similar to personalization itself, which, compared to a global method of picking the best content, works best when the content preferences of different instances are vastly different.

However, finding the set of instances which might prefer a different system than the rest is a combinatorial challenge, as the number of subsets explodes quickly. Also, unless the subsets are characterized by a function of the contexts of the constituent instances, we cannot make the learning useful in future system design. Since the context is usually a feature vector of several dimensions (for example, in user-based personalization, a user's past browsing and purchase history could constitute the context), characterizing the subsets of instances whose constituents prefer a system different from those outside the subset, in terms of the contexts of the constituent instances, is impossible to achieve by iteratively checking each possible function of the contexts of the instances to define each possible subset.

We propose a feature learning algorithm to learn which system makes better predictions, by how much, and for which instances. As the base of our learning framework, we have a randomized experiment, where there is an incumbent system and interventions are made randomly to overwrite the predictions of the incumbent with the predictions of a new system. We note the "bandit feedback" of the customers for all predictions in the experiment, some with the intervention and some without. For example, the incumbent system could be our current method of personalization, where we choose appropriate content based on user features using the current method, and the intervention is a newly developed method of personalization, which chooses the appropriate content from the content pool based on user features in a different way.

From the randomized experiment, we learn the feature representations which divide the population of instances into subpopulations where the difference in average customer feedback between those who were subjected to the intervention and those who were not is statistically significantly different from $0$, with a level of significance $\ell$, where $\ell$ is a configurable parameter in our model. We use this information to derive the value of the intervention in consideration for each instance in the population based on its context, which we call the derived personal valuation, depending on the membership of that particular instance in some subpopulation which exhibited statistically significant valuation, the exclusivity of the subpopulation, the estimated average valuation for the subpopulation, and its volatility. Note that even though in Johansson et al. [2016] the authors used feature representations in the problem of counterfactual inference, our motivation for feature learning is completely different from theirs. In Johansson et al. [2016], the authors used feature learning to reduce the selection bias in observational studies, whereas in this paper, we start with a randomized experiment which already removes the selection bias, and we use feature learning to deduce conclusive results (signal strong compared to noise) for relatively smaller groups, which might be very different from the result at the global level (which could potentially still be inconclusive).

In our earlier example with two different personalization methods, we infer that the users with positive derived personal valuations (note that the derived personal valuation depends only on the context of the user, in the form of user features like past browse and purchase activity) prefer the new method of personalization over the current one. If the derived personal valuation is negative for everyone, we can safely discard the new method, as the current method universally dominates the new one. Similarly, if all derived personal valuations are positive, the new method universally dominates the current method and we can safely replace the old method with the new one. However, a more realistic scenario is one where the derived personal valuations range from negative to positive values, suggesting that for some users, the current method is better than the new one, whereas for other users, it is the other way round. In this last and more realistic scenario, we might benefit from keeping both methods and building another layer of personalization in the system, where, based on user features, the system first decides on a method and then, using the chosen method and the user features, picks content from the content pool for the user.

Since our method of deriving personal valuation depends only on subpopulations where we have conclusive results (the difference in average customer feedback is statistically significantly different from $0$, with a level of significance $\ell$), our derivation of personal valuation at a specified context is more robust. In other words, our derived personal valuation is more free from the inherent noise in customer feedback. The literature on contextual bandit problems is dedicated to building robust estimators for each context (for example, see Li et al. [2010, 2011], Dudík et al. [2012]), but most of it is devoted to the issue of imbalances in the observed data, and the proposed solutions cleverly manage this imbalance. Since we start with a controlled experiment, such imbalances are not a primary concern for us. Instead, our approach to deriving robust personal valuations focuses on the inherent noise in customer feedback, and we attempt to make the estimator robust against this inherent noise. Thus our research is fundamentally different from the techniques used in the literature on the contextual bandit problem. To the best of our knowledge, no research has been devoted to the problem we address here.

With experiments, we show that the derived personal valuation for each instance could be leveraged in the future to decide whether to intervene or not, based on the features of the instance.

Figure 1 illustrates an example of how the entire process would work. Suppose we have conducted a randomized experiment for 30 days with two versions of a webpage and collected customer clicks on the webpages. We want to understand which version of the webpage generates higher engagement, or CTR (click-through rate). We use, for example, the first 20 days for making context-based robust inferences. This is our set of training instances. We keep the last 10 days to evaluate how a derived personal valuation (DPV) based system design would perform. Note that this is how we would do a system design update based on DPV, where we learn from previous experiments (for example, the training instances) and use the learning to update the system design that would impact future predictions (for example, the test instances). Suppose no conclusion could be inferred at the global level using the training instances; in other words, the CTR for version 1 is not statistically significantly different from the CTR for version 2 with a level of significance of, say, 5%. However, using the gender of the user as a context gives us more information. Suppose we find that women, in general, like version 2 more than version 1 (based on CTR from training instances), men prefer version 1 over version 2, and both these conclusions could be made with a level of significance of, say, 5%. If this is our only conclusion, a derived personal valuation (DPV) based system design would suggest showing version 2 to women and version 1 to men. If women in the test instances indeed like version 2 more than version 1, and men in the test instances indeed like version 1 more than version 2, that would validate that a DPV-based system would work better than choosing either version 1 or version 2 globally. In the second case, where we choose one version globally, one of the groups (either men or women) would be less engaged. Note that in making the inference that women like version 2 more, we have not used the test instances, only the training instances. If we indeed used a DPV-based system for the test instances, women would only see version 2, and we would not know how engaged they would be with version 1. Thus, for the validation of a DPV-based system, it is necessary that the test instances are also part of the randomized experiment, so that we would have women seeing both version 1 and version 2 and would be able to compare the CTR difference without any selection bias, and therefore be able to understand how a DPV-based system would perform. After this evaluation, if the DPV-based system indeed works better, we would update our system using DPV, which in this example means showing version 2 to women and version 1 to men.

Figure 1. Application architecture design.
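To make the workflow concrete, the following is a minimal sketch of the learning step in this example, assuming an interaction log with columns `day`, `segment` (the context, here gender), `variant` (1 or 2), and `click` (0/1). All column and function names are illustrative assumptions, not from the paper, and the per-segment comparison is the standard two-sample z-test the example describes.

```python
# Minimal sketch of the Figure 1 workflow (illustrative names throughout).
import numpy as np
import pandas as pd
from scipy import stats

def segment_preference(log: pd.DataFrame, alpha: float = 0.05) -> dict:
    """Per-segment two-sample z-test of the CTR difference on training days."""
    routing = {}
    for seg, grp in log.groupby("segment"):
        a = grp.loc[grp["variant"] == 1, "click"]
        b = grp.loc[grp["variant"] == 2, "click"]
        if len(a) < 2 or len(b) < 2:
            continue
        diff = a.mean() - b.mean()
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        p = 2 * stats.norm.sf(abs(diff) / se)       # two-sided p-value
        if p < alpha:                               # conclusive for this segment
            routing[seg] = 1 if diff > 0 else 2     # route to the preferred version
    return routing                                  # absent segments: inconclusive

log = pd.read_csv("experiment_log.csv")             # hypothetical log file
train = log[log["day"] <= 20]                       # first 20 days: training instances
print(segment_preference(train, alpha=0.05))        # e.g. {'F': 2, 'M': 1}
```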

2. Mathematical framework

In this section, we provide the criterion for deciding whether a subpopulation exhibits a statistically significant valuation of the intervention in consideration, that is, whether the difference in average customer feedback between predictions made with the intervention and those made without it is statistically significantly different from $0$, with a specified level of significance $\ell$, where $\ell$ is a configurable parameter in our model. Then we construct an optimization problem we need to solve in order to identify subpopulations more likely to pass the criterion. Without loss of generality, we assume each feature could only take a finite number of real values. For features not directly satisfying this assumption, we appropriately merge values or discretize to satisfy it.

To provide a mathematical framework for the problem, let us first introduce some notation. Let $x_i$ be the $i$-th instance (member of the population) and $x_{ij}$ be the value of the $j$-th feature of the $i$-th instance. In other words, $x_i$ provides the context for instance $i$. Assume the total number of features used to represent the context of an instance is $d$, and each instance assumes a value for each of the $d$ features; that is, a real value $x_{ij}$ is available for each instance $i$ and each feature $j$. Let $y_i$ be the metric or customer feedback for the $i$-th instance, using which we measure the valuation of the intervention in consideration; that is, if the intervention has a positive impact, the metric is expected to increase, and if the intervention has a negative impact, the metric is expected to decrease. For example, a metric could be the number of clicks per page view. Usually the metric is driven by business goals. We could potentially consider several metrics together, making $y_i$ a vector, but in this paper, we restrict $y_i$ to be a scalar. We assume the metrics for different instances are mutually independent (in the probabilistic sense). In future references, we will call the customer feedback we chose to compare the system performances simply the metric.

We assume a standard randomized experiment is set up for the entire population. Thus the population is randomly divided into two groups, a test group and a control group; the test group is subjected to predictions with the intervention and the control group is subjected to predictions without the intervention. Also, let us denote the whole population of instances by $P$. Let $x^t_i$ be the feature vector for the $i$-th instance of the test group and $x^c_i$ be the feature vector of the $i$-th instance of the control group. Similarly, let $y^t_i$ be the metric or customer feedback (clicks, revenue, etc.) for the $i$-th instance in the test group and $y^c_i$ be the metric or customer feedback for the $i$-th instance in the control group.

2.1. Subpopulation eligibility

First we establish a criterion for deciding whether a subpopulation shows a statistically significant impact of the intervention in consideration. Since in the randomized experiment set-up, membership in the test or control group is decided randomly, independent of the context of the instance, a comparison of metrics for test and control groups restricted to any subpopulation defined by the context, say the subpopulation $\{x : f(x) = c\}$ where $f$ is a measurable function and $c$ is a constant vector, yields a measurement of the impact of the intervention in consideration for the subpopulation that is also free of any selection bias. In this paper, we restrict ourselves to linear feature representations, and thus for us, $f(x) = Mx$, where $M$ is a $k \times d$ matrix.

We would like to identify subpopulations where there is a statistically significant impact of the intervention in consideration, or in other words, we want to find subpopulations where the difference in the average metrics for the test and control group restricted to the subpopulation is statistically significantly different from $0$, with a level of significance $\ell$. In notation, the condition translates to finding $M$ and $c$ such that

$\displaystyle \frac{n^t_{M,c}\,n^c_{M,c}}{n^t_{M,c}+n^c_{M,c}} \cdot \frac{\left(\bar{y}^{t}_{M,c}-\bar{y}^{c}_{M,c}\right)^2}{\frac{n^c_{M,c}\,\hat{\sigma}^{t,2}_{M,c}+n^t_{M,c}\,\hat{\sigma}^{c,2}_{M,c}}{n^t_{M,c}+n^c_{M,c}}} \;>\; q_{1-\ell}$   (2.1)

where $\bar{y}^{t}_{M,c}$ and $\bar{y}^{c}_{M,c}$ are the average metrics from the test and control group respectively restricted to the subpopulation $\{x : Mx = c\}$, $\hat{\sigma}^{t,2}_{M,c}$ and $\hat{\sigma}^{c,2}_{M,c}$ are the empirical variances of the metric in the test and control group respectively restricted to the subpopulation, $n^t_{M,c} = |\{i : Mx^t_i = c\}|$ and $n^c_{M,c} = |\{i : Mx^c_i = c\}|$ are the corresponding group sizes, where the function $|\cdot|$ equals the size of the set in its argument, and $q_{1-\ell}$ is the $(1-\ell)$-th quantile of the distribution of the quantity on the LHS of (2.1) under the null hypothesis that the average metric difference between test and control group is $0$ (recall, $\ell$ is the level of significance of the test, so that under the null hypothesis, the probability of (2.1) being satisfied is exactly $\ell$). Under the null hypothesis that the intervention has no impact, the distribution of the quantity on the LHS of (2.1) could be approximated by the square of a variable following the standard normal distribution.
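As a concrete illustration of the criterion, the sketch below checks (2.1) for one candidate subpopulation, comparing the squared statistic to the chi-square(1) quantile. The function name and input conventions are our assumptions, and the statistic is written in an algebraically equivalent two-sample form.

```python
# Sketch of the eligibility check (2.1) for one subpopulation (names illustrative).
import numpy as np
from scipy import stats

def eligible(y_t: np.ndarray, y_c: np.ndarray, level: float) -> bool:
    """y_t, y_c: metrics of test/control instances with Mx = c; level: l in (2.1)."""
    n_t, n_c = len(y_t), len(y_c)
    if n_t < 2 or n_c < 2:
        return False                                     # not enough data to test
    diff = y_t.mean() - y_c.mean()
    se2 = y_t.var(ddof=1) / n_t + y_c.var(ddof=1) / n_c  # variance of the difference
    stat = diff ** 2 / se2                               # equals the LHS of (2.1)
    return stat > stats.chi2.ppf(1 - level, df=1)        # q_{1-l}: chi-square(1) quantile
```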

2.2. Finding eligible subpopulations

We call a subpopulation eligible to be included in deriving personal valuation if it satisfies (2.1). Note that satisfying (2.1) is equivalent to rejecting the null hypothesis that the intervention has no impact, where the level of significance of the statistical test is $\ell$. We begin by noting that a subpopulation may not be eligible according to (2.1) for one of two reasons, or both: either there is little or no impact of the intervention in consideration, or there is insufficient data to conclude anything. These two reasons, in a way, complement each other. If we have a lot of data, we could measure even tiny impacts in a statistically significant way, and if we have a small amount of data, the impact needs to be huge in order for us to be able to measure it in a statistically significant way. Conversely, if the intervention in consideration has a huge impact on the subpopulation, all we need is a very small amount of data to measure it in a statistically significant way, and if the impact is tiny, we need a huge amount of data to do the same. Therefore, attributing a subpopulation's ineligibility to one reason, whether it is the insufficiency of data or the relatively small impact of the intervention in consideration, is more of a subjective decision.

If we could find a way to identify the subpopulations with the highest impact without actually checking (2.1), we would have satisfied our objective and would need to do nothing more. However, no such method is known in its full generality, and finding one appears to be a harder problem. Instead, we aim to identify subpopulations which have a lot of data, so that when we use (2.1) to measure the impact of the intervention in consideration, even relatively small impacts could be measured in a statistically significant way.

We focus on the first term on the LHS of (2.1), which is $\frac{n^t_{M,c}\,n^c_{M,c}}{n^t_{M,c}+n^c_{M,c}}$ and is a quantification of the amount of data for the subpopulation $\{x : Mx = c\}$. This term could be re-written as $n_{M,c}\,\pi_{M,c}(1-\pi_{M,c})$, where $n_{M,c} = n^t_{M,c}+n^c_{M,c}$ is the subpopulation size and $\pi_{M,c} = n^t_{M,c}/n_{M,c}$ is the fraction of the subpopulation in the test group. Thus this term depends on the subpopulation size $n_{M,c}$, as well as the fractions of the subpopulation in the test and control group, given by $\pi_{M,c}$ and $1-\pi_{M,c}$ respectively. We want to find $M$ so as to maximize the first quantity on the LHS of (2.1) for all subpopulations created by $M$, viz. $\{x : Mx = c\}$, $c \in \mathbb{R}^k$. Since achieving that for all subpopulations created by $M$ together might not be possible, we want to maximize the expected value of the quantity over all subpopulations created by $M$. The expected value of the quantity over all subpopulations created by $M$, where each subpopulation is weighted by its size relative to the total population size $n = n^t + n^c$, simplifies to

$\displaystyle \sum_{c}\frac{n_{M,c}}{n}\; n_{M,c}\,\pi_{M,c}\left(1-\pi_{M,c}\right) \;=\; \frac{1}{n}\sum_{i=1}^{n^t}\sum_{j=1}^{n^c}\mathbb{1}\left(Mx^t_i = Mx^c_j\right)$   (2.2)

where $\mathbb{1}(C) = 1$ if $C$ is true and $\mathbb{1}(C) = 0$ otherwise is the indicator function indicating whether the condition $C$ is true or not, and $n^t$ and $n^c$ are the sizes of the test and control group. Recall, the set $P$ denotes the population of instances. The proof of the equivalence in (2.2) is shown in Appendix A.

So, motivated by (2.2), we search for $M$ which maximizes the RHS of (2.2). To formulate this as an optimization problem, we define a matrix $D$, whose columns are of the form $x^t_i - x^c_j$, where $i \in \{1,\dots,n^t\}$ and $j \in \{1,\dots,n^c\}$. Thus $D$ is a huge matrix with dimensions equal to $d \times n^t n^c$. Recall, $d$ is the total number of features describing instances in the population, and $n^t$ and $n^c$ are the sizes of the test and control group respectively. Let $D_p$ be the $p$-th column of $D$. Our optimization problem to search for $M$ is formulated as

$\displaystyle \max_{M,\,z}\;\sum_{p=1}^{n^t n^c} z_p \quad\text{subject to}\quad \|M D_p\| \le B\,(1-z_p),\;\; z_p \in \{0,1\},\;\; p = 1,\dots,n^t n^c,$   (2.3)

where $B$ is a large positive constant.

Here the variables $z_p$ act like slack variables, in the sense that if $z_p = 1$, then $MD_p$ must be $0$ in order to satisfy the linear constraint in the optimization problem (2.3). If $MD_p \neq 0$, the corresponding slack variable must assume the value $0$ in order to satisfy the constraint, and if $MD_p = 0$, it must assume the value $1$ in order to maximize the objective function of the optimization problem (2.3). Therefore, it is easy to see that if $(M^*, z^*)$ is a solution of the optimization problem (2.3), then $M^*$ and $z^*$ will satisfy the condition $\sum_p z^*_p = |\{p : M^* D_p = 0\}|$, which is equal to $n$ times the RHS of (2.2). So, it follows that $M^*$ obtained as a solution from the optimization problem (2.3) would also maximize the LHS of (2.2), which is exactly what we wanted.
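Under our reading of (2.2) and (2.3), the pairwise-difference matrix $D$ and the objective (the number of columns of $D$ that $M$ sends to zero) could be computed as in the sketch below; all names are illustrative.

```python
# Sketch of the matrix D and the objective of (2.3) (illustrative names).
import numpy as np

def pair_difference_matrix(X_t: np.ndarray, X_c: np.ndarray) -> np.ndarray:
    """Columns are x_t_i - x_c_j over all test/control pairs: shape d x (n_t * n_c)."""
    d = X_t.shape[1]
    return (X_t[:, None, :] - X_c[None, :, :]).reshape(-1, d).T

def objective(M: np.ndarray, D: np.ndarray, tol: float = 1e-8) -> int:
    """Number of columns p with M @ D_p = 0, i.e. the achievable sum of slacks z_p."""
    return int((np.linalg.norm(M @ D, axis=0) <= tol).sum())
```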

Note that here, even if we restrict ourselves to linear feature representations to define the subpopulations, we do not aim to estimate a prediction function for $y$ based on $x$, which is a major focus of methods trying to eliminate selection bias; see Rosenbaum and Rubin [1983], Johansson et al. [2016].

2.3. Reducing the search space of matrices

Our next goal is to reduce the search space of $M$ by eliminating some redundancies and imposing some structure on $M$ in the optimization problem (2.3). With the help of the two statements in the following proposition, we note that we could demand the rows of $M$ to be orthonormal without changing the set of subpopulations we consider. The proofs follow from set equalities and are omitted here due to space constraints.

Proposition 2.1.

The following two statements are true about the set of subpopulations generated by a matrix $M$:

  1. If $M$ is not full row rank, the set of subpopulations generated by $M$, viz. $\{\{x : Mx = c\} : c \in \mathbb{R}^k\}$, could also be generated by a lower-dimensional matrix with a smaller number of rows.

  2. If $M$ is full row rank, the set of subpopulations generated by $M$, viz. $\{\{x : Mx = c\} : c \in \mathbb{R}^k\}$, could also be generated by a matrix with the same dimensions as $M$ whose rows are orthonormal.

2.4. Searching for multiple matrices

Our goal is to find as many subpopulations as possible which satisfy (2.1), or in other words, as many subpopulations as possible where the null hypothesis of no effect of the intervention is rejected in the statistical test with level of significance $\ell$. The more such subpopulations we find, the more information we extract from the randomized experiment conducted. As discussed previously, satisfying (2.1) could be attributed to either the amount of data or the magnitude of the impact of the intervention in consideration. In our optimization problem, we focused on finding subpopulations with the most data. Note that while a subpopulation with very little data has little chance of satisfying (2.1), with a reasonable amount of data some subpopulations could still satisfy (2.1) if the magnitude of the impact from the intervention is high enough. The entire population most likely has the most data, but the absence of a statistically significant impact of the intervention in consideration for the entire population does not preclude the possibility of the impact being statistically significant for a subpopulation with a lot less data.

So, our search for feature representations does not end when we have a solution of the optimization problem (2.3), and we keep looking for the next best one, which has less data than the previous one but could still satisfy (2.1). Here we discuss, when we have found a set of matrices $M_1,\dots,M_{j-1}$ and start searching for $M_j$, what additional restrictions we can impose on the optimization problem (2.3) to search for the next best one. The following proposition suggests that the row space of $M_j$ must not be a subset of the row space of $M_{j'}$ for $j' < j$. Let us denote the row space of a matrix $M$ by $\mathcal{R}(M)$. Once again, the proof follows from set equalities and is omitted here due to space constraints.

Proposition 2.2.

If $\mathcal{R}(M_2) \subseteq \mathcal{R}(M_1)$ for two full row rank matrices $M_1$ and $M_2$ of the same dimensions, then the set of subpopulations generated by $M_2$ is the same as the set of subpopulations generated by $M_1$.

In light of Proposition 2.2, we want to add the restriction that for $j' < j$, $\mathcal{R}(M_j) \not\subseteq \mathcal{R}(M_{j'})$. The condition $\mathcal{R}(M_j) \not\subseteq \mathcal{R}(M_{j'})$ could be re-written as the following condition: $\sum_{r=1}^{k}\left\|\left(I - M_{j'}^T M_{j'}\right) m_{j,r}\right\|^2 \ge \delta$ for some $\delta > 0$, where $m_{j,r}$ is the $r$-th row of $M_j$. The equivalence holds since $M_{j'}^T M_{j'}$ is the projection matrix for $\mathcal{R}(M_{j'})$ and hence is an idempotent matrix. So, putting everything together, having found $M_1,\dots,M_{j-1}$, to find the $j$-th matrix $M_j$, we solve the following optimization problem:

$\displaystyle \max_{M_j,\,z}\;\sum_{p=1}^{n^t n^c} z_p \quad\text{subject to}\quad \|M_j D_p\| \le B\,(1-z_p)\;\forall p;\quad z_p \in \{0,1\}\;\forall p;\quad M_j M_j^T = I_k;\quad \sum_{r=1}^{k}\left\|\left(I - M_{j'}^T M_{j'}\right) m_{j,r}\right\|^2 \ge \delta\;\;\forall\, j' < j.$   (2.4)

The parameter $\delta > 0$ could be chosen as an appropriate tuning parameter in the algorithm which solves the optimization problem (2.4). It is understood that when searching for the first matrix $M_1$, that is $j = 1$, the fourth set of constraints in (2.4) will disappear.
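Numerically, the fourth set of constraints in (2.4) could be checked as in the sketch below, using the projection identity stated above; the function names and the default value of delta are our assumptions.

```python
# Sketch of the row-space constraint in (2.4) (illustrative names).
import numpy as np

def rowspace_residual(M: np.ndarray, M_prev: np.ndarray) -> float:
    """Squared norm of the rows of M projected off R(M_prev)."""
    P = M_prev.T @ M_prev                       # projection onto R(M_prev), idempotent
    residual = M @ (np.eye(M.shape[1]) - P)     # zero iff R(M) lies inside R(M_prev)
    return float(np.linalg.norm(residual) ** 2)

def satisfies_rowspace_constraints(M, found, delta=1e-3):
    return all(rowspace_residual(M, M_prev) >= delta for M_prev in found)
```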

Note that each run of the optimization problem gives us a feature representation $M_j$, and with increasing $j$, the optimal value of the optimization problem (2.4) drops, indicating that the expected quantity of data associated with $M_j$ (in the sense of (2.2)) is decreasing as $j$ increases. We stop when $j$ exceeds a preset threshold or the optimal value drops below a preset threshold. Note that the drop in optimal value with increasing $j$ is not of concern, because while the drop means there is less expected data from the subpopulations generated by $M_j$ (see (2.2)), ultimately we want to identify all subpopulations which satisfy (2.1) and not be restricted to the subpopulations generated by a single $M$. The more subpopulations we find that satisfy (2.1), the more information we extract from our randomized experiment.

Note that in the earlier discussion, we fixed the dimension of $M$ as $k \times d$, where $d$ is the total number of features defining a context. While we cannot change $d$ as it is given to us, we do have some flexibility in the choice of $k$. Instead of fixing a particular $k$, we could start from $k = 1$ (the dimension of $M$ is $1 \times d$) and then continue increasing $k$, thus increasing the granularity of the subpopulations. Note that as the subpopulations get more granular, they contain less data, so the required magnitude of impact in order to satisfy (2.1) goes up as a consequence, which in turn reduces the likelihood of condition (2.1) being satisfied for those subpopulations. Thus, it is advisable to keep $k$ much lower compared to $d$. By following this, in our final set of identified subpopulations that satisfy (2.1), some could be generated by $M$-s of dimensions $k \times d$ and some could be generated by $M$-s of dimensions $k' \times d$, where $k \neq k'$. Note that our only aim is to identify as many subpopulations as possible that satisfy (2.1), and we do not care whether they are characterized by matrices of the same dimensions or not.

However, there might be computational limitations on how long we can prolong our search for matrices, and after a while, the expected quantity of data (in the sense of (2.2)) associated with a matrix $M$ will become very low. This, in turn, would result in the subpopulations generated by those matrices having less and less data, which means those subpopulations are less and less likely to satisfy (2.1). Thus, we stop when we reach our computational limit or when the expected quantity of data (in the sense of (2.2)) associated with an $M$ is low. Even though with this stopping condition we may miss some subpopulations which could have satisfied (2.1), in the process we have extracted a lot more information from the randomized experiment than just the comparison of the average metrics of the test and control group at the global level.

3. Algorithm to find subpopulations

To solve the optimization problem (2.4), we consider a Lagrangian relaxation of the problem given by

$\displaystyle \mathcal{L}(M_j, z) \;=\; \sum_{p=1}^{n^t n^c} z_p \;-\; \lambda\sum_{p=1}^{n^t n^c} z_p\,\|M_j D_p\|^2 \;+\; \mu\sum_{j'=1}^{j-1}\sum_{r=1}^{k}\left\|\left(I - M_{j'}^T M_{j'}\right) m_r\right\|^2$   (3.1)

where $m_r$ is the $r$-th row of $M_j$ and $\lambda$ and $\mu$ are penalty constants; see Nocedal and Wright [2006]. We take a greedy approach and solve (3.1) by updating $M_j$ and $z$ in sequence, recomputing the penalty constants at each update from the current iterates of $M_j$ and $z$.

We update $M_j$ by gradient steps, where we move $M_j$ slightly in the direction of the derivative of $\mathcal{L}$ given in (3.1) w.r.t. $M_j$. Also, leveraging Proposition 2.2, we could claim that it is good enough to only consider the update projected onto the orthogonal complement of the row space of the current $M_j$. So, finally, the update to the matrix would be: for a small step size $\eta > 0$,

$\displaystyle M_j \;\leftarrow\; M_j + \eta\,\frac{\partial \mathcal{L}}{\partial M_j}\left(I - M_j^T M_j\right).$   (3.2)

The next step is updating $z$. To do that, first we compute $M_j D$. Then we update each $z_p$ in the following way: if the condition $\|M_j D_p\| \le \tau$ is satisfied for a tolerance parameter $\tau$, we set $z_p = 1$, and otherwise, we set $z_p = 0$. The parameter $\tau$ could be tuned for the speed of convergence of the algorithm. See Algorithm 1 below for more execution details.

Now we select initializations of the variables. Note that the optimal value of the slack variable $z_p$ takes the value $1$ if and only if the features of the corresponding pair are equal in value once premultiplied by $M_j$, that is, $M_j D_p = 0$. We hope that for any initial choice of $M_j$, the subsequent updates would set the appropriate value for every pair, and we choose $z_p = 1$ for all $p$ initially. For the initial choice of $M_j$, we perturb the last found solution a little as shown below: for a small $\epsilon > 0$ and a random matrix $E$ of the same dimensions as $M_{j-1}$,

$\displaystyle M_j^{(0)} \;=\; M_{j-1} + \epsilon\,E.$   (3.3)

Note that $M_{j-1}$ is the optimal solution for the optimization problem (2.4) with $j$ replaced by $j-1$. So, $M_{j-1}$ satisfies all the constraints on $M_j$ except for the additional constraint imposed when $j$ is incremented by 1 in the optimization problem (2.4), that is, that $\mathcal{R}(M_j)$ cannot be contained in $\mathcal{R}(M_{j-1})$. So, we hope that the perturbation in (3.3) will satisfy all constraints on $M_j$. For the initialization of $M_1$, we start with a $k \times k$ identity matrix appended by a zero matrix of dimensions $k \times (d-k)$.

1: procedure Search for $M_j$ (Start with $M_j$ as in (3.3).)
2: Start with $z_p = 1$ for all $p$.
3: Compute $M_j D$ and $\mathcal{L}(M_j, z)$.
4: If $\mathcal{L}(M_j, z)$ has stopped improving, STOP.
5: If the projected gradient in (3.2) is nonzero, update $M_j$ as in (3.2); else, if the $z_p$-s have been updated at least once, STOP; else, try a different initial $M_j$, say, by changing $\epsilon$ in (3.3).
6: Orthonormalize the rows of $M_j$ following the Gram-Schmidt algorithm.
7: Set $z_p = 1$ if $\|M_j D_p\| \le \tau$; otherwise, set $z_p = 0$.
8: Go back to step 3.
9: end procedure
Algorithm 1 Algorithm to select $M_j$
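A compact sketch of Algorithm 1, under the reconstruction above, follows; the explicit gradient of (3.1), the step size eta, the threshold tau, and the fixed penalty constants lam and mu are our assumptions for illustration rather than the paper's choices.

```python
# Sketch of Algorithm 1 (illustrative constants and gradient derivation).
import numpy as np

def gram_schmidt_rows(M: np.ndarray) -> np.ndarray:
    Q, _ = np.linalg.qr(M.T)            # QR factorization orthonormalizes the rows
    return Q.T

def search_M(D, M0, found, eta=1e-2, tau=1e-6, lam=1.0, mu=1.0, iters=500):
    M = gram_schmidt_rows(M0.copy())
    z = np.ones(D.shape[1])                             # step 2: z_p = 1 for all p
    prev = -np.inf
    for _ in range(iters):
        MD = M @ D                                      # step 3
        obj = z.sum() - lam * (z * (MD ** 2).sum(axis=0)).sum()
        if abs(obj - prev) < 1e-10:                     # step 4: no more improvement
            break
        prev = obj
        grad = -2 * lam * (z * MD) @ D.T                # d/dM of the slack penalty
        I = np.eye(M.shape[1])
        for Mp in found:                                # push away from old row spaces
            grad += 2 * mu * M @ (I - Mp.T @ Mp)
        grad = grad @ (I - M.T @ M)                     # projected update as in (3.2)
        M = gram_schmidt_rows(M + eta * grad)           # steps 5-6
        z = (np.linalg.norm(M @ D, axis=0) <= tau).astype(float)  # step 7
    return M, z
```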

4. Measuring personal valuation for each instance

The subpopulations we have identified in the previous section might overlap with each other, and in this section, we focus on deriving the valuation of the impact for each instance. An instance might be part of several subpopulations which could potentially have different verdicts on the benefits of the intervention. Some subpopulation that the instance is part of (based on its context) may show a negative effect of the intervention, while some other subpopulation that it is part of shows a positive effect. Thus, given an instance with its context, we need to determine whether intervening will give us better feedback or not. This is what we focus on here.

We assume that, through the procedures described in the previous sections, we have found a set $\mathcal{S}$ of subpopulations of the form $S_{M,c} = \{x : Mx = c\}$ which satisfy (2.1). We derive the valuation $v(x)$ of an instance with context $x$ as follows:

$\displaystyle v(x) \;=\; \sum_{S_{M,c} \in \mathcal{S}\,:\,Mx = c} w_{M,c}\;\bar{\Delta}_{M,c},$   (4.1)

where the weights $w_{M,c}$ are described below and $\bar{\Delta}_{M,c} = \bar{y}^t_{M,c} - \bar{y}^c_{M,c}$ is the average valuation for the subpopulation $S_{M,c}$ as found from the randomized experiment, with $\bar{y}^t_{M,c}$ and $\bar{y}^c_{M,c}$ the average metrics from the test and control group respectively restricted to the subpopulation $S_{M,c}$. Also, note that if no subpopulation satisfies the condition in the sum on the RHS of (4.1), the value $v(x)$ is the empty sum, which is $0$.

Intuitively, the weights should have an inverse relationship with the volatility of the average metric difference $\bar{\Delta}_{M,c}$, as higher volatility means less confidence in our estimate of the average valuation for the subpopulation $S_{M,c}$. Also, the weights should penalize bigger subpopulations, as they reduce the volatility of $\bar{\Delta}_{M,c}$ by adding more data, while the individual valuations of their members are not necessarily close to the average valuation of the subpopulation. Note that now we are only interested in deriving the valuation of the instance with context $x$, and not the average valuation for a subpopulation that it belongs to. Incorporating these intuitions, we compute the weights by solving the following set of equations:

$\displaystyle w_{M,c} \;\propto\; \frac{1}{n_{M,c}\;\hat{\sigma}\!\left(\bar{\Delta}_{M,c}\right)}, \qquad \sum_{S_{M,c} \in \mathcal{S}\,:\,Mx = c} w_{M,c} \;=\; 1,$   (4.2)

where $\hat{\sigma}(\bar{\Delta}_{M,c})$ is the volatility of $\bar{\Delta}_{M,c}$. Note that the instance represented by its context vector $x$ plays a role in defining the weights through (4.1), where the summands are determined by $x$.

Note that we could simplify the term that the weights given in (4.2) are inversely proportional to, as

$\displaystyle n_{M,c}\;\hat{\sigma}\!\left(\bar{\Delta}_{M,c}\right) \;=\; \sqrt{n_{M,c}\left[\hat{\sigma}^{t,2}_{M,c}\left(1-\pi_{M,c}\right) + \hat{\sigma}^{c,2}_{M,c}\,\pi_{M,c}\right]} \;\cdot\; \frac{1}{\sqrt{\pi_{M,c}\left(1-\pi_{M,c}\right)}}.$   (4.3)

In a controlled experiment, as is our base set-up, we could assume the second term on the RHS of (4.3) to be close to a constant for any reasonably large subpopulation, since randomization keeps the test fraction $\pi_{M,c}$ close to the global assignment fraction, and subpopulations need to be reasonably large to be eligible according to (2.1). Thus, the weights are dominated by the first term on the RHS of (4.3), which means subpopulations where the metrics (customer feedback) are less volatile will get higher weights than those with higher volatility, which conforms with the intuition that we trust those subpopulations more where the metrics are more consistent.
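The sketch below computes the DPV of a single context as in (4.1), with weights inversely proportional to $n_{M,c}\,\hat{\sigma}(\bar{\Delta}_{M,c})$ as in (4.2); the data structure and field names are illustrative assumptions.

```python
# Sketch of the DPV computation (4.1)-(4.2) (illustrative structure).
import numpy as np
from dataclasses import dataclass

@dataclass
class Subpop:
    M: np.ndarray    # k x d feature representation
    c: np.ndarray    # level set defining {x : Mx = c}
    delta: float     # average valuation: test mean minus control mean
    n: int           # subpopulation size
    sigma: float     # volatility of delta

def dpv(x: np.ndarray, eligible_subpops: list, tol: float = 1e-8) -> float:
    members = [s for s in eligible_subpops
               if np.allclose(s.M @ x, s.c, atol=tol)]  # subpopulations containing x
    if not members:
        return 0.0                                      # empty sum in (4.1)
    raw = np.array([1.0 / (s.n * s.sigma) for s in members])
    w = raw / raw.sum()                                 # normalized as in (4.2)
    return float(w @ np.array([s.delta for s in members]))
```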

We could potentially derive personal valuations using the methods in Johansson et al. [2016], even though that was not the primary objective of that paper. However, note that the authors in Johansson et al. [2016] used representation learning to remove selection bias, whereas our basic set-up is a randomized experiment, and therefore we do not have any selection bias to begin with. Blindly applying the methods of Johansson et al. [2016] to the results of a randomized experiment would result in unnecessary overfitting. Moreover, the methods described in Johansson et al. [2016] require a known form of prediction function (in Johansson et al. [2016], the authors optimize within a family of prediction functions), whereas we proceed without any assumption on the prediction function and do not even need one for our modeling.

Note that our derivation of personal valuation is based on the eligible subpopulations (see (2.1)) we found from the randomized experiment. Thus, in deriving the personal valuation at context $x$, we automatically discard those subpopulations which contain $x$ but where the first-order difference (difference in means) does not rise above the second-order noise (standard deviation), making the derived personal valuation (DPV) at context $x$ more robust. In the literature on contextual bandit problems (for example, see Li et al. [2010, 2011], Dudík et al. [2012]), research has been carried out to reduce the bias and variance of estimators for each context, but most of it is dedicated to the issue of imbalance in the training data, and the proposed solutions cleverly manage this imbalance. Since our base set-up is a controlled experiment, such imbalances are not a primary concern for us. On the other hand, even with the controlled experiment, our conclusions are hampered by the inherent noise in customer feedback, which we attempt to resolve. Thus our approach to deriving robust personal valuations is fundamentally different from the techniques used in the literature on the contextual bandit problem.

5. Applications

One application is finding the relevant target population for a similar future intervention. Given the features of an instance, we can compute the derived personal valuation (DPV) even if the instance was not part of the randomized experiment used to identify eligible subpopulations using (2.1). We can assume that a subpopulation with relatively higher DPV will provide more incremental metrics in future interventions than a subpopulation with relatively lower DPV. For example, if the intervention is a new method of personalization as opposed to an existing one, we could show personalized content according to the new method to those who have a positive DPV for this intervention and keep running the current method for the rest. The scope of this application is well beyond personalization: for example, one could use it to target the audience for an online ad campaign, where the learning framework is applied to a previous similar ad campaign and the intervention is an ad, as opposed to not running any ad campaign at all. In this application, we can target the populations with the highest DPV to best spend campaign costs on a receptive audience.

The second application is identifying scope for improvement in the intervention in consideration. The groups with low or negative DPV for a given intervention, such as a new method of personalization, represent the population for which the intervention did not perform well. Thus, improvement of the new method of personalization can focus on such groups. Alternatively, future interventions can choose to exclude such groups to optimize benefits, as suggested before.

5.1. Validation framework

For empirical validation of how a DPV-based system design would work, we propose the following validation framework. We run a randomized experiment where predictions from one system, say system A, are used as the intervention, and predictions from another system, say system B, are used as the default option. Customer feedback is collected on all predictions from the experiment. In the validation framework, we divide the randomized experiment data into training and test instances. We select the first 80% of all instances in chronological order as training data and the remaining as test data. We use the training data to identify eligible subpopulations which satisfy (2.1) and then use them to compute the DPV for instances in the test data. For each metric, we divide the test instances into multiple groups in order of their DPV by categorizing based on quartiles of DPV: everyone below the first quartile is one group (Q1), everyone above the first quartile but below the median is another group (Q1-Q2), and so on. Now, for each DPV-based group in the test instances, we note the difference in average metric between those subjected to the intervention and those who were not, and call that the average incremental metric for the group. If the DPV derived for the test instances is actually indicative of how each system will perform compared to the other, we expect to see increasing average incremental metric with increasing DPV. Thus the groups with higher DPV would have more incremental metric than groups with lower DPV. Since the training and test groups are separated chronologically, this is exactly how we could use DPV in system design, where we learn from our past experiments which system works better for which instances, and use the DPV of future instances to decide the best system for each instance, thus optimizing overall performance. Recall that in deriving the DPV for the test instances, we only used the metrics from the training instances and the context of the test instances, but never the metrics of the test instances. As shown in Table 1, in our empirical experiment, the DPV for the test instances was indeed indicative of the system preference of the test instances.
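A minimal sketch of this validation step, assuming a test-instance table with columns dpv, treated, and metric (all illustrative names), could look as follows.

```python
# Sketch of the DPV-group validation (illustrative column names).
import pandas as pd

def incremental_by_dpv_group(test: pd.DataFrame) -> pd.Series:
    groups = pd.qcut(test["dpv"], q=4,
                     labels=["Q1", "Q1-Q2", "Q2-Q3", "Q3-Q4"])

    def incremental(g: pd.DataFrame) -> float:
        # average incremental metric: intervened minus not intervened
        return (g.loc[g["treated"] == 1, "metric"].mean()
                - g.loc[g["treated"] == 0, "metric"].mean())

    return test.groupby(groups).apply(incremental)
```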

5.2. Results

For our empirical experiment, the intervention was Whole Page Personalization (personalizing different modules of the page together) as opposed to separately personalizing different modules of the webpage. We ran a randomized experiment for 22 days, where a randomly selected fraction of online users were part of the experiment. The users in the randomized experiment were randomly divided into a test and a control group. The test group was exposed to Whole Page Personalization and the control group saw independently personalized modules on the same webpage. We considered the click-through rate in a particular module category within the web session as the metric. The metric assumes the value $0$ if there was no click for a given page view of a module category. We considered several other metrics, where each metric corresponds to clicks restricted to one category of content. We used past site activities of the user in different categories as the context/features which characterize a user at the time of the webpage visit (an instance is a user at the time of the webpage visit in this experiment).

We used the first 16 days of the randomized experiment as training data and the last 6 days as test data, as suggested in our validation framework. We identified subpopulations satisfying (2.1) from the training data and used them to derive personal valuations for users in the test data. Note that the features are time-dependent and based on user activity, so even if the same user visits twice or more during the randomized experiment, those visits are treated as different instances. For the experiment below, we fix the level of significance at 30% (recall that the level of significance is a configurable parameter in our model).

We present the results for two metrics on the test instances (Table 1): CTR restricted to Category A and Category B modules. For both Category A and B, we see that the average CTR difference between test and control groups increased significantly as the DPV increased for the groups. This means that the impact of Whole Page Personalization increases with increasing DPV for the test instances. We only report the results for those DPV-based groups for whom the difference in CTR (restricted to the category) between those receiving Whole Page Personalization and those who did not was statistically significantly different from $0$ with a level of significance of 30%. Note that due to the limited data in our test instances, not all DPV-based groups in the test instances produce conclusive results about system preferences. However, wherever they do, they show DPV being indicative of system preferences.

From the training data (first 16 days of the experiment), the Category A CTR (only clicks on content of Category A are considered) difference between those who received Whole Page Personalization and those who did not is -0.13%, with a standard deviation of 0.23%. So, note that Category A CTR is not statistically significantly different for the two groups at a global level. So, in a standard set-up, we would conclude there is no difference between these two experiences in terms of the Category A CTR metric. However, we found smaller subpopulations with conclusive preferences (restricted to those subpopulations, the difference in Category A CTR between those who received Whole Page Personalization and those who did not was statistically significantly different from $0$), and using those, we derived the personal valuations. As the results in Table 1 illustrate, with Category A CTR as the chosen metric, a DPV-based system design would extract much more information from the experiment.

From the training data (first 16 days of the experiment), the Category B CTR (only clicks on content of Category B are considered) difference between those who received Whole Page Personalization and those who did not is 0.34%, with a standard deviation of 0.30%. So, note that Category B CTR is statistically significantly different for the two groups at a global level (recall that the level of significance is fixed at 30%). So, in a standard set-up, we would conclude that Whole Page Personalization is better in terms of the Category B CTR metric and impose that on everyone (assuming we are only interested in Category B CTR). However, as the results in Table 1 illustrate, we found a DPV-based group (Q1) which actually does not prefer Whole Page Personalization (as measured by Category B CTR). In this case as well, we would benefit from a DPV-based system design.

Metric         | DPV-based group | Difference in average CTR | Std. dev. of difference in average CTR | DPV-based group size
Category A CTR | Q1              | 0.058                     | 0.043                                  | 160
Category A CTR | Q2-Q3           | 0.135                     | 0.108                                  | 87
Category B CTR | Q1              | -0.032                    | 0.030                                  | 156
Category B CTR | Q1-Q2           | 0.079                     | 0.047                                  | 163
Table 1. Table for metric: click-through rate (CTR)

6. Conclusion

We have proposed a feature learning algorithm to identify the optimal system for a given instance based on its context. We have shown that our learning could be leveraged to target populations for future interventions as well as to personalize the choice of the optimal system. A framework that leverages such personal preferences over systems will generate predictions in a two-layered approach: first choose the preferred system, and then make the appropriate prediction using the preferred system. Further research is needed to understand how personalized the choice of systems can be, how many systems a framework can support, and so on, in order to build a framework that uses the personalized choice of systems at scale. While we proposed one greedy approach to solve the optimization problem (2.4), further research should explore other, possibly better, ways of solving the optimization problem. Further research could also drive down the computational time. It would also be fruitful to invest in research to estimate the noise in DPV and to use it intelligently in system design.

7. Acknowledgement

We sincerely thank Shyam Rapaka, who actively contributed to the material presented in this paper during his tenure at Walmart Labs.

References

  • Austin [2011] Peter C. Austin. 2011. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research 46, 3 (2011), 399–424.
  • Bang and Robins [2005] Heejung Bang and James M. Robins. 2005. Doubly robust estimation in missing data and causal inference models. Biometrics 61, 4 (2005), 962–973.
  • Bottou et al. [2013] Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Y. Simard, and Ed Snelson. 2013. Counterfactual reasoning and learning systems: the example of computational advertising. The Journal of Machine Learning Research 14, 1 (2013), 3207–3260.
  • Chernozhukov et al. [2013] Victor Chernozhukov, Iván Fernández-Val, and Blaise Melly. 2013. Inference on counterfactual distributions. Econometrica 81, 6 (2013), 2205–2268.
  • Dudík et al. [2012] Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. 2012. Sample-efficient nonstationary policy evaluation for contextual bandits. Proceedings of Uncertainty in Artificial Intelligence (UAI) (2012), 247–254.
  • Dudík et al. [2011] Miroslav Dudík, John Langford, and Lihong Li. 2011. Doubly robust policy evaluation and learning. Proceedings of the 28th International Conference on International Conference on Machine Learning (2011), 1097–1104.
  • Hill [2011] Jennifer L. Hill. 2011. Bayesian Nonparametric Modeling for Causal Inference. Journal of Computational and Graphical Statistics 20, 1 (2011), 217–240.
  • Johansson et al. [2016] Fredrik D. Johansson, Uri Shalit, and David Sontag. 2016. Learning representations for counterfactual inference. Proceedings of the 33rd International Conference on Machine Learning 48 (2016), 3020–3029.
  • Johnson et al. [2017] Garrett A. Johnson, Randall A. Lewis, and Elmar I. Nubbemeyer. 2017. Ghost Ads: Improving the Economics of Measuring Online Ad Effectiveness. Journal of Marketing Research 54, 6 (2017), 867–884.
  • Kohavi et al. [2009] Ron Kohavi, Roger Longbotham, Dan Sommerfield, and Randal M. Henne. 2009. Controlled experiments on the web: survey and practical guide. Data Mining and Knowledge Discovery 18, 1 (2009), 140–181.
  • Lewis [1973] David Lewis. 1973. Causation. The journal of philosophy 70, 17 (1973), 556–567.
  • Li et al. [2010] Lihong Li, Wei Chu, John Langford, and Robert Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. Proceedings of the 19th international conference on World wide web (2010), 661–670.
  • Li et al. [2011] Lihong Li, Wei Chu, John Langford, and Xuanhui Wang. 2011. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. Proceedings of the fourth ACM international conference on Web search and data mining (2011), 297–306.
  • Nocedal and Wright [2006] J. Nocedal and S. J. Wright. 2006. Numerical Optimization. Springer, New York.
  • Rosenbaum and Rubin [1983] Paul Rosenbaum and Donald B. Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika 70, 1 (1983), 41–55.
  • Rubin [1974] Donald B. Rubin. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66, 5 (1974), 688–701.
  • Rubin [2005] Donald B. Rubin. 2005. Causal inference using potential outcomes: design, modeling, decisions. Journal of the American Statistical Association 100, 469 (2005), 322–331.
  • Swaminathan and Joachims [2015] Adith Swaminathan and Thorsten Joachims. 2015. Batch learning from logged bandit feedback through counterfactual risk minimization. The Journal of Machine Learning Research 16, 1 (2015), 1731–1755.
  • Tang et al. [2010] Diane Tang, Ashish Agarwal, Deirdre O’Brien, and Mike Meyer. 2010. Overlapping experiment infrastructure: more, better, faster experimentation. Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining (2010), 17–26.
  • van der Laan and Petersen [2007] Mark J. van der Laan and Maya L. Petersen. 2007. Causal effect models for realistic individualized treatment and intention to treat rules. International Journal of Biostatistics 3, 1 (2007).

Appendix A Derivation of objective function used in optimization problem formulation

The expected value of the amount of data over all subpopulations created by $M$, where each subpopulation is weighted by its relative size, simplifies to

$\displaystyle \sum_{c}\frac{n_{M,c}}{n}\; n_{M,c}\,\pi_{M,c}\left(1-\pi_{M,c}\right) \;=\; \sum_{c}\frac{n^t_{M,c}\,n^c_{M,c}}{n} \;=\; \frac{1}{n}\sum_{i=1}^{n^t}\sum_{j=1}^{n^c}\mathbb{1}\left(Mx^t_i = Mx^c_j\right)$   (A.1)

where the first equality uses $n_{M,c}\,\pi_{M,c}(1-\pi_{M,c}) = n^t_{M,c}\,n^c_{M,c}/n_{M,c}$, the second equality counts the matching (test, control) pairs subpopulation by subpopulation, and $\mathbb{1}(C) = 1$ if $C$ is true and $\mathbb{1}(C) = 0$ otherwise is the indicator function indicating whether the condition $C$ is true or not. Recall, the set $P$ denotes the population of instances.
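As a sanity check, the identity in (A.1) can be verified numerically on toy data; the sketch below compares the size-weighted average of $n_{M,c}\,\pi_{M,c}(1-\pi_{M,c})$ with the pair count, and everything in it is illustrative.

```python
# Toy numerical check of the identity (A.1).
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
X_t = rng.integers(0, 3, size=(40, 5)).astype(float)   # toy test contexts
X_c = rng.integers(0, 3, size=(60, 5)).astype(float)   # toy control contexts
M = rng.integers(0, 2, size=(2, 5)).astype(float)      # toy feature representation
n = len(X_t) + len(X_c)

T, C = X_t @ M.T, X_c @ M.T
t, c = Counter(map(tuple, T)), Counter(map(tuple, C))  # counts per subpopulation

# LHS: size-weighted average of n_S * pi_S * (1 - pi_S) over subpopulations
lhs = sum(((t[k] + c[k]) / n) * (t[k] * c[k] / (t[k] + c[k]))
          for k in set(t) | set(c))

# RHS: number of (test, control) pairs mapped to the same point, divided by n
rhs = sum(np.array_equal(a, b) for a in T for b in C) / n

assert np.isclose(lhs, rhs)
```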