1 Introduction
Statistical machine learning technologies in the real world are never without a purpose. Using their predictions, humans or machines make decisions whose circuitous consequences often violate the modeling assumptions that justified the system design in the first place.
Such contradictions appear very clearly in the case of the learning systems that power web-scale applications such as search engines, ad placement engines, or recommendation systems. For instance, the placement of advertisements on the result pages of Internet search engines depends on the bids of advertisers and on scores computed by statistical machine learning systems. Because the scores affect the contents of the result pages proposed to the users, they directly influence the occurrence of clicks and the corresponding advertiser payments. They also have important indirect effects. Ad placement decisions impact the satisfaction of the users and therefore their willingness to frequent this web site in the future. They also impact the return on investment observed by the advertisers and therefore their future bids. Finally, they change the nature of the data collected for training the statistical models in the future.
These complicated interactions are clarified by important theoretical works. Under simplified assumptions, mechanism design (Myerson, 1981) leads to an insightful account of the advertiser feedback loop (Varian, 2007; Edelman et al., 2007). Under simplified assumptions, multi-armed bandit theory (Robbins, 1952; Auer et al., 2002; Langford and Zhang, 2008) and reinforcement learning (Sutton and Barto, 1998) describe the exploration/exploitation dilemma associated with the training feedback loop. However, none of these approaches gives a complete account of the complex interactions found in real-life systems.

This work is motivated by a very practical observation: in the data collected during the operation of an ad placement engine, all these fundamental insights manifest themselves in the form of correlation/causation paradoxes. Using the ad placement example as a model of our problem class, we therefore argue that the language and the methods of causal inference provide flexible means to describe such complex machine learning systems and give sound answers to the practical questions
facing the designer of such a system. Is it useful to pass a new input signal to the statistical model? Is it worthwhile to collect and label a new training set? What about changing the loss function or the learning algorithm? In order to answer such questions and improve the operational performance of the learning system, one needs to unravel how the information produced by the statistical models traverses the web of causes and effects and eventually produces measurable performance metrics.
Readers with an interest in causal inference will find in this paper (i) a real-world example demonstrating the value of causal inference for large-scale machine learning applications, (ii) causal inference techniques applicable to continuously valued variables with meaningful confidence intervals, and (iii) quasi-static analysis techniques for estimating how small interventions affect certain causal equilibria. Readers with an interest in real-life applications will find (iv) a selection of practical counterfactual analysis techniques applicable to many real-life machine learning systems. Readers with an interest in computational advertising will find a principled framework that (v) explains how to soundly use machine learning techniques for ad placement, and (vi) conceptually connects machine learning and auction theory in a compelling manner.

The paper is organized as follows. Section 2 gives an overview of the advertisement placement problem which serves as our main example. In particular, we stress some of the difficulties encountered when one approaches such a problem without a principled perspective. Section 3 provides a condensed review of the essential concepts of causal modeling and inference. Section 4 centers on formulating and answering counterfactual questions such as “how would the system have performed during the data collection period if certain interventions had been carried out on the system?” We describe importance sampling methods for counterfactual analysis, with clear conditions of validity and confidence intervals. Section 5 illustrates how the structure of the causal graph reveals opportunities to exploit prior information and vastly improve the confidence intervals. Section 6 describes how counterfactual analysis provides essential signals that can drive learning algorithms. Assume that we have identified interventions that would have caused the system to perform well during the data collection period. Which guarantee can we obtain on the performance of these same interventions in the future? Section 7 presents counterfactual differential techniques for the study of equilibria. Using data collected when the system is at equilibrium, we can estimate how a small intervention displaces the equilibrium. This provides an elegant and effective way to reason about long-term feedback effects. Various appendices complete the main text with information that we think is more relevant to readers with specific backgrounds.
2 Causation Issues in Computational Advertising
After giving an overview of the advertisement placement problem, which serves as our main example, this section illustrates some of the difficulties that arise when one does not pay sufficient attention to the causal structure of the learning system.
2.1 Advertisement Placement
All Internet users are now familiar with the advertisement messages that adorn popular web pages. Advertisements are particularly effective on search engine result pages because users who are searching for something are good targets for advertisers who have something to offer. Several actors take part in this Internet advertisement game:

Advertisers create advertisement messages, and place bids that describe how much they are willing to pay to see their ads displayed or clicked.

Publishers provide attractive web services, such as, for instance, an Internet search engine. They display selected ads and expect to receive payments from the advertisers. The infrastructure to collect the advertiser bids and select ads is sometimes provided by an advertising network on behalf of its affiliated publishers. For the purposes of this work, we simply consider a publisher large enough to run its own infrastructure.

Users reveal information about their current interests, for instance, by entering a query in a search engine. They are offered web pages that contain a selection of ads (figure 1). Users sometimes click on an advertisement and are transported to a web site controlled by the advertiser where they can initiate some business.
A conventional bidding language is necessary to precisely define under which conditions an advertiser is willing to pay the bid amount. In the case of Internet search advertisement, each bid specifies (a) the advertisement message, (b) a set of keywords, (c) one of several possible matching criteria between the keywords and the user query, and (d) the maximal price the advertiser is willing to pay when a user clicks on the ad after entering a query that matches the keywords according to the specified criterion.
Whenever a user visits a publisher web page, an advertisement placement engine runs an auction in real time in order to select winning ads, determine where to display them in the page, and compute the prices charged to advertisers, should the user click on their ad. Since the placement engine is operated by the publisher, it is designed to further the interests of the publisher. Fortunately for everyone else, the publisher must balance short term interests, namely the immediate revenue brought by the ads displayed on each web page, and long term interests, namely the future revenues resulting from the continued satisfaction of both users and advertisers.
Auction theory explains how to design a mechanism that optimizes the revenue of the seller of a single object (Myerson, 1981; Milgrom, 2004) under various assumptions about the information available to the buyers regarding the intentions of the other buyers. In the case of the ad placement problem, the publisher runs multiple auctions and sells opportunities to receive a click. When nearly identical auctions occur thousands of times per second, it is tempting to consider that the advertisers have perfect information about each other. This assumption gives support to the popular generalized second price rank-score auction (Varian, 2007; Edelman et al., 2007):

Let $x$ represent the auction context information, such as the user query, the user profile, the date, the time, etc. The ad placement engine first determines all eligible ads and the corresponding bids on the basis of the auction context $x$ and of the matching criteria specified by the advertisers.

For each selected ad $i$ and each potential position $p$ on the web page, a statistical model outputs the estimate $q_{i,p}$ of the probability that ad $i$ displayed in position $p$ receives a user click. The rank-score $r_{i,p} = b_i\, q_{i,p}$, where $b_i$ is the bid associated with ad $i$, then represents the purported value associated with placing ad $i$ at position $p$.
Let $L$ represent a possible ad layout, that is, a set of positions that can simultaneously be populated with ads, and let $\mathcal{L}$ be the set of possible ad layouts, including of course the empty layout. The optimal layout and the corresponding ads are obtained by maximizing the total rank-score

$$\max_{L \in \mathcal{L}} \;\max_{m} \;\sum_{p \in L} r_{m(p),\,p} \,, \qquad (1)$$

where $m$ ranges over the mappings that assign a distinct eligible ad $m(p)$ to each position $p \in L$, subject to reserve constraints

$$\forall p \in L\,, \quad r_{m(p),\,p} \;\geq\; R_p \,, \qquad (2)$$

and also subject to diverse policy constraints, such as, for instance, preventing the simultaneous display of multiple ads belonging to the same advertiser. Under mild assumptions, this discrete maximization problem is amenable to computationally efficient greedy algorithms (see appendix A).

The advertiser payment associated with a user click is computed using the generalized second price (GSP) rule: the advertiser pays the smallest bid that it could have entered without changing the solution of the discrete maximization problem, all other bids remaining equal. In other words, the advertiser could not have manipulated its bid and obtained the same treatment for a better price.
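To make these steps concrete, here is a minimal sketch of the rank-score auction and the GSP pricing rule in the simplified setting where the click probability estimates factorize as $q_{i,p} = \beta_i\,\gamma_p$ for an ad-dependent factor $\beta_i$ and a position-dependent factor $\gamma_p$; under such an assumption the greedy strategy of appendix A (sort by rank-score and fill positions from top to bottom) applies to a single column of positions. All names and numbers are illustrative stand-ins, not the actual placement engine.

```python
# Illustrative rank-score auction for one column of positions, assuming
# q[i][p] = beta[i] * gamma[p] (ad factor times position factor), so the
# greedy strategy of appendix A applies. Not the actual placement engine.

def auction(ads, gamma, reserves):
    """ads: list of (ad_id, bid, beta). Returns [(ad_id, position, cpc)]."""
    ranked = sorted(ads, key=lambda a: a[1] * a[2], reverse=True)
    slate = []
    for p, (ad_id, bid, beta) in enumerate(ranked[: len(gamma)]):
        score = bid * beta * gamma[p]               # rank-score r_{i,p}
        if score < reserves[p]:
            continue                                # reserve not cleared
        # GSP price per click: the smallest bid keeping the same outcome
        # must beat the next rank-score and the reserve for this position.
        next_score = (ranked[p + 1][1] * ranked[p + 1][2] * gamma[p]
                      if p + 1 < len(ranked) else 0.0)
        cpc = max(next_score, reserves[p]) / (beta * gamma[p])
        slate.append((ad_id, p, cpc))
    return slate

print(auction([("a", 2.0, 0.10), ("b", 1.5, 0.12), ("c", 4.0, 0.02)],
              gamma=[1.0, 0.5], reserves=[0.05, 0.03]))
```

By construction, the price per click never exceeds the advertiser's own bid, and it rises to the reserve when no runner-up competes for the position.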
Under the perfect information assumption, the analysis suggests that the publisher simply needs to find which reserve prices yield the best revenue per auction. However, the total revenue of the publisher also depends on the traffic experienced by its web site. Displaying an excessive number of irrelevant ads can train users to ignore the ads, and can also drive them to competing web sites. Advertisers can artificially raise the rankscores of irrelevant ads by temporarily increasing the bids. Indelicate advertisers can create deceiving advertisements that elicit many clicks but direct users to spam web sites. Experience shows that the continued satisfaction of the users is more important to the publisher than it is to the advertisers.
Therefore the generalized second price rank-score auction has evolved. Rank-scores have been augmented with terms that quantify the user satisfaction or the ad relevance. Bids receive adaptive discounts in order to deal with situations where the perfect information assumption is unrealistic. These adjustments are driven by additional statistical models. The ad placement engine should therefore be viewed as a complex learning system interacting with both users and advertisers.
2.2 Controlled Experiments
The designer of such an ad placement engine faces the fundamental question of testing whether a proposed modification of the ad placement engine results in an improvement of the operational performance of the system.
The simplest way to answer such a question is to try the modification. The basic idea is to randomly split the users into treatment and control groups (Kohavi et al., 2008). Users from the control group see web pages generated using the unmodified system. Users from the treatment groups see web pages generated using alternate versions of the system. Monitoring various performance metrics for a couple of months usually gives sufficient information to reliably decide which variant of the system delivers the most satisfactory performance.
Modifying an advertisement placement engine elicits reactions from both the users and the advertisers. Whereas it is easy to split users into treatment and control groups, splitting advertisers into treatment and control groups demands special attention because each auction involves multiple advertisers (Charles et al., 2012). Simultaneously controlling for both users and advertisers is probably impossible.
Controlled experiments also suffer from several drawbacks. They are expensive because they demand a complete implementation of the proposed modifications. They are slow because each experiment typically demands a couple of months. Finally, although there are elegant ways to efficiently run overlapping controlled experiments on the same traffic (Tang et al., 2010), they are limited by the volume of traffic available for experimentation.
It is therefore difficult to rely on controlled experiments during the conception phase of potential improvements to the ad placement engine. It is similarly difficult to use controlled experiments to drive the training algorithms associated with click probability estimation models. Cheaper and faster statistical methods are needed to drive these essential aspects of the development of an ad placement engine. Unfortunately, interpreting cheap and fast data can be very deceiving.
2.3 Confounding Data
Assessing the consequence of an intervention using statistical data is generally challenging because it is often difficult to determine whether the observed effect is a simple consequence of the intervention or has other uncontrolled causes.
For instance, the empirical comparison of certain kidney stone treatments illustrates this difficulty (Charig et al., 1986). Table 1 reports the success rates observed on two groups of 350 patients treated with respectively open surgery (treatment A, with 78% success) and percutaneous nephrolithotomy (treatment B, with 83% success). Although treatment B seems more successful, it was more frequently prescribed to patients suffering from small kidney stones, a less serious condition. Did treatment B achieve a high success rate because of its intrinsic qualities or because it was preferentially applied to less severe cases? Further splitting the data according to the size of the kidney stones reverses the conclusion: treatment A now achieves the best success rate for both patients suffering from large kidney stones and patients suffering from small kidney stones. Such an inversion of the conclusion is called Simpson’s paradox (Simpson, 1951).
                                                Overall          Patients with     Patients with
                                                                 small stones      large stones

  Treatment A: Open surgery                     78% (273/350)    93% (81/87)       73% (192/263)
  Treatment B: Percutaneous nephrolithotomy     83% (289/350)    87% (234/270)     69% (55/80)
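The reversal is pure arithmetic and can be checked directly from the counts in the table; the following snippet reproduces the overall and per-stratum success rates.

```python
# Checking Simpson's paradox on the counts of Table 1.
data = {  # treatment -> stratum -> (successes, patients)
    "A": {"small": (81, 87), "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}
for t, strata in data.items():
    s = sum(k for k, _ in strata.values())
    n = sum(m for _, m in strata.values())
    rates = {name: round(100 * k / m) for name, (k, m) in strata.items()}
    print(t, "overall:", round(100 * s / n), "%", rates)
# A overall: 78 % {'small': 93, 'large': 73}   <- A wins in each stratum
# B overall: 83 % {'small': 87, 'large': 69}   <- B wins overall
```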
The stone size in this study is an example of a confounding variable, that is, an uncontrolled variable whose consequences pollute the effect of the intervention. Doctors knew the size of the kidney stones, chose to treat the healthier patients with the least invasive treatment B, and therefore caused treatment B to appear more effective than it actually was. If we now decide to apply treatment B to all patients irrespective of the stone size, we break the causal path connecting the stone size to the outcome, we eliminate the illusion, and we will experience disappointing results.
When we suspect the existence of a confounding variable, we can split the contingency tables and reach improved conclusions. Unfortunately we cannot fully trust these conclusions unless we are certain to have taken into account all confounding variables. The real problem therefore comes from the confounding variables we do not know.
Randomized experiments arguably provide the only correct solution to this problem (see Stigler, 1992). The idea is to randomly choose whether the patient receives treatment A or treatment B. Because this random choice is independent of all the potential confounding variables, known and unknown, they cannot pollute the observed effect of the treatments (see also section 4.2). This is why controlled experiments in ad placement (section 2.2) randomly distribute users between treatment and control groups, and this is also why, in the case of an ad placement engine, we should be somewhat concerned by the practical impossibility of randomly distributing both users and advertisers.
2.4 Confounding Data in Ad Placement
Let us return to the question of assessing the value of passing a new input signal to the ad placement engine click prediction model. Section 2.1 outlines a placement method where the click probability estimates $q_{i,p}$ depend on the ad $i$ and the position $p$ under consideration, but do not depend on the other ads displayed on the page. We now consider replacing this model by a new model that additionally uses the estimated click probability of the top mainline ad to estimate the click probability of the second mainline ad (figure 1). We would like to estimate the effect of such an intervention using existing statistical data.
We have collected ad placement data for Bing (http://bing.com) search result pages served during three consecutive hours on a certain slice of traffic. Let $q_1$ and $q_2$ denote the click probability estimates computed by the existing model for respectively the top mainline ad and the second mainline ad. After excluding pages displaying fewer than two mainline ads, we form two groups of 2000 pages randomly picked among those satisfying the condition “$q_1$ low” for the first group and “$q_1$ high” for the second group. Table 2 reports the click counts and frequencies observed on the second mainline ad in each group. Although the overall numbers show that users click more often on the second mainline ad when the top mainline ad has a high click probability estimate $q_1$, this conclusion is reversed when we further split the data according to the click probability estimate $q_2$ of the second mainline ad.
                 Overall            $q_2$ low          $q_2$ high

  $q_1$ low      6.2% (124/2000)    5.1% (92/1823)     18.1% (32/176)
  $q_1$ high     7.5% (149/2000)    4.8% (71/1500)     15.6% (78/500)
Despite superficial similarities, this example is considerably more difficult to interpret than the kidney stone example. The overall click counts show that the actual click-through rate of the second mainline ad is positively correlated with the click probability estimate $q_1$ on the top mainline ad. Does this mean that we can increase the total number of clicks by placing regular ads below frequently clicked ads?
Remember that the click probability estimates depend on the search query, which itself depends on the user intention. The most likely explanation is that pages with a high $q_1$ are frequently associated with more commercial searches and therefore receive more ad clicks on all positions. The observed correlation occurs because the presence of a click and the magnitude of the click probability estimate $q_1$ have a common cause: the user intention. Meanwhile, the click probability estimate $q_2$ returned by the current model for the second mainline ad also depends on the query and therefore on the user intention. Therefore, assuming that this dependence has comparable strength, and assuming that there are no other causal paths, splitting the counts according to the magnitude of $q_2$ factors out the effects of this common confounding cause. We then observe a negative correlation which now suggests that a frequently clicked top mainline ad has a negative impact on the click-through rate of the second mainline ad.
If this is correct, we would probably increase the accuracy of the click prediction model by switching to the new model. This would decrease the click probability estimates for ads placed in the second mainline position on commercial search pages. These ads are then less likely to clear the reserve and therefore more likely to be displayed in the less attractive sidebar. The net result is probably a loss of clicks and a loss of money despite the higher quality of the click probability model. Although we could tune the reserve prices to compensate for this unfortunate effect, nothing in this data tells us where the performance of the ad placement engine will land. Furthermore, unknown confounding variables might completely reverse our conclusions.
Making sense out of such data is just too complex!
2.5 A Better Way
It should now be obvious that we need a more principled way to reason about the effect of potential interventions. We provide one such more principled approach using the causal inference machinery (section 3). The next step is then the identification of a class of questions that are sufficiently expressive to guide the designer of a complex learning system, and sufficiently simple to be answered using data collected in the past using adequate procedures (section 4).
A machine learning algorithm can then be viewed as an automated way to generate questions about the parameters of a statistical model, obtain the corresponding answers, and update the parameters accordingly (section 6). Learning algorithms derived in this manner are very flexible: human designers and machine learning algorithms can cooperate seamlessly because they rely on similar sources of information.
3 Modeling Causal Systems
When we point out a causal relationship between two events, we describe what we expect to happen to the event we call the effect, should an external operator manipulate the event we call the cause. Manipulability theories of causation (von Wright, 1971; Woodward, 2005) raise this commonsense insight to the status of a definition of the causal relation. Difficult adjustments are then needed to interpret statements involving causes that we can only observe through their effects, “because they love me,” or that are not easily manipulated, “because the earth is round.”
Modern statistical thinking makes a clear distinction between the statistical model and the world. The actual mechanisms underlying the data are considered unknown. The statistical models do not need to reproduce these mechanisms to emulate the observable data (Breiman, 2001). Better models are sometimes obtained by deliberately not reproducing the true mechanisms (Vapnik, 1982, section 8.6). We can approach the manipulability puzzle in the same spirit by viewing causation as a reasoning model (Bottou, 2011) rather than a property of the world. Causes and effects are simply the pieces of an abstract reasoning game. Causal statements that are not empirically testable acquire validity when they are used as intermediate steps when one reasons about manipulations or interventions amenable to experimental validation.
This section presents the rules of this reasoning game. We largely follow the framework proposed by Pearl (2009) because it gives a clear account of the connections between causal models and probabilistic models.
3.1 The Flow of Information
Figure 2 gives a deterministic description of the operation of the ad placement engine. Variable $u$ represents the user and his or her intention in an unspecified manner. The query and query context $x$ is then expressed as an unknown function of $u$ and of a noise variable $\varepsilon_1$. Noise variables in this framework are best viewed as independent sources of randomness useful for modeling a nondeterministic causal dependency. We shall only mention them when they play a specific role in the discussion. The set $a$ of eligible ads and the corresponding bids $b$ are then derived from the query $x$ and the ad inventory $v$ supplied by the advertisers. Statistical models then compute a collection of scores $q$ such as the click probability estimates $q_{i,p}$ and the reserves $R_p$ introduced in section 2.1. The placement logic uses these scores to generate the “ad slate” $s$, that is, the set of winning ads and their assigned positions. The corresponding click prices $c$ are computed. The set of user clicks $y$ is expressed as an unknown function of the ad slate $s$ and the user intent $u$. Finally the revenue $z$ is expressed as another function of the clicks $y$ and the prices $c$. Figure 2 denotes the functions implementing these successive dependencies by $f_1$ to $f_7$.
Such a system of equations is named a structural equation model (Wright, 1921). Each equation asserts a functional dependency between an effect, appearing on the left-hand side of the equation, and its direct causes, appearing on the right-hand side as arguments of the function. Some of these causal dependencies are unknown. Although we postulate that the effect can be expressed as some function of its direct causes, we do not know the form of this function. For instance, the designer of the ad placement engine knows functions $f_2$ to $f_5$ and $f_7$ because he has designed them. However, he does not know the functions $f_1$ and $f_6$ because whoever designed the user did not leave sufficient documentation.
Figure 3 represents the directed causal graph associated with the structural equation model. Each arrow connects a direct cause to its effect. The noise variables are omitted for simplicity. The structure of this graph reveals fundamental assumptions about our model. For instance, the user clicks $y$ do not directly depend on the scores $q$ or the prices $c$ because users do not have access to this information.
We hold as a principle that causation obeys the arrow of time: causes always precede their effects. Therefore the causal graph must be acyclic. Structural equation models then support two fundamental operations, namely simulation and intervention.

Simulation – Let us assume that we know both the exact form of all functional dependencies and the value of all exogenous variables, that is, the variables that never appear in the left-hand side of an equation. We can compute the values of all the remaining variables by applying the equations in their natural time sequence.

Intervention – As long as the causal graph remains acyclic, we can construct derived structural equation models using arbitrary algebraic manipulations of the system of equations. For instance, we can clamp a variable to a constant value by rewriting the right-hand side of the corresponding equation as the specified constant value.
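The following toy sketch illustrates both operations on a drastically simplified placement model; the equations and variable names are illustrative stand-ins, not the system of figure 2.

```python
# Toy structural equation model: simulation applies the equations in their
# natural time sequence; an intervention clamps one equation to a constant.
# All functions below are illustrative stand-ins.
import random

def run(score_equation):
    u = random.choice(["commercial", "navigational"])   # exogenous variable
    x = (u, random.random())                            # x = f1(u, eps1)
    q = score_equation(x)                               # q = f3(x, ...)
    s = "mainline" if q > 0.1 else "sidebar"            # s = f4(q, ...)
    y = int(random.random() < (q if u == "commercial" else q / 2))
    return {"u": u, "x": x, "q": q, "s": s, "y": y}

# Simulation with the original scoring equation.
observed = run(lambda x: 0.05 + 0.1 * x[1])
# Intervention: rewrite the right-hand side of the scoring equation as a
# constant, leaving every other equation untouched.
clamped = run(lambda x: 0.2)
```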
The algebraic manipulation of the structural equation models provides a powerful language to describe interventions on a causal system. This is not a coincidence. Many aspects of the mathematical notation were invented to support causal inference in classical mechanics. However, we no longer have to interpret the variable values as physical quantities: the equations simply describe the flow of information in the causal model (Wiener, 1948).
3.2 The Isolation Assumption
Let us now turn our attention to the exogenous variables, that is, variables that never appear in the left-hand side of an equation of the structural model. Leibniz’s principle of sufficient reason claims that there are no facts without causes. This suggests that the exogenous variables are the effects of a network of causes not expressed by the structural equation model. For instance, the user intent $u$ and the ad inventory $v$ in figure 3 have temporal correlations because both users and advertisers worry about their budgets when the end of the month approaches. Any structural equation model should then be understood in the context of a larger structural equation model potentially describing all things in existence.
Ads served on a particular page contribute to the continued satisfaction of both users and advertisers, and therefore have an effect on their willingness to use the services of the publisher in the future. The ad placement structural equation model shown in figure 2 only describes the causal dependencies for a single page and therefore cannot account for such effects. Consider however a very large structural equation model containing a copy of the page-level model for every web page ever served by the publisher. Figure 4 shows how we can thread the page-level models corresponding to pages served to the same user. Similarly we could model how advertisers track the performance and the cost of their advertisements and model how their satisfaction affects their future bids. The resulting causal graphs can be very complex. Part of this complexity results from timescale differences. Thousands of search pages are served in a second. Each page contributes a little to the continued satisfaction of one user and a few advertisers. The accumulation of these contributions produces measurable effects after a few weeks.
Many of the functional dependencies expressed by the structural equation model are left unspecified. Without direct knowledge of these functions, we must reason using statistical data. The most fundamental statistical data is collected from repeated trials that are assumed independent. When we consider the large structural equation model of everything, we can only have one large trial producing a single data point.^3 It is therefore desirable to identify repeated patterns of identical equations that can be viewed as repeated independent trials. Therefore, when we study a structural equation model representing such a pattern, we need to make an additional assumption to express the idea that the outcome of one trial does not affect the other trials. We call such an assumption an isolation assumption by analogy with thermodynamics.^4 This can be achieved by assuming that the exogenous variables are independently drawn from an unknown but fixed joint probability distribution. This assumption cuts the causation effects that could flow through the exogenous variables.

The noise variables are also exogenous variables acting as independent sources of randomness. The noise variables are useful to represent conditional distributions such as $P(x \,|\, u)$ using equations such as $x = f_1(u, \varepsilon_1)$. Therefore, we also assume joint independence between all the noise variables and any of the named exogenous variables.^5 For instance, in the case of the ad placement model shown in figure 2, we assume that the joint distribution of the exogenous variables factorizes as

$$P(u, v, \varepsilon_1, \dots, \varepsilon_7) \;=\; P(u, v)\; P(\varepsilon_1) \cdots P(\varepsilon_7)\,. \qquad (3)$$

^3 See also the discussion on reinforcement learning, section 3.5.
^4 The concept of isolation is pervasive in physics. An isolated system in thermodynamics (Reichl, 1998, section 2.D) or a closed system in mechanics (Landau and Lifshitz, 1969, §5) evolves without exchanging mass or energy with its surroundings. Experimental trials involving systems that are assumed isolated may differ in their initial setup and therefore have different outcomes. Assuming isolation implies that the outcome of each trial cannot affect the other trials.
^5 Rather than letting two noise variables display measurable statistical dependencies because they share a common cause, we prefer to name the common cause and make the dependency explicit in the graph.
Since an isolation assumption is only true up to a point, it should be expressed clearly and remain under constant scrutiny. We must therefore measure additional performance metrics that reveal how well the isolation assumption holds. For instance, the ad placement structural equation model and the corresponding causal graph (figures 2 and 3) do not take user feedback or advertiser feedback into account. Measuring the revenue is not enough because we could easily generate revenue at the expense of the satisfaction of the users and advertisers. When we evaluate interventions under such an isolation assumption, we also need to measure a battery of additional quantities that act as proxies for the user and advertiser satisfaction. Noteworthy examples include ad relevance estimated by human judges, and advertiser surplus estimated from the auctions (Varian, 2009).
3.3 Markov Factorization
Conceptually, we can draw a sample of the exogenous variables using the distribution specified by the isolation assumption, and we can then generate values for all the remaining variables by simulating the structural equation model.
This process defines a generative probabilistic model representing the joint distribution of all variables in the structural equation model. The distribution readily factorizes as the product of the joint probability of the named exogenous variables, and, for each equation in the structural equation model, the conditional probability of the effect given its direct causes (Spirtes et al., 1993; Pearl, 2000). As illustrated by figures 5 and 6, this Markov factorization connects the structural equation model that describes causation, and the Bayesian network that describes the joint probability distribution followed by the variables under the isolation assumption.^6

^6 Bayesian networks are directed graphs representing the Markov factorization of a joint probability distribution: the arrows no longer have a causal interpretation.
Structural equation models and Bayesian networks appear so intimately connected that it could be easy to forget the differences. The structural equation model is an algebraic object. As long as the causal graph remains acyclic, algebraic manipulations are interpreted as interventions on the causal system. The Bayesian network is a generative statistical model representing a class of joint probability distributions, and, as such, does not support algebraic manipulations. However, the symbolic representation of its Markov factorization is an algebraic object, essentially equivalent to the structural equation model.
3.4 Identification, Transportation, and Transfer Learning
Consider a causal system represented by a structural equation model with some unknown functional dependencies. Subject to the isolation assumption, data collected during the operation of this system follows the distribution described by the corresponding Markov factorization. Let us first assume that this data is sufficient to identify the joint distribution of the subset of variables we can observe. We can intervene on the system by clamping the value of some variables. This amounts to replacing the right-hand side of the corresponding structural equations by constants. The joint distribution of the variables is then described by a new Markov factorization that shares many factors with the original Markov factorization. Which conditional probabilities associated with this new distribution can we express using only conditional probabilities identified during the observation of the original system? This is called the identifiability problem. More generally, we can consider arbitrarily complex manipulations of the structural equation model, and we can perform multiple experiments involving different manipulations of the causal system. Which conditional probabilities pertaining to one experiment can be expressed using only conditional probabilities identified during the observation of other experiments? This is called the transportability problem.
Pearl’s do-calculus completely solves the identifiability problem and provides useful tools to address many instances of the transportability problem (see Pearl, 2012). Assuming that we know the conditional probability distributions involving observed variables in the original structural equation model, do-calculus allows us to derive conditional distributions pertaining to the manipulated structural equation model.

Unfortunately, we must further distinguish the conditional probabilities that we know (because we designed them) from those that we estimate from empirical data. This distinction is important because estimating the distribution of continuous or high cardinality variables is notoriously difficult. Furthermore, do-calculus often combines the estimated probabilities in ways that amplify estimation errors. This happens when the manipulated structural equation model exercises the variables in ways that were rarely observed in the data collected from the original structural equation model.
3.5 Special Cases
Three special cases of causal models are particularly relevant to this work.

In the multi-armed bandit (Robbins, 1952), a user-defined policy function determines the distribution of the action $a$, and an unknown reward function determines the distribution of the outcome $y$ given the action (figure 7). In order to maximize the accumulated rewards, the player must construct policies that balance the exploration of the action space with the exploitation of the best action identified so far (Auer et al., 2002; Audibert et al., 2007; Seldin et al., 2012).

In the contextual bandit (Langford and Zhang, 2008), the context $x$ is an exogenous variable; a user-defined policy function determines the distribution of the action $a$ given the context, and an unknown reward function determines the distribution of the outcome $y$ given both the context and the action (figure 8).

Both the multi-armed bandit and the contextual bandit are special cases of reinforcement learning (Sutton and Barto, 1998). In essence, a Markov decision process is a sequence of contextual bandits where the context is no longer an exogenous variable but a state variable that depends on the previous states and actions (figure 9). Note that the policy function, the reward function, and the transition function are independent of time. All the time dependencies are expressed using the states $s_t$.
These special cases have increasing generality. Many simple structural equation models can be reduced to a contextual bandit problem using appropriate definitions of the context $x$, the action $a$ and the outcome $y$. For instance, assuming that the prices are discrete, the ad placement structural equation model shown in figure 2 reduces to a contextual bandit problem with context $(u, v)$, actions $(s, c)$ and reward $z$. Similarly, given a sufficiently intricate definition of the state variables $s_t$, all structural equation models with discrete variables can be reduced to a reinforcement learning problem. Such reductions lose the fine structure of the causal graph. We show in section 5 how this fine structure can in fact be leveraged to obtain more information from the same experiments.
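To make the contextual bandit view concrete, here is a hypothetical sketch of how one round could be logged; recording the probability (propensity) of the sampled action is what later enables the counterfactual estimators of section 4. The `policy` and `reward` callables are assumed abstractions, not actual system components.

```python
# Hypothetical logging of one contextual bandit round. The propensity of
# the sampled action is stored alongside the context, action, and reward.
import random

def bandit_round(policy, reward):
    x = random.random()                       # exogenous context
    probs = policy(x)                         # dict: action -> probability
    actions = list(probs)
    a = random.choices(actions, weights=[probs[k] for k in actions])[0]
    y = reward(x, a)                          # unknown reward function
    return {"x": x, "a": a, "y": y, "propensity": probs[a]}

log = [bandit_round(lambda x: {"show": 0.8, "skip": 0.2},
                    lambda x, a: int(a == "show" and x > 0.5))
       for _ in range(5)]
```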
Modern reinforcement learning algorithms (see Sutton and Barto, 1998) leverage the assumption that the policy function, the reward function, the transition function, and the distributions of the corresponding noise variables, are independent of time. This invariance property provides great benefits when the observed sequences of actions and rewards are long in comparison with the size of the state space. Only section 7 in this contribution presents methods that take advantage of such an invariance. The general question of leveraging arbitrary functional invariances in causal graphs is left for future work.
4 Counterfactual Analysis
We now return to the problem of formulating and answering questions about the value of proposed changes of a learning system. Assume for instance that we consider replacing the score computation model $M$ of an ad placement engine by an alternate model $M^*$. We seek an answer to the conditional question:
“How will the system perform if we replace model $M$ by model $M^*$?”
Given sufficient time and sufficient resources, we can obtain the answer using a controlled experiment (section 2.2). However, instead of carrying out a new experiment, we would like to obtain an answer using data that we have already collected in the past.
“How would the system have performed if, when the data was collected, we had replaced model $M$ by model $M^*$?”
The answer to this counterfactual question is of course a counterfactual statement that describes the system performance subject to a condition that did not happen.
Counterfactual statements challenge ordinary logic because they depend on a condition that is known to be false. Although the material implication “$A \Rightarrow B$” is always true when assertion $A$ is false, we certainly do not mean for all counterfactual statements to be true. Lewis (1973) navigates this paradox using a modal logic in which a counterfactual statement describes the state of affairs in an alternate world that resembles ours except for the specified differences. Counterfactuals indeed offer many subtle ways to qualify such alternate worlds. For instance, we can easily describe isolation assumptions (section 3.2) in a counterfactual question:
“How would the system have performed if, when the data was collected, we had replaced model $M$ by model $M^*$ without incurring user or advertiser reactions?”
The fact that we could not have changed the model without incurring the user and advertiser reactions does not matter any more than the fact that we did not replace model $M$ by model $M^*$ in the first place. This does not prevent us from using counterfactual statements to reason about causes and effects. Counterfactual questions and statements provide a natural framework to express and share our conclusions.
The remaining text in this section explains how we can answer certain counterfactual questions using data collected in the past. More precisely, we seek to estimate performance metrics that can be expressed as expectations with respect to the distribution that would have been observed if the counterfactual conditions had been in force.^7

^7 Although counterfactual expectations can be viewed as expectations of unit-level counterfactuals (Pearl, 2009, definition 4), they elude the semantic subtleties of unit-level counterfactuals and can be measured with randomized experiments (see section 4.2).
4.1 Replaying Empirical Data
Figure 10 shows the causal graph associated with a simple image recognition system. The classifier takes an image $x$ and produces a prospective class label $\hat{y}$. The loss $\ell(\hat{y}, y)$ measures the penalty associated with recognizing class $\hat{y}$ while the true class is $y$.

To estimate the expected error of such a classifier, we collect a representative data set composed of labeled images, run the classifier on each image, and average the resulting losses. In other words, we replay the data set to estimate what (counterfactual) performance would have been observed if we had used a different classifier. We can then select in retrospect the classifier that would have worked the best and hope that it will keep working well. This is the counterfactual viewpoint on empirical risk minimization (Vapnik, 1982).
Replaying the data set works because both the alternate classifier and the loss function are known. More generally, to estimate a counterfactual by replaying a data set, we need to know all the functional dependencies associated with all causal paths connecting the intervention point to the measurement point. This is obviously not always the case.
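A minimal sketch of the replaying computation, assuming the alternate classifier and the loss function are both known callables and the data set was collected beforehand:

```python
# Replaying a labeled data set: the counterfactual error rate of an
# alternate classifier is just its average loss on the logged examples.
def replay(dataset, alt_classifier, loss):
    # dataset: list of (image, true_label) pairs collected in the past.
    return sum(loss(alt_classifier(x), y) for x, y in dataset) / len(dataset)
```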
4.2 Reweighting Randomized Trials
Figure 11 illustrates the randomized experiment suggested in section 2.3. The patients are randomly split into two equally sized groups receiving respectively treatments A and B. The overall success rate for this experiment is therefore $Y = (Y_A + Y_B)/2$, where $Y_A$ and $Y_B$ are the success rates observed for each group. We would like to estimate which (counterfactual) overall success rate $Y^*$ would have been observed if we had selected treatment A with probability $p$ and treatment B with probability $1 - p$.
Since we do not know how the outcome depends on the treatment and the patient condition, we cannot compute which outcome $y_i$ would have been observed if we had treated patient $i$ with a different treatment. Therefore we cannot answer this question by replaying the data as we did in section 4.1.
However, observing different success rates $Y_A$ and $Y_B$ for the treatment groups reveals an empirical correlation between the treatment $t$ and the outcome $y$. Since the only cause of the treatment $t$ is an independent roll of the dice, this correlation cannot result from any known or unknown confounding common cause.^8 Having eliminated this possibility, we can reweight the observed outcomes and compute the estimate $Y^* \approx p\, Y_A + (1 - p)\, Y_B$.

^8 See also the discussion of Reichenbach’s common cause principle and of its limitations in (Spirtes et al., 1993; Spirtes and Scheines, 2004).
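A minimal sketch of this reweighting, using hypothetical patient records; with $p = 1/2$ it recovers the observed overall success rate $(Y_A + Y_B)/2$.

```python
# Hypothetical records: (treatment, outcome) pairs logged during the
# randomized trial, where each treatment was chosen with probability 1/2.
def counterfactual_success_rate(records, p):
    # Importance weight for each group: counterfactual probability divided
    # by the data collection probability (1/2 for both treatments).
    w = {"A": p / 0.5, "B": (1 - p) / 0.5}
    return sum(w[t] * y for t, y in records) / len(records)

records = [("A", 1), ("A", 0), ("B", 1), ("B", 1)]
print(counterfactual_success_rate(records, p=0.5))   # observed rate: 0.75
print(counterfactual_success_rate(records, p=0.9))   # mostly treatment A
```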
4.3 Markov Factor Replacement
The reweighting approach can in fact be applied under much less stringent conditions. Let us return to the ad placement problem to illustrate this point.
The average number of ad clicks per page is often called click yield. Increasing the click yield usually benefits both the advertiser and the publisher, whereas increasing the revenue per page often benefits the publisher at the expense of the advertiser. Click yield is therefore a very useful metric when we reason with an isolation assumption that ignores the advertiser reactions to pricing changes.
Let $\omega = (u, v, x, a, b, q, s, c, y, z)$ be a shorthand for all variables appearing in the Markov factorization of the ad placement structural equation model,

$$P(\omega) \;=\; P(u, v)\; P(x \,|\, u)\; P(a, b \,|\, x, v)\; P(q \,|\, x, a)\; P(s \,|\, a, b, q)\; P(c \,|\, a, b, q, s)\; P(y \,|\, s, u)\; P(z \,|\, y, c)\,. \qquad (4)$$

Variable $y$ was defined in section 3.1 as the set of user clicks. In the rest of the document, we slightly abuse this notation by using the same letter to represent the number of clicks. We also write the expectation of the click yield using the integral notation $Y = \int y \; P(\omega) \, d\omega$.
We would like to estimate what the expected click yield $Y^*$ would have been if we had used a different scoring function (figure 12). This intervention amounts to replacing the actual factor $P(q \,|\, x, a)$ by a counterfactual factor $P^*(q \,|\, x, a)$ in the Markov factorization:

$$P^*(\omega) \;=\; P(u, v)\; P(x \,|\, u)\; P(a, b \,|\, x, v)\; P^*(q \,|\, x, a)\; P(s \,|\, a, b, q)\; P(c \,|\, a, b, q, s)\; P(y \,|\, s, u)\; P(z \,|\, y, c)\,. \qquad (5)$$
Let us assume, for simplicity, that the actual factor $P(q \,|\, x, a)$ is nonzero everywhere. We can then estimate the counterfactual expected click yield $Y^*$ using the transformation

$$Y^* \;=\; \int y\; P^*(\omega)\; d\omega \;=\; \int y\; \frac{P^*(q \,|\, x, a)}{P(q \,|\, x, a)}\; P(\omega)\; d\omega \;\approx\; \frac{1}{n} \sum_{i=1}^{n} y_i\, \frac{P^*(q_i \,|\, x_i, a_i)}{P(q_i \,|\, x_i, a_i)}\,, \qquad (6)$$

where the data set of tuples $(x_i, a_i, q_i, y_i)$ is distributed according to the actual Markov factorization instead of the counterfactual Markov factorization. This data could therefore have been collected during the normal operation of the ad placement system. Each sample is reweighted to reflect its probability of occurrence under the counterfactual conditions.
In general, we can use importance sampling to estimate the counterfactual expectation $Y^*$ of any quantity $\ell(\omega)$:

$$Y^* \;=\; \int \ell(\omega)\; P^*(\omega)\; d\omega \;\approx\; \frac{1}{n} \sum_{i=1}^{n} \ell(\omega_i)\; w(\omega_i)\,, \qquad (7)$$

with weights

$$w(\omega) \;=\; \frac{P^*(\omega)}{P(\omega)} \;=\; \frac{P^*(q \,|\, x, a)}{P(q \,|\, x, a)}\,. \qquad (8)$$
Equation (8) emphasizes the simplifications resulting from the algebraic similarities of the actual and counterfactual Markov factorizations. Because of these simplifications, the evaluation of the weights only requires the knowledge of the few factors that differ between $P(\omega)$ and $P^*(\omega)$. Each data sample needs to provide the value of $\ell(\omega_i)$ and the values of all variables needed to evaluate the factors that do not cancel in the ratio (8).
In contrast, the replaying approach (section 4.1) demands the knowledge of all factors of $P^*(\omega)$ connecting the point of intervention to the point of measurement. On the other hand, it does not require the knowledge of factors appearing only in $P(\omega)$.
Importance sampling relies on the assumption that all the factors appearing in the denominator of the reweighting ratio (8) are nonzero whenever the factors appearing in the numerator are nonzero. Since these factors represent conditional probabilities resulting from the effect of an independent noise variable in the structural equation model, this assumption means that the data must be collected with an experiment involving active randomization. We must therefore design cost-effective randomized experiments that yield enough information to estimate many interesting counterfactual expectations with sufficient accuracy. This problem cannot be solved without answering the confidence interval question: given data collected with a certain level of randomization, with which accuracy can we estimate a given counterfactual expectation?
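Before turning to confidence intervals, here is a minimal sketch of estimator (7) with weights (8), assuming that the only factor differing between $P$ and $P^*$ is the scoring distribution and that both densities can be evaluated on the logged samples; the function names are illustrative.

```python
# Sketch of the importance sampling estimator (7)-(8): each weight is the
# ratio of the counterfactual and actual scoring densities evaluated on
# the logged sample. `p_score` and `p_score_star` stand for P(q|x,a) and
# P*(q|x,a); they are assumed-known callables, not an actual API.
def counterfactual_expectation(samples, ell, p_score, p_score_star):
    total = 0.0
    for s in samples:  # each sample is a dict recording x, a, q, y, ...
        w = p_score_star(s["q"], s["x"], s["a"]) \
            / p_score(s["q"], s["x"], s["a"])
        total += ell(s) * w
    return total / len(samples)
```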
4.4 Confidence Intervals
At first sight, we can invoke the law of large numbers and write

$$Y^* \;=\; \int \ell(\omega)\; w(\omega)\; P(\omega)\; d\omega \;\approx\; \frac{1}{n} \sum_{i=1}^{n} \ell(\omega_i)\; w(\omega_i)\,. \qquad (9)$$

For sufficiently large $n$, the central limit theorem provides confidence intervals whose width grows with the standard deviation of the product $\ell(\omega)\, w(\omega)$.

Unfortunately, when $P(\omega)$ is small, the reweighting ratio $w(\omega)$ takes large values with low probability. This heavy tailed distribution has annoying consequences because the variance of the integrand could be very high or infinite. When the variance is infinite, the central limit theorem does not hold. When the variance is merely very large, the central limit convergence might occur too slowly to justify such confidence intervals. Importance sampling works best when the actual distribution and the counterfactual distribution overlap.
When the counterfactual distribution has significant mass in domains where the actual distribution is small, the few samples available in these domains receive very high weights. Their noisy contribution dominates the reweighted estimate (9). We can obtain better confidence intervals by eliminating these few samples drawn in poorly explored domains. The resulting bias can be bounded using prior knowledge, for instance with an assumption about the range of values taken by $\ell(\omega)$,

$$\forall \omega\,, \quad 0 \;\leq\; \ell(\omega) \;\leq\; M\,. \qquad (10)$$
Let us choose the maximum weight value $R$ deemed acceptable for the weights. We have obtained very consistent results in practice with $R$ equal to the fifth largest reweighting ratio observed on the empirical data.^9 We can then rely on clipped weights to eliminate the contribution of the poorly explored domains,

$$\bar{w}(\omega) \;=\; \begin{cases} w(\omega) & \text{if } P^*(\omega) < R\, P(\omega)\,, \\ 0 & \text{otherwise.} \end{cases}$$

The condition $P^*(\omega) < R\, P(\omega)$ ensures that the ratio has a nonzero denominator and is smaller than $R$. Let $\Omega_R$ be the set of all values of $\omega$ associated with acceptable ratios:

$$\Omega_R \;=\; \{\, \omega \;:\; P^*(\omega) < R\, P(\omega) \,\}\,.$$

^9 This is in fact a slight abuse because the theory calls for choosing $R$ before seeing the data.
We can decompose $Y^*$ in two terms:

$$Y^* \;=\; \int_{\Omega_R} \ell(\omega)\; P^*(\omega)\; d\omega \;+\; \int_{\omega \notin \Omega_R} \ell(\omega)\; P^*(\omega)\; d\omega \;=\; \bar{Y}^* + \big( Y^* - \bar{Y}^* \big)\,. \qquad (11)$$
The first term of this decomposition is the clipped expectation $\bar{Y}^* = \int \ell(\omega)\, \bar{w}(\omega)\, P(\omega)\, d\omega$. Estimating the clipped expectation $\bar{Y}^*$ is much easier than estimating $Y^*$ from (9) because the clipped weights are bounded by $R$:

$$\bar{Y}^* \;\approx\; \hat{Y}^* \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell(\omega_i)\; \bar{w}(\omega_i)\,. \qquad (12)$$
The second term of equation (11) can be bounded by leveraging assumption (10). The resulting bound can then be conveniently estimated using only the clipped weights:

$$0 \;\leq\; Y^* - \bar{Y}^* \;\leq\; M \int_{\omega \notin \Omega_R} P^*(\omega)\; d\omega \;=\; M\, \big( 1 - \bar{W}^* \big) \quad \text{with} \quad \bar{W}^* \;=\; \int \bar{w}(\omega)\; P(\omega)\; d\omega \;\approx\; \hat{W}^* \;=\; \frac{1}{n} \sum_{i=1}^{n} \bar{w}(\omega_i)\,. \qquad (13)$$
Since the clipped weights are bounded, the estimation errors associated with (12) and (13) are well characterized using either the central limit theorem or empirical Bernstein bounds (see appendix B for details). Therefore we can derive an outer confidence interval of the form

$$P\big\{\, \big|\, \hat{Y}^* - \bar{Y}^* \,\big| \;\leq\; \epsilon_R \,\big\} \;\geq\; 1 - \delta \qquad (14)$$

and an inner confidence interval of the form

$$P\big\{\, \bar{W}^* \;\geq\; \hat{W}^* - \xi_R \,\big\} \;\geq\; 1 - \delta\,. \qquad (15)$$

The names inner and outer are in fact related to our preferred way to visualize these intervals (e.g., figure 13). Since the bounds on $Y^* - \bar{Y}^*$ can be written as

$$\bar{Y}^* \;\leq\; Y^* \;\leq\; \bar{Y}^* + M\, \big( 1 - \bar{W}^* \big)\,, \qquad (16)$$

we can derive our final confidence interval,

$$P\big\{\, \hat{Y}^* - \epsilon_R \;\leq\; Y^* \;\leq\; \hat{Y}^* + \epsilon_R + M\, \big( 1 - \hat{W}^* + \xi_R \big) \,\big\} \;\geq\; 1 - 2\delta\,. \qquad (17)$$
In conclusion, replacing the unbiased importance sampling estimator (9) by the clipped importance sampling estimator (12) with a suitable choice of $R$ leads to improved confidence intervals. Furthermore, since the derivation of these confidence intervals does not rely on the assumption that $P(\omega)$ is nonzero everywhere, the clipped importance sampling estimator remains valid when the distribution $P(\omega)$ has a limited support. This relaxes the main restriction associated with importance sampling.
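The following sketch assembles the clipped estimator (12) and the final interval (17), using plain central limit theorem approximations for $\epsilon_R$ and $\xi_R$; appendix B derives the empirical Bernstein variants, which the sketch does not reproduce.

```python
# Sketch of the clipped importance sampling estimator with the confidence
# interval (17). Normal approximations stand in for the empirical
# Bernstein bounds of appendix B; M is the assumed bound on ell.
import math

def clipped_interval(ell_w, R, M, z=1.96):
    """ell_w: list of (ell(omega_i), w(omega_i)) pairs from logged data.
    Returns a confidence interval for the counterfactual expectation."""
    n = len(ell_w)
    wbar = [w if w < R else 0.0 for _, w in ell_w]     # clipped weights
    lw = [l * w for (l, _), w in zip(ell_w, wbar)]
    Y_hat = sum(lw) / n                                # estimator (12)
    W_hat = sum(wbar) / n                              # estimator in (13)
    eps = z * math.sqrt(sum((v - Y_hat) ** 2 for v in lw) / n / n)
    xi = z * math.sqrt(sum((w - W_hat) ** 2 for w in wbar) / n / n)
    # Outer half-width eps around the clipped expectation, plus the
    # exploration penalty M * (1 - W_hat + xi) from (15)-(16).
    return (Y_hat - eps, Y_hat + eps + M * max(0.0, 1 - W_hat + xi))
```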
4.5 Interpreting the Confidence Intervals
The estimation of the counterfactual expectation $Y^*$ can be inaccurate because the sample size is insufficient or because the sampling distribution $P(\omega)$ does not sufficiently explore the counterfactual conditions of interest.
By construction, the clipped expectation $\bar{Y}^*$ ignores the domains poorly explored by the sampling distribution $P(\omega)$. The difference $Y^* - \bar{Y}^*$ then reflects the inaccuracy resulting from a lack of exploration. Therefore, assuming that the bound $M$ has been chosen competently, the relative sizes of the outer and inner confidence intervals provide precious cues to determine whether we can continue collecting data using the same experimental setup or should adjust the data collection experiment in order to obtain a better coverage.

The inner confidence interval (15) witnesses the uncertainty associated with the domains insufficiently explored by the actual distribution. A large inner confidence interval suggests that the most practical way to improve the estimate is to adjust the data collection experiment in order to obtain a better coverage of the counterfactual conditions of interest.

The outer confidence interval (14) represents the uncertainty that results from the limited sample size. A large outer confidence interval indicates that the sample is too small. To improve the result, we simply need to continue collecting data using the same experimental setup.
4.6 Experimenting with Mainline Reserves
We return to the ad placement problem to illustrate the reweighting approach and the interpretation of the confidence intervals. Manipulating the reserves associated with the mainline positions (figure 1) controls which ads are prominently displayed in the mainline or displaced into the sidebar.
We seek in this section to answer counterfactual questions of the form:
“How would the ad placement system have performed if we had scaled the mainline reserves by a constant factor, without incurring user or advertiser reactions?”
Randomization was introduced using a modified version of the ad placement engine. Before determining the ad layout (see section 2.1), a random number $\epsilon$ is drawn according to the standard normal distribution $\mathcal{N}(0,1)$, and all the mainline reserves are multiplied by $m = e^{\mu + \sigma \epsilon}$. Such multipliers follow a log-normal distribution^10 whose mean is $e^{\mu + \sigma^2/2}$ and whose width is controlled by $\sigma$. This effectively provides a parametrization of the conditional score distribution $P(q \,|\, x, a)$ (see figure 6).

The Bing search platform offers many ways to select traffic for controlled experiments (section 2.2). In order to match our isolation assumption, individual page views were randomly assigned to traffic buckets without regard to the user identity. The main treatment bucket was processed with mainline reserves randomized by a multiplier drawn as explained above, with $\mu$ and $\sigma$ chosen such that the mean multiplier is exactly 1 (that is, $\mu = -\sigma^2/2$) and the bulk of the multipliers lie in a moderate range around 1. Samples describing 22 million search result pages were collected during five consecutive weeks.

We then use this data to estimate what would have been measured if the mainline reserve multipliers had been drawn according to a distribution determined by parameters $\mu^*$ and $\sigma^*$. This is achieved by reweighting each sample $\omega_i$ with

$$w(\omega_i) \;=\; \frac{p_{\mu^*\!,\,\sigma^*}(m_i)}{p_{\mu,\,\sigma}(m_i)}\,,$$

where $m_i$ is the multiplier drawn for this sample during the data collection experiment, and $p_{\mu,\sigma}(m)$ is the density of the log-normal multiplier distribution.

^10 More precisely, $\log m \sim \mathcal{N}(\mu, \sigma^2)$.
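A minimal sketch of this weight computation, writing out the log-normal density explicitly; the parameter names follow the text.

```python
# Reweighting a logged sample for new log-normal multiplier parameters:
# m_i is the multiplier actually drawn during data collection, where
# log(m) ~ N(mu, sigma^2); the weight is the ratio of densities at m_i.
import math

def lognormal_pdf(m, mu, sigma):
    return math.exp(-(math.log(m) - mu) ** 2 / (2 * sigma ** 2)) \
           / (m * sigma * math.sqrt(2 * math.pi))

def weight(m_i, mu, sigma, mu_star, sigma_star):
    return lognormal_pdf(m_i, mu_star, sigma_star) \
           / lognormal_pdf(m_i, mu, sigma)
```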
Figure 13 reports results obtained by varying the mean $\mu^*$ of the counterfactual multiplier distribution while keeping $\sigma^* = \sigma$. This amounts to estimating what would have been measured if all mainline reserves had been multiplied by a constant factor while keeping the same randomization. The curves bound 95% confidence intervals on the variations of the average number of mainline ads displayed per page, the average number of ad clicks per page, and the average revenue per page, as functions of the average reserve multiplier. The inner confidence intervals, represented by the filled areas, grow sharply when the multiplier leaves the range explored during the data collection experiment. The average revenue per page has more variance because a few very competitive queries command high prices.
In order to validate the accuracy of these counterfactual estimates, a second traffic bucket of equal size was configured with mainline reserves uniformly reduced by a fixed factor. The hollow circles in figure 13 represent the metrics effectively measured on this bucket during the same time period. The effective measurements and the counterfactual estimates match with high accuracy.
Finally, in order to measure the cost of the randomization, we also ran the unmodified ad placement system on a control bucket. The brown filled circles in figure 13 represent the metrics effectively measured on the control bucket during the same time period. The randomization caused a small but statistically significant increase of the number of mainline ads per page. The click yield and average revenue differences are not significant.
This experiment shows that we can obtain accurate counterfactual estimates with affordable randomization strategies. However, this nice conclusion does not capture the true practical value of the counterfactual estimation approach.
4.7 More on Mainline Reserves
The main benefit of the counterfactual estimation approach is the ability to use the same data to answer a broad range of counterfactual questions. Here are a few examples of counterfactual questions that can be answered using data collected using the simple mainline reserve randomization scheme described in the previous section:

Different variances – Instead of estimating what would have been measured if we had increased the mainline reserves without changing the randomization variance, that is, letting $\sigma^* = \sigma$, we can use the same data to estimate what would have been measured if we had also changed $\sigma^*$. This provides the means to determine which level of randomization we can afford in future experiments.

Pointwise estimates – We often want to estimate what would have been measured if we had set the mainline reserves to a specific value without randomization. Although computing estimates for small values of $\sigma^*$ often works well enough, very small values lead to large confidence intervals.

Let $Y(\mu^*\!, \sigma^{*2})$ represent the expectation we would have observed if the multipliers had mean $\mu^*$ and variance $\sigma^{*2}$. We then seek the pointwise value $Y(\mu^*\!, 0)$. Assuming that the pointwise value is smooth enough for a second order development,

$$Y(\mu^*\!,\, \sigma^{*2}) \;\approx\; Y(\mu^*\!,\, 0) \;+\; \sigma^{*2}\, k(\mu^*) \;+\; o(\sigma^{*2})\,.$$

Although the reweighting method cannot estimate the pointwise value $Y(\mu^*\!, 0)$ directly, we can use the reweighting method to estimate both $Y(\mu^*\!, \sigma^{*2})$ and $Y(\mu^*\!, 2\sigma^{*2})$ with acceptable confidence intervals and write $Y(\mu^*\!, 0) \approx 2\, Y(\mu^*\!, \sigma^{*2}) - Y(\mu^*\!, 2\sigma^{*2})$ (Goodwin, 2011); see the sketch after this list.

Query-dependent reserves – Compare for instance the queries “car insurance” and “common cause principle” in a web search engine. Since the advertising potential of a search varies considerably with the query, it makes sense to investigate various ways to define query-dependent reserves (Charles and Chickering, 2012).
The data collected using the simple mainline reserve randomization can also be used to estimate what would have been measured if we had increased all the mainline reserves by a query-dependent multiplier $\lambda(x)$. This is simply achieved by reweighting each sample with the ratio of the densities of the observed multiplier under the query-dependent counterfactual multiplier distribution and under the actual distribution.
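To check the second order argument behind the pointwise estimates above, write the development as $Y^*(\mu^*, \sigma) \approx Y^*(\mu^*, 0) + c\,\sigma^2$. The extrapolation then cancels the variance term:

    $2\,Y^*(\mu^*, \sigma) - Y^*(\mu^*, \sqrt{2}\,\sigma) \;\approx\; 2\left(Y^*(\mu^*, 0) + c\,\sigma^2\right) - \left(Y^*(\mu^*, 0) + 2\,c\,\sigma^2\right) \;=\; Y^*(\mu^*, 0)\,.$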
Considerably broader ranges of counterfactual questions can be answered when data is collected using randomization schemes that explore more dimensions. For instance, in the case of the ad placement problem, we could apply an independent random multiplier for each score instead of applying a single random multiplier to the mainline reserves only. However, the more dimensions we randomize, the more data needs to be collected to effectively explore all these dimensions. Fortunately, as discussed in section 5, the structure of the causal graph reveals many ways to leverage a priori information and improve the confidence intervals.
4.8 Related Work
Importance sampling is widely used to deal with covariate shifts (Shimodaira, 2000; Sugiyama et al., 2007). Since manipulating the causal graph changes the data distribution, such an intervention can be viewed as a covariate shift amenable to importance sampling. Importance sampling techniques have also been proposed, without causal interpretation, for many of the problems that we view as causal inference problems. In particular, the work presented in this section is closely related to the Monte Carlo approach of reinforcement learning (Sutton and Barto, 1998, chapter 5) and to the offline evaluation of contextual bandit policies (Li et al., 2010, 2011).
Reinforcement learning research traditionally focuses on control problems with relatively small discrete state spaces and long sequences of observations. This focus reduces the need for characterizing exploration with tight confidence intervals. For instance, Sutton and Barto suggest normalizing the importance sampling estimator by $\sum_i w(\omega_i)$ instead of $n$. This would give erroneous results when the data collection distribution leaves parts of the state space poorly explored. Contextual bandits are traditionally formulated with a finite set of discrete actions. For instance, Li’s (2011) unbiased policy evaluation assumes that the data collection policy always selects an arbitrary action with probability greater than some small constant. This is not possible when the action space is infinite.
Such assumptions on the data collection distribution are often impractical. For instance, certain ad placement policies are not worth exploring because they cannot be implemented efficiently or are known to elicit fraudulent behaviors. There are many practical situations in which one is only interested in limited aspects of the ad placement policy involving continuous parameters such as click prices or reserves. Discretizing such parameters eliminates useful a priori knowledge: for instance, if we slightly increase a reserve, we can reasonably believe that we are going to show slightly fewer ads.
Instead of making assumptions on the data collection distribution, we construct a biased estimator (12) and bound its bias. We then interpret the inner and outer confidence intervals as resulting from a lack of exploration or an insufficient sample size.
Finally, the causal framework allows us to easily formulate counterfactual questions that pertain to the practical ad placement problem and yet differ considerably in complexity and exploration requirements. We can address specific problems identified by the engineers without incurring the risks associated with a complete redesign of the system. Each of these incremental steps helps demonstrate the soundness of the approach.
5 Structure
This section shows how the structure of the causal graph reveals many ways to leverage a priori knowledge and improve the accuracy of our counterfactual estimates. Displacing the reweighting point (section 5.1) improves the inner confidence interval and therefore reduces the need for exploration. Using a prediction function (section 5.2) essentially improves the outer confidence interval and therefore reduces the sample size requirements.
5.1 Better Reweighting Variables
Many search result pages come without eligible ads. We then know with certainty that such pages will have zero mainline ads, receive zero clicks, and generate zero revenue. This is true for the randomly selected value of the reserve, and this would have been true for any other value of the reserve. We can exploit this knowledge by pretending that the reserve multiplier was drawn from the counterfactual distribution $p^*(m)$ instead of the actual distribution $p(m)$. The reweighting ratio for such pages is therefore forced to unity. This does not change the estimate but reduces the size of the inner confidence interval. The results of figure 13 were in fact helped by this little optimization.
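In code, this optimization reduces to a one-line guard on the importance weights; a hypothetical sketch with our own names:

    import numpy as np

    def adjusted_weights(w, has_eligible_ads):
        # Pages without eligible ads show zero mainline ads, zero clicks,
        # and zero revenue for every reserve value, so their reweighting
        # ratio can be forced to unity without changing the estimate.
        return np.where(has_eligible_ads, w, 1.0)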
There are in fact many circumstances in which the observed outcome would have been the same for other values of the randomized variables. This prior knowledge is encoded in the structure of the causal graph and can be exploited in a more systematic manner. For instance, we know that users make click decisions without knowing which scores were computed by the ad placement engine, and without knowing the prices charged to advertisers. The ad placement causal graph encodes this knowledge by showing the clicks $y$ as direct effects of the user intent $u$ and the ad slate $s$. This implies that the exact value of the scores $q$ does not matter to the clicks as long as the ad slate $s$ remains the same.

Because the causal graph has this special structure, we can simplify both the actual and counterfactual Markov factorizations (4) (5) without eliminating the variable $y$ whose expectation is sought. Successively eliminating the variables that are downstream of the scores, and then the scores $q$ themselves, yields simplified factorizations in which the ad slate $s$ is conditioned directly on the remaining variables.
The conditional distributions $P(s \mid x, a, b)$ and $P^*(s \mid x, a, b)$ did not originally appear in the Markov factorization. They are defined by marginalization, as a consequence of the elimination of the variable $q$ representing the scores.
We can estimate the counterfactual click yield $Y^*$ using these simplified factorizations:
(18)   $Y^* \;\approx\; \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} y_i \, \dfrac{P^*(s_i \mid x_i, a_i, b_i)}{P(s_i \mid x_i, a_i, b_i)}$
We have reproduced the experiments described in section 4.6 with the counterfactual estimate (18) instead of (6). For each example $\omega_i$, we determine the range $[m_1, m_2]$ of mainline reserve multipliers that could have produced the observed ad slate $s_i$, and then compute the reweighting ratio using the formula:

    $w_i \;=\; \dfrac{F^*(m_2) - F^*(m_1)}{F(m_2) - F(m_1)}$
where $F$ and $F^*$ denote the cumulative distribution functions of the actual and counterfactual lognormal multiplier distributions. Figure 14 shows counterfactual estimates obtained using the same data as figure 13. The obvious improvement of the inner confidence intervals significantly extends the range of mainline reserve multipliers for which we can compute accurate counterfactual expectations using this same data.
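A minimal sketch of this computation, assuming hypothetical arrays m_lo and m_hi holding, for each example, the endpoints of the multiplier range that reproduces the observed ad slate:

    import numpy as np
    from scipy.stats import lognorm

    def lognorm_params(mean, sd):
        # Convert the (mean, sd) of the multiplier into scipy's (s, scale).
        s2 = np.log(1.0 + (sd / mean) ** 2)
        return np.sqrt(s2), np.exp(np.log(mean) - 0.5 * s2)

    def slate_weights(m_lo, m_hi, mean, sd, mean_star, sd_star):
        # Ratio of the probabilities that the multiplier falls inside
        # [m_lo, m_hi] under the counterfactual and actual distributions.
        # The denominator is positive because the range contains the
        # multiplier that was actually logged.
        s, sc = lognorm_params(mean, sd)
        s_star, sc_star = lognorm_params(mean_star, sd_star)
        num = (lognorm.cdf(m_hi, s=s_star, scale=sc_star)
               - lognorm.cdf(m_lo, s=s_star, scale=sc_star))
        den = (lognorm.cdf(m_hi, s=s, scale=sc)
               - lognorm.cdf(m_lo, s=s, scale=sc))
        return num / den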
Comparing (6) and (18) makes the difference very clear: instead of computing the ratio of the probabilities of the observed scores under the counterfactual and actual distributions, we compute the ratio of the probabilities of the observed ad slates under the counterfactual and actual distributions. As illustrated by figure 15, we now distinguish the reweighting variable (or variables) from the intervention. In general, the corresponding manipulation of the Markov factorization consists of marginalizing out all the variables that appear on the causal paths connecting the point of intervention to the reweighting variables and factoring all the independent terms out of the integral. This simplification works whenever the reweighting variables intercept all the causal paths connecting the point of intervention to the measurement variable. In order to compute the new reweighting ratios, all the factors remaining inside the integral, that is, all the factors appearing on the causal paths connecting the point of intervention to the reweighting variables, have to be known.
Figure 14 does not report the average revenue per page because the revenue also depends on the scores through the click prices $c$. This causal path is not intercepted by the ad slate variable alone. However, we can introduce a new variable $\tilde{c}$ that filters out the click prices computed for ads that did not receive a click. Markedly improved revenue estimates are then obtained by reweighting according to the joint variable $(s, \tilde{c})$.
Figure 16 illustrates the same approach applied to the simultaneous randomization of all the scores using independent lognormal multipliers. The weight is the ratio of the probabilities of the observed ad slate under the counterfactual and actual multiplier distributions. Computing these probabilities amounts to integrating a multivariate Gaussian distribution (Genz, 1992). Details will be provided in a forthcoming publication.
5.2 Variance Reduction with Predictors
Although we do not know exactly how the variable of interest $\ell(\omega)$ depends on the measurable variables, or how it is affected by interventions on the causal graph, we may have strong a priori knowledge about this dependency. For instance, if we augment the slate with an ad that usually receives a lot of clicks, we can expect an increase of the number of clicks.
Let the invariant variables be all observed variables that are not direct or indirect effects of variables affected by the intervention under consideration. This definition implies that the distribution of the invariant variables is not affected by the intervention. Therefore the values of the invariant variables sampled during the actual experiment are also representative of the distribution of the invariant variables under the counterfactual conditions.
We can leverage a priori knowledge to construct a predictor $\zeta(\omega)$ of the quantity $\ell(\omega)$ whose counterfactual expectation $Y^*$ is sought. We assume that the predictor depends only on the invariant variables, or on variables that depend on the invariant variables through known functional dependencies. Given sampled values of the invariant variables, we can replay both the original and manipulated structural equation model as explained in section 4.1 and obtain samples $\zeta_i$ and $\zeta_i^*$ that respectively follow the actual and counterfactual distributions.
Then, regardless of the quality of the predictor,

(19)   $Y^* \;=\; \mathbb{E}_{\omega \sim P^*}\!\left[\zeta(\omega)\right] \;+\; \mathbb{E}_{\omega \sim P^*}\!\left[\ell(\omega) - \zeta(\omega)\right]$
The first term in this sum represents the counterfactual expectation of the predictor and can be accurately estimated by averaging the simulated counterfactual samples $\zeta_i^*$ without resorting to potentially large importance weights. The second term represents the counterfactual expectation of the residuals $\ell(\omega) - \zeta(\omega)$ and must be estimated using importance sampling. Since the magnitude of the residuals is hopefully smaller than that of $\ell(\omega)$, the variance of $\ell(\omega) - \zeta(\omega)$ is reduced and the importance sampling estimator of the second term has improved confidence intervals. The more accurate the predictor $\zeta(\omega)$, the more effective this variance reduction strategy.
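In estimator form, (19) could be implemented as below, where (our names) zeta_star holds the replayed counterfactual predictions, zeta and ell the logged predictions and outcomes, and w the importance weights of section 4:

    import numpy as np

    def predictor_based_estimate(zeta_star, zeta, ell, w):
        # First term of (19): plain average of the simulated counterfactual
        # predictions, no importance weights needed.
        # Second term of (19): reweighted average of the residuals.
        return np.mean(zeta_star) + np.mean(w * (ell - zeta))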
This variance reduction technique is in fact identical to the doubly robust contextual bandit evaluation technique of Dudík et al. (2012). Doubly robust variance reduction has also been extensively used for causal inference applied to biostatistics (see Robins et al., 2000; Bang and Robins, 2005). We subjectively find that viewing the predictor as a component of the causal graph (figure 17) clarifies how a well designed predictor can leverage prior knowledge. For instance, in order to estimate the counterfactual performance of the ad placement system, we can easily use a predictor that runs the ad auction and simulates the user clicks using a click probability model trained offline.
5.3 Invariant Predictors
In order to evaluate which of two interventions is most likely to improve the system, the designer of a learning system often seeks to estimate a counterfactual difference, that is, the difference $Y_1^* - Y_2^*$ of the expectations of a same quantity $\ell(\omega)$ under two different counterfactual distributions $P_1^*$ and $P_2^*$. These expectations are often affected by variables whose value is left unchanged by the interventions under consideration. For instance, seasonal effects can have very large effects on the number of ad clicks (figure 18) but affect $Y_1^*$ and $Y_2^*$ in similar ways.
Substantially better confidence intervals on the difference $Y_1^* - Y_2^*$ can be obtained using an invariant predictor, that is, a predictor function $\zeta(\omega)$ that depends only on invariant variables such as the time of the day. Since the invariant predictor is not affected by the interventions under consideration,

(20)   $\mathbb{E}_{\omega \sim P_1^*}\!\left[\zeta(\omega)\right] \;=\; \mathbb{E}_{\omega \sim P_2^*}\!\left[\zeta(\omega)\right]$

Therefore

    $Y_1^* - Y_2^* \;=\; \mathbb{E}_{\omega \sim P_1^*}\!\left[\ell(\omega) - \zeta(\omega)\right] \;-\; \mathbb{E}_{\omega \sim P_2^*}\!\left[\ell(\omega) - \zeta(\omega)\right]$

This direct estimate of the counterfactual difference benefits from the same variance reduction effect as (19) without the need to estimate the expectations (20). Appendix C provides details on the computation of confidence intervals for estimators of counterfactual differences. Appendix D shows how the same approach can be used to compute counterfactual derivatives that describe the response of the system to very small interventions.
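A sketch of the resulting difference estimator, assuming (our names) importance weights w1 and w2 computed for the two counterfactual distributions and an invariant predictor zeta:

    import numpy as np

    def counterfactual_difference(ell, zeta, w1, w2):
        # By (20) the predictor terms cancel, so only the residuals need
        # importance sampling; variation captured by zeta, such as
        # seasonal effects, no longer inflates the variance.
        resid = ell - zeta
        return np.mean(w1 * resid) - np.mean(w2 * resid)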
6 Learning
The previous sections deal with the identification and the measurement of interpretable signals that can justify the actions of human decision makers. These same signals can also justify the actions of machine learning algorithms. This section explains why optimizing a counterfactual estimate is a sound learning procedure.
6.1 A Learning Principle
We consider in this section interventions that depend on a parameter $\theta$. For instance, we might want to know what the performance of the ad placement engine would have been if we had used different values for the parameter $\theta$ of the click scoring model. Let $P^*_\theta$ denote the counterfactual Markov factorization associated with this intervention, and let $Y^*_\theta$ be the counterfactual expectation of $\ell(\omega)$ under this distribution. Figure 19 illustrates our simple learning setup. Training data is collected from a single experiment associated with an initial parameter value $\theta_0$ chosen using prior knowledge acquired in an unspecified manner. A preferred parameter value $\theta^*$ is then determined using the training data and loaded into the system. The goal is of course to observe a good performance on data collected during a test period that takes place after the switching point.
The isolation assumption introduced in section 3.2 states that the exogenous variables are drawn from an unknown but fixed joint probability distribution. This distribution induces a joint distribution on all the variables appearing in the structural equation model associated with the parameter $\theta^*$. Therefore, if the isolation assumption remains valid during the test period, the test data follows the same distribution that would have been observed during the training data collection period if the system had been using the parameter $\theta^*$ all along.
We can therefore formulate this problem as the optimization of the expectation of the reward $\ell(\omega)$ with respect to the distribution $P^*_\theta(\omega)$,

(21)   $\max_{\theta} \; Y^*_\theta \;=\; \mathbb{E}_{\omega \sim P^*_\theta}\!\left[\ell(\omega)\right]$

on the basis of a finite set of training examples $\omega_1, \dots, \omega_n$ sampled from the actual distribution $P(\omega)$ associated with $\theta_0$.
However, it would be unwise to maximize the estimates obtained using approximation (7) because they could reach their maximum for a value of $\theta$ that is poorly explored by the actual distribution. As explained in section 4.5, the gap between the upper and lower bounds of inequality (16) reveals the uncertainty associated with insufficient exploration. Maximizing an empirical estimate of the lower bound ensures that the optimization algorithm finds a trustworthy answer:

(22)   $\theta^* \;=\; \arg\max_{\theta} \; Y^{-}_\theta$
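For intuition, here is a grid-search sketch of (22). It assumes nonnegative rewards, so that capping the weights can only lower the estimate, and it omits the outer confidence interval term that a full implementation would subtract; density and theta_grid are hypothetical stand-ins:

    import numpy as np

    def learn_theta(theta_grid, density, m, ell, theta_0, w_max=50.0):
        # density(m, theta) returns the randomization density of the
        # logged variables m under parameter theta.
        p0 = density(m, theta_0)
        best_theta, best_lb = None, -np.inf
        for theta in theta_grid:
            w = np.minimum(density(m, theta) / p0, w_max)  # capped weights
            lb = np.mean(w * ell)   # empirical lower bound when ell >= 0
            if lb > best_lb:
                best_theta, best_lb = theta, lb
        return best_theta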
We shall now discuss the statistical basis of this learning principle.¹¹

11. The idea of maximizing the lower bound may surprise readers familiar with the UCB algorithm for multi-armed bandits (Auer et al., 2002). UCB performs exploration by maximizing the upper confidence interval bound and updating the confidence intervals online. In contrast, exploration in our setup results from the active system randomization during the offline data collection. See also section 6.4.