Uplift modeling aims to directly model the incremental impact of a treatment or action on an individual's response. In contrast to traditional classification techniques, which focus on directly predicting the response, uplift modeling estimates the net effect of a specific treatment by modeling the difference between one's responses with and without that treatment. A typical example appears in modern marketing. After a restaurant sends out coupons to passersby, some of them come to eat. Part of these customers are attracted by the coupons, but others may have already planned to eat there before receiving one. Meanwhile, among those who do not come, some are simply not interested in coupons, but others may be annoyed into changing their minds. What really matters to the restaurant is the difference between the behaviors of the same person with and without the coupon. Uplift modeling is also important in many other settings, such as personalized recommendation, medical treatment , causal inference  and so on.
In this paper, we refer to a customer's observable response after we take a specific action on him as the action response, and the corresponding behavior when we take no action as the nature response. Thus the main concern of an uplift modeling problem is the difference between the action response and the nature response of a customer, which is called the uplift response with respect to a specific action.
Besides its wide applications, uplift modeling also receives much attention from the machine learning community. Traditional machine learning methods, such as K-Nearest Neighbors [32, 20] and Random Forests , have been attempted on the problem. However, two difficulties remain unsolved in previous works, which restrict these methods' performance in applications. On the one hand, an unbiased evaluation metric for uplift modeling remains missing: some existing metrics, such as the Qini coefficient and the uplift curve, are only suitable for the case of a single action and binary response. This lack of evaluation metrics makes it difficult to carry out analysis on offline datasets. On the other hand, in uplift modeling one typically cannot observe an individual's response to an action and the corresponding nature response at the same time, which means explicit labels of the uplift response for specific features and actions are unavailable. Currently, tree-based methods are widely used to handle the lack of explicit labels [2, 9, 20, 17, 6, 34], but they rely on manually engineered features and are less automated than deep learning methods.
To handle the above two limitations, we propose a new evaluation metric for uplift modeling with any number of actions and general response types (binary, discrete, continuous), which is a variant of the inverse propensity score (IPS) adapted to uplift modeling. We prove that it is an unbiased estimate of the uplift response. Then we reformulate the uplift modeling problem as a Markov Decision Process (MDP) and adopt the neuralized policy gradient method
to solve the problem. Such a deep reinforcement learning approach can automatically learn representations from the data and, unlike supervised learning, requires no explicit label for each sample: it uses only positive or negative rewards to indicate which action is good in a specific environment. We further adopt an action-dependent baseline to reduce the variance of the gradients, which has been shown to be effective in recent works.
In experiments, we first verify the efficiency of our proposed metrics by examining their average convergence rates and variances on multi-fold synthetic datasets. Then our method, RLift, is tested in extensive experiments on an open dataset, a synthetic dataset, and real-world scenarios. All results show that our method achieves significant improvements over state-of-the-art methods according to our proposed metric as well as traditional metrics.
It is worth noticing that the offline contextual bandit problem  has quite a similar setting but a different objective compared with uplift modeling. Both problems require a policy for deciding actions by taking advantage of offline data about individuals' observable action responses. But an offline contextual bandit problem seeks a policy maximizing the expected action response, while uplift modeling seeks one maximizing the expected uplift response. With proper formalization, the optimal solution to an offline contextual bandit problem can be transformed into the optimal solution to a specific uplift modeling problem in closed form. However, current methods for both problems seek approximations of the optimal solution, resulting in an inherent difference between them. We review methods for both problems in Section 1.1 and analyze the difference between them carefully in Section 2.4. In experiments, we also compare our method with a well-known method for the offline contextual bandit problem, Offset Tree. More precisely, the policy output by Offset Tree is interpreted as one for uplift modeling; its performance is not as good as that of our method, which coincides with our analysis.
1.1 Related Works
The most direct approach to uplift modeling is the Separate Model Approach (SMA), which uses a separate model for each group of people receiving the same action, predicts the corresponding responses, and then chooses the action with the maximum predicted response . It can make use of any supervised learning method and performs well when the uplift response is strongly correlated with the action response. However, it performs badly when the uplift response follows a different distribution from the action response, as illustrated in .
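As a concrete illustration, SMA can be sketched in a few lines (a toy construction of ours, using one least-squares linear model per action group; any supervised learner could stand in):

```python
import numpy as np

def fit_sma(X, a, r, n_actions):
    """Fit one linear model per action group via least squares."""
    models = []
    for k in range(n_actions):
        mask = (a == k)
        Xk = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # append bias column
        w, *_ = np.linalg.lstsq(Xk, r[mask], rcond=None)
        models.append(w)
    return models

def sma_policy(models, X):
    """Predict each action's response and pick the action with the maximum."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    preds = np.stack([Xb @ w for w in models], axis=1)  # shape (n, n_actions)
    return preds.argmax(axis=1)
```

As the surrounding text notes, such separate models can only be as good as their response predictions: when the uplift signal is much weaker than the response signal, the argmax above is dominated by the nature response.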
On the other hand, variants of decision-tree-based methods model the uplift response directly in order to avoid this weakness. In a traditional decision tree, the algorithm chooses the attribute maximizing a splitting criterion at each split [2, 9, 20, 7]. For example, the criterion in  maximizes the difference of response signals between the child nodes, while the one in  maximizes the distributional difference of response signals between child nodes via weighted Kullback-Leibler divergence and weighted squared Euclidean distance. The previous state-of-the-art method also uses random forests, but it predicts the exact value of the response instead of the value of the uplift. They also proposed an unbiased evaluation metric for multiple treatments and general response types, but the metric measures the response of people after receiving treatments rather than the difference between the responses of those receiving treatments and those receiving nothing.
Besides tree-based methods, an adaptation of K-Nearest Neighbors (KNN) was considered for uplift modeling [alemi2009improved, su2012facilitating]: KNN is used to find objects with similar features, and the action responses of these similar objects are then used for estimation. A logistic regression formulation has also been proposed to explicitly include interaction terms between features and the action . Support Vector Machines (SVM) have been considered to find hyperplanes dividing the feature space into parts positively, neutrally, and negatively affected by the action. Due to the lack of performance metrics, these methods mainly work in the single-action, binary-response case and have not achieved stable performance in practice.
In contrast, reinforcement learning is famous for successful applications in many fields, such as the game of Go , video games , and so on. One of its advantages is that it needs no explicit labels, only reward signals to guide the training. One reinforcement learning method, policy gradient , is suitable for episodic tasks with delayed rewards because it can calculate the gradient after the entire episode, as Feng and Zhang did on the tasks of relation classification and learning structured representations for text classification [4, 33].
Furthermore, variance reduction methods are known to help the training process of reinforcement learning . For example, advantage functions  are commonly used to reduce variance when estimating value functions. Recent works [29, 31, 14] show that action-dependent advantage functions reduce the variance significantly, especially for discrete action spaces , and we adopt them in our policy gradient approach.
As for the evaluation of uplift modeling on offline datasets, Qini coefficients and uplift curves are widely used [18, 20] for single-action cases and perform well in practice, despite their lack of theoretical justification. For multiple-action cases,  proposed a performance metric for the expected action response, but not for the expected uplift response. In the field of reinforcement learning, IPS has been studied widely for offline policy evaluation [28, 27, 3]. Unlike previous uplift metrics, IPS does not require the training samples to be collected at random. However, there has been no version of IPS for uplift modeling.
In addition, some works on other topics are similar to uplift modeling, such as the offline contextual bandit [28, 27, 1], which aims to find a policy maximizing the expected response from an offline logged dataset. Based on the partially observable responses,  transforms the bandit problem into a cost-sensitive supervised learning problem. With the help of IPS, the policy can be optimized directly, and a further self-normalized estimator helps reduce variance and avoid overfitting. These problems are very similar to ours: the optimal solutions are the same, but the approximate solutions are not, as analyzed in Section 2.4.
On the other hand, in the field of causal inference, the uplift response of an action with respect to a single feature vector is defined as the individual treatment effect (ITE) [30, 22]. The causal inference community focuses on estimating this effect of a single action accurately for each specific feature vector, while the metric we propose in this paper evaluates a policy choosing among multiple actions by estimating the expected uplift response over the whole feature space.
1.2 Organization of the Article
In Section 2, we provide a formal definition and an unbiased metric for uplift modeling. In Section 3, we present our deep reinforcement learning design for uplift modeling. In Section 4, we compare our methods with several baselines on an open dataset, a group of simulation datasets and a real business dataset.
2 Uplift Modeling and an Unbiased Evaluation Metric
In this section, we first provide a formal definition of the uplift modeling problem, then propose an evaluation metric for uplift with multiple actions and general response types and prove its unbiasedness. Finally, we analyze what distinguishes uplift modeling from other similar problems.
2.1 Definition of Uplift Modeling for Multiple Actions and General Response Types
Firstly, we introduce the basic elements of uplift modeling and their notations. When multiple actions can be taken for individuals, we have:
$X$: the random variable of an individual's feature vector. We use $x$ to denote one of its realizations. It usually represents the features of a customer or a patient.
$a \in \{0, 1, \dots, K\}$: the encoded action. Specifically, $a = 0$ means no action.
$\pi$: a policy of choosing actions for each feature vector. We use $\pi(x) = a$ to denote a realization in which the policy selects action $a$ for $x$, and $\pi(a \mid x)$
the corresponding probability of such a realization.
$R(x, a)$: the observed action response when $x$ receives action $a$. Generally speaking, the response is a real number.
$N(x)$: the nature response of $x$ when receiving no action.
$U(x, a)$: the uplift response when $x$ receives action $a$.
Now we can formally define uplift modeling by the commonly used additive model
$$R(x, a) = N(x) + U(x, a), \qquad (1)$$
which means the observed action response contains two parts: the nature response, which is independent of the action, and the uplift response, which depends on it. Naturally, $U(x, 0) = 0$ and $R(x, 0) = N(x)$.
The goal of uplift modeling is to find a policy maximizing the expected uplift response. Formally,
$$\pi^{*} = \arg\max_{\pi} \ \mathbb{E}_{x}\Big[\sum_{a} \pi(a \mid x)\, U(x, a)\Big]. \qquad (2)$$
2.2 Uplift Modeling General Metric
Before we seek a policy for the uplift modeling problem, we need an unbiased metric of the uplift response for any specific policy, because the uplift response can never be observed directly. Online experiments usually use A/B testing to estimate the uplift response, but such experiments can be costly. So we consider an offline evaluation of policies, which takes advantage of a dataset from a previous experiment. More precisely, the dataset has $n$ samples in total, containing groups with different actions and a control group without any action. Each individual with feature $x_i$ was once assigned to one of the groups by a policy $\pi_0$; that is, in the previous experiment action $a_i$ was taken on $x_i$, and the corresponding response $r_i$ was recorded. For notational convenience, the probability that policy $\pi_0$ chooses action $a$ for feature $x$ is denoted by $\pi_0(a \mid x)$, where $\pi_0(a \mid x) > 0$. Note that unlike previous works on uplift modeling, which require the data-collection policy $\pi_0$ to be independent of the feature $x$, we only require that $\pi_0(0 \mid x)$ is independent of $x$.
Based on the dataset, we design an unbiased estimate of the uplift response for a specific policy $\pi$. It is done by estimating the expected action response for the policy (Lemma 1) and the nature response (Lemma 2) separately. Here we use $\Pr[\cdot]$ to denote the probability of a random event, and $\mathbb{1}[\cdot]$ is the 0/1 indicator function. Moreover, $\pi(x_i) = a$ represents a realization in which policy $\pi$ chooses action $a$ for $x_i$, while $a_i = a$ represents the fact that action $a$ was taken on $x_i$ in the dataset.
Lemma 1. Given a policy $\pi$, for each action $a$, define a new random variable
$$Y_a = \mathbb{1}[a_i = a]\, \frac{\pi(a \mid x_i)}{\pi_0(a \mid x_i)}\, r_i .$$
Then $\mathbb{E}[Y_a] = \mathbb{E}_x\big[\pi(a \mid x)\, R(x, a)\big]$, the contribution of action $a$ to the expected action response under $\pi$.
The proof follows directly from the one for the inverse propensity score , except that each action is handled separately through importance sampling. Summing over actions, we have
$$\mathbb{E}\Big[\sum_{a} Y_a\Big] = \mathbb{E}_x\Big[\sum_{a} \pi(a \mid x)\, R(x, a)\Big],$$
the expected action response under $\pi$.
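As a numerical sanity check of this importance-sampling step (a toy simulation of ours, with made-up policies and a noiseless response), the sample mean of the re-weighted responses for one action approaches the corresponding term of the expected action response under the evaluated policy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.uniform(0.0, 1.0, n)                       # 1-d features
# data-collection policy pi_0 over actions {0, 1, 2}, column 0 = no action
pi0 = np.stack([0.5 * np.ones(n), 0.25 + 0.25 * x, 0.25 - 0.25 * x], axis=1)
u = rng.random((n, 1))
a = np.minimum((u > pi0.cumsum(axis=1)).sum(axis=1), 2)  # logged actions
r = a * x                                          # toy response R(x, a) = a * x
# policy pi to evaluate: pi(1|x) = x, pi(2|x) = 1 - x
pi = np.stack([np.zeros(n), x, 1.0 - x], axis=1)
# re-weighted response for action 1; its expectation is
# E_x[pi(1|x) R(x, 1)] = E[x^2] = 1/3
Y1 = (a == 1) * pi[:, 1] / pi0[:, 1] * r
est = Y1.mean()                                    # close to 1/3
```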
Specifically, the nature response of customers with respect to an evaluated policy can also be estimated similarly. It is worth noticing that although the total nature response of all customers is a constant (independent of the policy), we still need to estimate it accurately to complete our evaluation of the expected uplift response. A detailed analysis of its necessity is provided at the end of this section.
Lemma 2. Given a policy $\pi$, for each action $a$, define a new random variable
$$Z_a = \mathbb{1}[a_i = 0]\, \frac{\pi(a \mid x_i)}{\pi_0(0 \mid x_i)}\, r_i .$$
Then $\mathbb{E}[Z_a] = \mathbb{E}_x\big[\pi(a \mid x)\, N(x)\big]$, the conditional expected nature response with respect to action $a$.
The proof of Lemma 2 is similar to that of Lemma 1, except that it uses the fact that the observed response equals the nature response for samples receiving no action. Note that here we specifically treat the conditional expected nature response with respect to each action separately, because each of them does depend on the policy $\pi$. Calculating the sample average of its realizations over the corresponding samples will help determine rewards in our policy gradient design, as action-dependent baselines (see Eqn. 9).
Now we can state Theorem 1. Intuitively, the action responses in the collected data can be regarded as the real responses we would observe if we used a new policy to choose actions, after correcting for the shift in action proportions between the old data-collection policy and the new policy. The expected action responses for different actions and the expected nature responses can then be estimated separately, and the difference between them is the desired uplift response.
Theorem 1. Given a policy $\pi$, the expected uplift response under $\pi$ is
$$\mathbb{E}_x\Big[\sum_{a} \pi(a \mid x)\, U(x, a)\Big].$$
Let $x_i$ be the feature of the $i$-th customer and $n$ be the number of customers; then the difference between the sample averages of $\sum_a Y_a$ and $\sum_a Z_a$
is an unbiased estimate of the expected uplift response.
We call this unbiased estimator the Uplift Modeling General Metric (UMG). According to Theorem 1 and Chebyshev's inequality, suppose the variance of each sample's contribution to the UMG metric is bounded by $\sigma^2$; then, with probability at least $1 - \delta$, the estimation error is of order $O(\sigma / \sqrt{n \delta})$, which yields an upper bound on UMG's sample complexity.
Such an unbiased estimator can serve as a performance metric for any uplift modeling problem with multiple actions and general response types. In other words, our objective is actually to find a policy with maximal UMG when applied to any specific dataset.
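In code, the estimator amounts to two importance-weighted averages (a sketch with our own array conventions: `pi_new` and `pi_old` hold, per logged sample, each policy's probabilities over actions, with column 0 the no-action probability):

```python
import numpy as np

def umg(pi_new, pi_old, a, r):
    """Uplift Modeling General Metric: the IPS estimate of the expected
    action response under the new policy, minus the IPS estimate of the
    expected nature response taken from the control group (action 0)."""
    idx = np.arange(len(r))
    w = pi_new[idx, a] / pi_old[idx, a]                 # importance weights
    action_resp = np.mean(w * r)                        # expected action response
    nature_resp = np.mean((a == 0) * r / pi_old[:, 0])  # expected nature response
    return action_resp - nature_resp
```

For instance, with uniform logging over three actions and a response of the form nature + uplift, evaluating the policy that always takes the best action returns an estimate close to that action's true uplift.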
2.3 Self-Normalized Uplift Modeling General Metric
As suggested by , the self-normalized estimator is commonly used to control the variance of estimators based on importance sampling. Specific to our metric, we can adjust UMG to
Here $x_i$ is the $i$-th individual's feature in the dataset, and $a_i$ is the corresponding action taken on him. The idea of introducing the self-normalized estimator is to use standardized weights to correct for the difference between the sample average and the expected value of the importance-sampling weights. Based on the standard theory of ratio estimates, the bias is of order $O(1/n)$ and can be ignored for large $n$; i.e., the estimator is asymptotically unbiased, and its variance is reduced (see  for a more refined analysis). Thus, we call this metric the Self-Normalized Uplift Modeling General Metric (SN-UMG).
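In code, the self-normalized variant divides each importance-weighted sum by the sum of its weights instead of by the sample size (a sketch with our own conventions: `pi_new` and `pi_old` hold each policy's per-sample action probabilities, column 0 being no action):

```python
import numpy as np

def sn_umg(pi_new, pi_old, a, r):
    """Self-normalized UMG: asymptotically unbiased, with lower variance
    than dividing the weighted sums by n."""
    idx = np.arange(len(r))
    w = pi_new[idx, a] / pi_old[idx, a]   # weights for the action response
    action_resp = np.sum(w * r) / np.sum(w)
    w0 = (a == 0) / pi_old[:, 0]          # weights for the control group
    nature_resp = np.sum(w0 * r) / np.sum(w0)
    return action_resp - nature_resp
```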
Both UMG and SN-UMG can serve as evaluation metrics for uplift modeling approaches, and we compare their efficiency through simulation experiments in Section 4.2. At the same time, SN-UMG is further used to estimate the rewards and the corresponding Q-values in the training process of our approach (see Alg. 1). Proper reward design is one of the most important factors for a successful reinforcement learning process, and such an asymptotically unbiased estimator helps design exact rewards.
2.4 Relationship between Uplift Modeling and Offline Contextual Bandit Problem
Theoretically, the optimal policy maximizing the expected uplift response is also the optimal policy maximizing the expected action response; that is,
$$\arg\max_{\pi} \ \mathbb{E}_x\Big[\sum_{a} \pi(a \mid x)\, U(x, a)\Big] \;=\; \arg\max_{\pi} \ \mathbb{E}_x\Big[\sum_{a} \pi(a \mid x)\, R(x, a)\Big],$$
since the two objectives differ only by the constant $\mathbb{E}_x[N(x)]$.
There are also problems whose objective is to maximize the expected action response. For example, the offline contextual bandit problem seeks the optimal policy for taking actions on individuals given a dataset from a previous experiment. As in uplift modeling, for each individual (feature), only one specific action was taken, and only the corresponding response is known. In terms of the optimal solution, the offline contextual bandit problem can be made equivalent to uplift modeling by treating the no-action data as one ordinary action. However, since it is never possible to obtain the optimal policy in closed form, these two objectives are indeed different when seeking approximately optimal policies.
Before we show their difference, we first define the performance of an approximation algorithm. Suppose the approximation algorithm takes a dataset as input and outputs a policy. Its objective is to maximize an expected response, whose maximizer is the optimal solution. The performance of the algorithm is then naturally defined by the extent to which it approaches the optimal solution in the sense of the objective, i.e., the ratio between the objective value of its output policy and that of the optimal solution.
Now we focus on the class of approximation algorithms taking a dataset with the structure of uplift modeling as input, with the expected response as objective. Suppose, as an ideal case, that we are able to obtain data for both action responses and uplift responses. Then the algorithm's performance should be similar when taking either dataset as input with the corresponding objective, and smaller than one, of course.
On the other hand, when we evaluate the output policy of the former by the objective of the uplift response, we have
Here the first equality in the last line holds since the optimal solution is the same for these two problems, as we saw before. Whether the inequality in the second line holds depends on the expected nature response. We state the analysis formally in the following theorem.
If the expected nature response is zero, then the algorithm, taking the action-response dataset as input, has the same performance on the objective of expected uplift response as on the objective of expected action response.
If the expected nature response is positive, then the algorithm, taking the action-response dataset as input, will always achieve worse performance on the objective of expected uplift response than on the objective of expected action response.
In other words, if the expected nature response is zero, uplift modeling and the offline contextual bandit problem are equivalent. Otherwise, as in most common cases, the expected nature response is positive (for example, some customers come to a restaurant even when no discount on any food is provided), and the two problems are inherently different in the sense of approximate solutions. Since we are considering the uplift modeling problem, it is better to seek a solution directly related to the uplift response, rather than solving another problem that seems equivalent but turns out not to be.
3 Reinforcement Learning Method For Uplift Modeling
In this section, we first show how to reformulate the uplift modeling problem as an MDP by constructing an equivalent Markov chain for the problem. Then we show in detail how to use the policy gradient algorithm to solve it. The uplift modeling problem is particularly suitable for an MDP and RL reformulation: the exact uplift value is typically not available for each individual sample, but the average value over a batch can be estimated statistically according to Theorem 1. Such a situation corresponds to receiving delayed reward signals after an entire episode of the MDP.
In summary, an overview of our approach to uplift modeling is illustrated in Fig. 1(a). It contains two parts: action selection and evaluation. The policy network chooses an action for each sample, and the output is then evaluated by the evaluation function. The policy network updates its parameters iteratively according to the result of the evaluation.
3.2 MDP Model for Uplift Modeling
A Markov decision process is a 4-tuple $(S, A, P, R)$, where $S$ is the set of states and $A$ is the set of actions; $P(s_{t+1} \mid s_t, a_t)$ is the probability that choosing action $a_t$ in state $s_t$ at time $t$ leads to state $s_{t+1}$ at time $t+1$; and $R(s_t, a_t, s_{t+1})$ is the immediate reward received when transiting from state $s_t$ to state $s_{t+1}$ due to action $a_t$.
We can model the uplift problem as an MDP.
Recall our definition of uplift modeling, Eqn. (2): it aims to maximize the expected uplift response.
Thus, at each time $t$, the agent observes a state (i.e., a user's feature vector), its policy chooses an action (one of the actions or no action), and it receives a reward (the uplift response of the user after receiving the action).
The transition probability from $s_t$ to $s_{t+1}$ is independent of the state and action: it is equal for all state-action pairs. Formally, we define $P(s_{t+1} \mid s_t, a_t) = 1/n$ for every state $s_{t+1}$ and each of the actions.
In uplift modeling, this transition kernel is always the same, and the stationary distribution of the constructed Markov chain is the uniform distribution over all samples in the dataset. Thus sampling from the Markov chain is equivalent to sampling uniformly from the dataset, independently of the action. Fig. 1(b) shows the MDP of uplift modeling.
State: a state in the MDP is an object's feature vector in uplift modeling. Action: the MDP has the same action set as the original uplift modeling problem. The policy decides which action to choose in order to maximize the reward; we sample the action from the policy's distribution and adopt a softmax function as the policy function.
Transition: at each step, the chain may transit to any state with equal probability.
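Concretely, the policy head can be sketched as a softmax over per-action scores, with one action sampled per state (the linear scoring layer below is a stand-in of ours for the last layer of the policy network):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sample_actions(X, W, b, rng):
    """Score actions, convert scores to probabilities, and draw one action
    per row of X by inverse-CDF sampling."""
    probs = softmax(X @ W + b)
    u = rng.random((X.shape[0], 1))
    actions = (u > probs.cumsum(axis=1)).sum(axis=1)
    return np.minimum(actions, probs.shape[1] - 1), probs  # guard rounding
```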
3.3 Policy Gradient Method
Reinforcement learning agents learn to maximize their expected future rewards through interaction with an environment. At each step, the agent chooses an action $a$ in the current state $s$ according to a policy $\pi_\theta(a \mid s)$, where $\theta$
is the parameter of the neural network. We can evaluate the policy according to its long-term expected reward,
$$J(\theta) = \mathbb{E}_{\pi_\theta}\Big[\sum_{t} r_t\Big].$$
Therefore, we can directly optimize the parameter $\theta$ to maximize $J(\theta)$. According to the policy gradient theorem , we can calculate the gradient by
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\big[\nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a)\big], \qquad (5)$$
where $Q^{\pi_\theta}(s, a)$ is the state-action value given policy $\pi_\theta$, defined as
$$Q^{\pi_\theta}(s, a) = \mathbb{E}\big[R(s, a, s') + V^{\pi_\theta}(s')\big],$$
where $s'$ is the next state and $V^{\pi_\theta}(s')$ is the state value starting from $s'$, defined as
$$V^{\pi_\theta}(s) = \mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}\big[Q^{\pi_\theta}(s, a)\big].$$
We denote $Q^{\pi_\theta}(s, a)$ by $Q(s, a)$ for convenience. According to Eqn. (5), the key to calculating the gradient is knowing $Q(s, a)$. We therefore introduce how to estimate $Q(s, a)$ specifically for uplift modeling as follows:
$Q(s, a)$ contains two parts: the immediate reward and the value of the subsequent states. The latter can be estimated by calculating SN-UMG: in each episode, we randomly sample batches of samples according to the MDP to estimate this value. The immediate reward of a single sample is hard to know, so we approximate it. According to Theorem 1, we find: (1) if the logged action $a_i$ equals the action chosen by the policy, then the response has a positive impact on the result of that action; (2) if $a_i = 0$ while the policy chooses some action, then the response has a negative impact on the result of that action; (3) in the other cases, the sample has no impact on the result, so we do not consider them. Thus, we set the correspondingly signed, importance-weighted response as the estimate of the uplift response of this specific sample, according to the UMG metric.
In addition, in order to estimate the value $Q(s, a)$ accurately, the size of each batch needs to be large, and we subtract baselines to reduce the variance of the gradients. In particular, we adopt the action-dependent baseline, which has recently been shown to be effective; that is,
Here the baseline for each action is the sample average of the conditional expected nature response with respect to that action, as introduced in Lemma 2, which can be calculated in the process of computing SN-UMG. The Q-value estimate is averaged over multiple batches in order to estimate it accurately. Finally, we optimize the parameters for each batch,
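As a simplified stand-in for that computation (our own construction: plain control-group averages replace the SN-UMG-weighted quantities), the per-action baselines and the resulting centered rewards can be sketched as:

```python
import numpy as np

def action_baselines(r, a_logged, chosen, n_actions):
    """For each action k, average the control-group responses among samples
    for which the current policy chose k -- an empirical stand-in for the
    conditional nature response of Lemma 2."""
    b = np.zeros(n_actions)
    for k in range(n_actions):
        mask = (chosen == k) & (a_logged == 0)
        b[k] = r[mask].mean() if mask.any() else 0.0
    return b

def centered_rewards(r, a_logged, chosen, n_actions):
    """Subtract each sample's action-dependent baseline from its reward."""
    b = action_baselines(r, a_logged, chosen, n_actions)
    return r - b[chosen]
```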
The whole algorithm is shown in Algorithm 1.
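The gradient update at the heart of the algorithm can be sketched for a linear-softmax policy (a minimal REINFORCE step of our own; the paper's Algorithm 1 uses a neural network and SN-UMG-derived, baseline-corrected rewards, which are assumed precomputed here as `adv`):

```python
import numpy as np

def reinforce_step(X, actions, adv, probs, W, b, lr=0.1):
    """One ascent step on J(theta). For a softmax policy,
    d log pi(a|s) / d logits = onehot(a) - probs, so the policy gradient is
    the advantage-weighted average of that quantity, back-propagated to W, b."""
    n = X.shape[0]
    onehot = np.zeros_like(probs)
    onehot[np.arange(n), actions] = 1.0
    dlogits = (onehot - probs) * adv[:, None]
    W += lr * (X.T @ dlogits) / n
    b += lr * dlogits.mean(axis=0)
    return W, b
```

Iterating this step on batches sampled from the constructed Markov chain increases the probability of actions with positive advantage.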
4.1 Experiment Setup
We first introduce the open dataset, simulation dataset and real business dataset we used to evaluate our method compared with other baselines.
Kevin Hillstrom's MineThatData blog  provides an open dataset containing the results of an e-mail campaign for an Internet-based retailer. It contains information about 64,000 customers with basic marketing attributes, such as the amount of money spent in the previous year or when the last purchase was made. We use the part of the dataset containing the visit response and the women's advertisement, because the men's advertisement is ineffective and the purchase signals are sparse. We use it for the single-action, binary-response experiment.
For the multiple-action, general-response-type experiments, no open dataset is large enough, so we generate a simulation dataset. The generation algorithm is a modified version of the one in , which was designed for a decision-tree method, so the uplift values of different actions in their dataset depend on only one attribute; our method has no such requirement.
The feature space is a 50-dimensional hyper-cube of side length 10. Features are uniformly distributed in the feature space. There are four different actions. The response under each action is defined as below.
The action response consists of the nature response, the uplift response, and white noise. The nature response and the uplift responses share the same parametric form but with different parameters: one group of parameters is randomly chosen for the nature response, and then, for each action, a new group of parameters is randomly chosen for the corresponding uplift response. Finally, the noise is set to be zero-mean Gaussian. We generate 500,000 samples for each action and a control group with 500,000 samples; thus, there are 2,500,000 samples in total.
Real Business Dataset
We also evaluate our methods on a real business dataset from a company. The dataset is selected from the marketing records for its new service in September 2017, when coupons of different types were sent to customers to attract them to use the service and further become long-term members. The type of each coupon was chosen at random with equal probability across all levels, independently of the customers' features. We took 620,000 samples of these customers' features (264 related attributes, such as one's residence, age, gender, and so on), the types of the coupons they received (actions), and their responses (whether they used the service, and whether they paid for a long-term membership). The ratio of positive to negative samples is close to 1:200.
We compare our methods with several baselines in both single-action and multiple-action settings.
Separate Model Approach (SMA) : using a separate model for each group of people receiving the same action to predict the response given the features, and choosing the action with the largest predicted response. It can be applied to multiple actions and general responses. We consider both Random Forests and neural networks as the separate models, denoted by SMA-RF and SMA-NN.
Random Forests (Uplift RF) : using a specific splitting criterion for random forests. We use the package implemented in R, which can only be applied to a single action and binary response.
Offset Tree (OT) : reducing the problem to binary classification and reusing fully supervised binary classification algorithms (base algorithms). We use the Python package contextualbandits, which requires more than two actions. We consider both Random Forests and Logistic Regression as the base algorithms, denoted by OT-RF and OT-LR.
4.1.3 Parameter Selection
We evaluate our model using three-fold cross-validation. The policy network is a three-layer fully connected network, with a hidden layer of size 512 for the open dataset and the synthetic dataset, and of size 1024 for the real-world business data. In each episode, we take 10 random batches, each containing 10,000 examples. The splitting ratio of training, validation, and testing is 0.6 : 0.2 : 0.2. We stop training when the model cannot achieve a better result on the validation set within 1,000 episodes. The learning rate is 0.1.
4.2 Efficiency of UMG and SN-UMG
In this subsection, we verify our proposed metrics UMG and SN-UMG on the synthetic datasets and show their convergence.
The dataset for these experiments is generated by the method introduced in Section 4.1.1, with the same action set. We adopt two kinds of action-offering policies in two experiments:
Uniform: each action is offered with equal probability.
Policy: we choose the first five attributes of each feature vector; the probability of offering an action is proportional to a function of these attributes. That is,
We test the errors between the real uplift response and its estimates from UMG and SN-UMG under different data sizes in order to examine their convergence efficiency. For each fixed data size, we run the experiment 10 times to estimate the mean and variance of the two metrics, shown in Fig. 2 and Fig. 3. Fig. 2 shows the convergence curves of UMG and SN-UMG on the uniform dataset, and Fig. 3 shows those on the policy dataset. We find that SN-UMG outperforms UMG in both experiments, in both accuracy and stability. When the data size exceeds 10,000, both metrics have almost converged. Both metrics perform better on the uniform data than on the policy data, while the variances of SN-UMG are consistently lower than those of UMG.
4.3 Single Action and Binary Response
We compare our method with SMA-RF, SMA-NN, and Uplift RF on the MineThatData dataset. Methods are evaluated not only by our proposed UMG metric but also by the Qini curve and Qini coefficient. Basically, the larger the area under a policy's Qini curve, the better its performance; the Qini coefficient is this area minus the area of a random policy, divided by a constant that depends only on the dataset.
Evaluating a policy by the Qini curve requires, for each individual, the probability that the policy takes the single action, rather than its binary output (take or not); thus we adjust our method's output when evaluating by Qini. More precisely, we adopt multiple discrete actions to represent different probabilities of offering the single action in this experiment: when our policy chooses a given discrete action for a sample, it means the probability of taking the real action for this sample lies in the corresponding sub-interval.
The results are shown in Table 1, and we also plot the Qini curves of the methods with large Qini coefficients in Fig. 4. That Uplift RF performs significantly better than SMA-RF indicates that a method designed for uplift modeling is more effective under equal conditions. That SMA-NN obtains the best results apart from RLift shows that it is necessary to bring more powerful models, such as neural networks, into uplift modeling. Our method achieves state-of-the-art performance on both the SN-UMG metric and the Qini coefficient, because it models the uplift signal and adopts deep learning simultaneously.
It is worth noticing that fluctuations occur in the Qini curves of RLift, SMA-RF, and SMA-NN. This is because these methods are optimized over the whole dataset, whereas the Qini curve exposes intermediate results for individual samples. Since the final objective is measured by the area between the Qini curve and the horizontal axis, we conjecture that there may be a trade-off between the final result and the intermediate results. Moreover, the Qini curve only applies to the case of binary response and binary action, while our method is not limited to this setting. It is therefore unsuitable for the subsequent experiments, and we use it here only to relate our method to previous work.
4.4 Multiple Actions and General Response
For uplift modeling with multiple actions, only Uplift KNN, SMA-RF, and SMA-NN can serve as baselines. Since no sufficiently large open dataset exists for this setting, we generate synthetic datasets for which the optimal results are known. The evaluation metric is SN-UMG.
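As an illustration of the kind of generator involved (our own toy construction; the paper's actual synthetic datasets may differ), one can make each action's expected response a nonlinear function of the features, so the optimal action per individual, and hence the optimal uplift, is known by construction:

```python
import numpy as np

def make_synthetic(n, n_actions=4, n_features=5, seed=42):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, n_features))
    W = rng.normal(size=(n_features, n_actions))
    mu = np.tanh(X @ W)                        # expected response per action
    logged_action = rng.integers(0, n_actions, size=n)  # random logging policy
    response = mu[np.arange(n), logged_action] + rng.normal(scale=0.1, size=n)
    optimal_action = mu.argmax(axis=1)         # ground-truth optimum
    optimal_value = mu.max(axis=1)
    return X, logged_action, response, optimal_action, optimal_value

X, a, r, a_star, v_star = make_synthetic(1000)
```

Because the optimal value is known for every individual, the gap between a learned policy's estimated uplift and the optimum can be measured exactly.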
The results are shown in Table 1. Our method performs much better than the baselines. When the relation between responses and features is extremely complicated, the advantage of RLift over the other baselines becomes more pronounced. Moreover, when the dataset is large enough, the performance of RLift is very close to the optimal result.
4.5 Real Business Experiments
We also test our method on a real-world business dataset, evaluated by SN-UMG. It contains records of multiple actions with binary responses, so we use SMA-RF and SMA-NN as baselines. Uplift KNN is not considered because of its low efficiency in large-scale data environments.
The dataset contains two kinds of binary responses (), corresponding to two practical objectives, so we conduct two kinds of experiments in this part. First, we treat the uplift of each binary response as a separate objective, yielding two experiments with multiple actions and binary response. Table 3 shows the results on each objective. Since positive samples are extremely sparse in this dataset, as mentioned above, SMA-RF and SMA-NN perform poorly, while Random performs relatively well. By contrast, RLift performs robustly on this task, because it uses statistical results as rewards to guide training, which mitigates the negative effects of sparsity.
Second, we treat a weighted combination of the two responses as a single objective, giving an experiment with multiple actions and a general response. Such an objective is common in practical business, where companies care about several objectives simultaneously. Table 4 shows the results for objectives with different weights. The weight of is always larger than that of , which conforms to the actual demand. In these tasks, RLift demonstrates its flexibility on multi-objective tasks and its adaptability to complicated tasks. All results show that our model outperforms Random, SMA-RF, and SMA-NN.
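The combined objective can be sketched as a simple weighted sum; the weights below are illustrative placeholders (the text only states that the first weight is kept larger than the second):

```python
import numpy as np

def combined_response(r1, r2, w1=0.7, w2=0.3):
    # Fold two binary responses into one general-valued response,
    # which the multi-action / general-response setting then optimizes.
    return w1 * np.asarray(r1, dtype=float) + w2 * np.asarray(r2, dtype=float)

print(combined_response([1, 0, 1], [0, 1, 1]))  # -> [0.7 0.3 1. ]
```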
In this paper, we propose a new evaluation metric for uplift modeling with multiple actions and general response types, and prove its unbiasedness. We then solve the uplift modeling problem with a deep reinforcement learning method. During training, the variance of the estimated rewards and Q-values is further reduced by taking advantage of the unbiased metric together with action-dependent baselines. Compared with existing methods on open, synthetic, and real datasets, our method achieves state-of-the-art performance under both the newly proposed metric and traditional metrics.
-  Alina Beygelzimer and John Langford. The offset tree for learning with partial labels. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 129–138. ACM, 2009.
-  David Maxwell Chickering and David Heckerman. A decision theoretic approach to targeted advertising. In Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence, pages 82–88. Morgan Kaufmann Publishers Inc., 2000.
-  Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601, 2011.
-  Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. Reinforcement learning for relation classification from noisy data. In Proceedings of AAAI, 2018.
-  Evan Greensmith, Peter L Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471–1530, 2004.
-  Leo Guelman, Montserrat Guillén, and Ana M Pérez-Marín. A survey of personalized treatment models for pricing strategies in insurance. Insurance: Mathematics and Economics, 58:68–76, 2014.
-  Leo Guelman, Montserrat Guillén, and Ana M Pérez-Marín. Uplift random forests. Cybernetics and Systems, 46(3-4):230–248, 2015.
-  Pierre Gutierrez and Jean-Yves Gérardy. Causal inference and uplift modelling: A review of the literature. In International Conference on Predictive Applications and APIs, pages 1–13, 2017.
-  Behram Hansotia and Brad Rukstales. Incremental value modeling. Journal of Interactive Marketing, 16(3):35, 2002.
-  Maciej Jaskowski and Szymon Jaroszewicz. Uplift modeling for clinical trial data. In ICML Workshop on Clinical Data Analysis, 2012.
-  Hillstrom Kevin. The minethatdata e-mail analytics and data mining challenge. http://blog.minethatdata.com/2008/03/minethatdata-e-mail-analytics-and-data.html, 2008. Accessed April 4, 2018.
-  Augustine Kong. A note on importance sampling using standardized weights. University of Chicago, Dept. of Statistics, Tech. Rep, 348, 1992.
-  Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436, 2015.
-  Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, and Qiang Liu. Action-dependent control variates for policy optimization via stein identity. In International Conference on Learning Representations, 2018.
-  Victor SY Lo. The true lift model: a novel data mining approach to response modeling in database marketing. ACM SIGKDD Explorations Newsletter, 4(2):78–86, 2002.
-  Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
-  Nicholas J Radcliffe and Patrick D Surry. Real-world uplift modelling with significance-based uplift trees. White Paper TR-2011-1, Stochastic Solutions, 2011.
-  NJ Radcliffe. Using control groups to target on predicted lift: Building and assessing uplift models. Direct Market J Direct Market Assoc Anal Council, 1:14–21, 2007.
-  Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.
-  Piotr Rzepakowski and Szymon Jaroszewicz. Decision trees for uplift modeling. In Data Mining (ICDM), 2010 IEEE 10th International Conference on, pages 441–450. IEEE, 2010.
-  Piotr Rzepakowski and Szymon Jaroszewicz. Uplift modeling in direct marketing. Journal of Telecommunications and Information Technology, pages 43–50, 2012.
-  Uri Shalit, Fredrik Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. arXiv preprint arXiv:1606.03976, 2016.
-  David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
-  Xiaogang Su, Joseph Kang, Juanjuan Fan, Richard A Levine, and Xin Yan. Facilitating score and causal inference trees for large observational studies. Journal of Machine Learning Research, 13(Oct):2955–2994, 2012.
-  Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. 2011.
-  Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063, 2000.
-  Adith Swaminathan and Thorsten Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In International Conference on Machine Learning, pages 814–823, 2015.
-  Adith Swaminathan and Thorsten Joachims. The self-normalized estimator for counterfactual learning. In Advances in Neural Information Processing Systems, pages 3231–3239, 2015.
-  George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E Turner, Zoubin Ghahramani, and Sergey Levine. The mirage of action-dependent baselines in reinforcement learning. arXiv preprint arXiv:1802.10031, 2018.
-  Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, (just-accepted), 2017.
-  Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M Bayen, Sham Kakade, Igor Mordatch, and Pieter Abbeel. Variance reduction for policy gradient with action-dependent factorized baselines. arXiv preprint arXiv:1803.07246, 2018.
-  Łukasz Zaniewicz and Szymon Jaroszewicz. Support vector machines for uplift modeling. In Data Mining Workshops (ICDMW), 2013 IEEE 13th International Conference on, pages 131–138. IEEE, 2013.
-  Tianyang Zhang, Minlie Huang, and Li Zhao. Learning structured representation for text classification via reinforcement learning. AAAI, 2018.
-  Yan Zhao, Xiao Fang, and David Simchi-Levi. Uplift modeling with multiple treatments and general response types. In Proceedings of the 2017 SIAM International Conference on Data Mining, pages 588–596. SIAM, 2017.