Beta Survival Models

05/09/2019 ∙ by David Hubbard, et al. ∙ Netflix

This article analyzes the problem of estimating the time until an event occurs, also known as survival modeling. We observe through substantial experiments on large real-world datasets and use-cases that populations are largely heterogeneous. Sub-populations have different mean and variance in their survival rates, requiring flexible models that capture heterogeneity. We leverage a classical extension of the logistic function into the survival setting to characterize unobserved heterogeneity using the beta distribution. This yields insights into the geometry of the problem as well as efficient estimation methods for linear, tree and neural network models that adjust the beta distribution based on observed covariates. We also show that the additional information captured by the beta distribution leads to interesting ranking implications as we determine who is most-at-risk. We show theoretically that the ranking is variable as we forecast forward in time and prove that pairwise comparisons of survival remain transitive. Empirical results using large-scale datasets across two use-cases (online conversions and retention modeling) demonstrate the competitiveness of the method. The simplicity of the method and its ability to capture skew in the data make it a viable alternative to standard techniques, particularly when we are interested in the time to event and when the underlying probabilities are heterogeneous.


1. Introduction

Survival modeling, customer lifetime value (Gupta et al., 2006) and product ranking (Rudin et al., 2012; Chang et al., 2012) are of practical interest when we want to estimate the time until a specific event occurs or rank items to estimate which will encounter the event first. Traditionally leveraged in medical applications, survival regression is today extensively used in large-scale business settings such as predicting time to conversion in online advertising and predicting retention (or churn) in subscription services. Standard survival regression involves a maximum likelihood estimation problem over a specified continuous distribution of the time until event (exponential for the Accelerated Failure Time model (Kalbfleisch and Prentice, 2011)) or of the hazard function (in the case of Cox Proportional Hazards (Cox, 1972)). In practice, time-to-event problems are often converted to classification problems by choosing a fixed time horizon appropriate for the application at hand. One then has to balance training the model on recent data against the longer labeling horizon that might be more desirable. Survival models avoid this trade-off by relying on right-censoring: the problem becomes a missing-data problem in which not all events are observed within the horizon over which the data is collected.

There is evidence (which will be further demonstrated in this article) of the importance of heterogeneity in a variety of real-world time-to-event datasets. Heterogeneity means that items in a dataset have different survival means and variances. For instance, heterogeneity in a retention modeling context arises because, as time increases, the customers with the highest probability of retaining are the ones who still remain in the dataset. Without considering this effect, it might appear that the baseline retention probability has increased over time when in fact the first-order effect is a mover/stayer bias. Methods that do not consider multiple decision points can therefore fail to adequately account for this effect and fall victim to the so-called ruses of heterogeneity (W. Vaupel and Yashin, 1985).

Consider the following example inspired by Ben-Porath (Ben-Porath, 1973) where we have 2 groups of customers: in one group the customers all have a retention probability of 0.5, while in the other the customers are evenly split between retention probabilities of 0 and 1. In this case, after having observed only one decision point, we would observe the retention probabilities of the two groups to be identical. However, if we consider multiple decision points it becomes clear that the latter population has a much higher long-term retention rate because some customers therein retain to infinity. In order to capture this unobserved heterogeneity we need a distribution that is flexible enough to capture these dynamics and ideally as simple as possible. To that end we posit a beta prior on our instantaneous event probabilities. The beta has only 2 parameters, yet is flexible enough to capture right-skewed, left-skewed, U-shaped, or normal-looking distributions.

Figure 1. Heterogeneity gives rise to different survival distributions and rankings: different densities evolve differently over time, and learning this information improves the resulting rankings.

Consider the example in Figure 1. A dataset contains three heterogeneous items (green dots, orange plus and blue cross). These items are each characterized by beta distributions (left panel). At each time period, each item samples a Bernoulli-distributed coin from its beta distribution and flips it to determine if the item will retain. In the middle panel, we see the retention of the items over time and in the right-most panel we see the ranking of the items over time. Even though the items are sampling from fixed beta distributions, the ranking of which item is most at risk changes over time. Thus, a stationary set of beta distributions leads to non-stationary survival and rankings. Such nuance cannot be captured by summarizing each item with only a point-estimate of survival (as opposed to a 2-parameter beta distribution).

Due to the discrete and repeated nature of the decision problems over time, we leverage a geometric hypothesis to recover survival distributions. We estimate the parameters of this model via an empirical Bayes method which can be efficiently implemented through the use of analytical solutions to the underlying integrals. This model, termed the beta-logistic, was first introduced by Heckman and Willis (Heckman and Willis, 1977), and was also studied by Fader and Hardie (Fader and Hardie, 2007). We find that in practice this model fits the discrete decision data quite well, and that it allows for accurate projections of future decision points.

We extend the beta-logistic model to the case of large-scale trees or neural-network models that adjust the beta distribution given input covariates. These leverage the use of recurrence relationships to efficiently compute the gradient. Through the beta prior underpinning the model, we show empirically that the beta-logistic is able to model a wide range of heterogeneous behaviors that other survival or binary classification models fail to capture, especially in the presence of skew. As we will see, the beta-logistic model outperforms a typical binary logistic regression in real-world examples, and provides tighter estimated posteriors compared to a typical Laplace approximation.

We also present theoretical results on ranking with beta distributions. We show that pairwise comparisons between beta distributions can be summarized by the median of the two distributions. This makes ranking with beta distributions a provably transitive problem (pairwise distribution comparisons are generally non-transitive (Savage, 1994)). Therefore, using the medians of beta distributions allows us to rank the entire population to see which subscribers or which items are most-at-risk. The results are then extended to the case where we rank items across multiple time horizons by approximating the evolution of the survival distribution over time as a product of beta distributions as in (Fan, 1991). Thus we obtain a consistent ranking of items which evolves over time and is summarized by medians (rather than means) to improve the accuracy of ranking who is most-at-risk.

This paper is organized as follows. We first show the beta-logistic derivation as well as reference the recursion formulas which make the computation efficient. We also make brief remarks about convexity and observe that in practice we rarely encounter convergence issues. We then present several simulated examples to help motivate the discussion. This is followed by an empirical performance evaluation of the various models across three large real-world datasets: a sparse online conversion dataset and two proprietary datasets from a popular video streaming service involving subscription and viewing behaviors. In all the examples, the beta-logistic outperforms other baseline methods, and seems to perform better in practice regardless of how many attributes are considered. Even though there will always be unobserved variations between individuals that influence the time-to-event, this empirical evidence is consistent.

2. The Beta-Logistic for Survival Regression

2.1. Model derivation

Denote by $D = \{(x_i, t_i, c_i)\}_{i=1,\dots,n}$ a dataset where $x_i \in \mathbb{R}^d$ are covariates, $t_i \in \mathbb{N}$ is the discrete time to event for an observed (i.e. uncensored) datapoint ($c_i = 0$) and $t_i$ is the right-censoring time for a datapoint for which the event of interest hasn't happened yet ($c_i = 1$). A traditional survival model would posit a parametric distribution $P(T \mid x)$ and then try to maximize the following empirical likelihood over a class of functions $f$:

$$\mathcal{L} = \prod_{i \,:\, c_i = 0} P\left(T = t_i \mid f(x_i)\right) \prod_{i \,:\, c_i = 1} P\left(T > t_i \mid f(x_i)\right).$$

Unfortunately, unless constrained to a few popular distributions such as the exponential one, the maximization of such a quantity is usually intractable for most classes of functions $f$.

Let us instead assume that at each discrete decision point, a customer decides to retain with some constant (point-estimate) probability $1 - \theta(x)$, where $\theta$ is some function of the covariates $x$. Then the instantaneous event probability at decision point $t$ is characterized by a shifted geometric distribution as follows:

$$P(T = t \mid \theta(x)) = \theta(x)\left(1 - \theta(x)\right)^{t - 1}, \qquad t \geq 1.$$

This then gives the following survival equation:

$$S(t \mid \theta(x)) = P(T > t \mid \theta(x)) = \left(1 - \theta(x)\right)^{t}. \quad (1)$$

This geometric assumption follows from the discrete nature of the decisions customers need to make in a subscription service, when continuing to the next episode of a show, etc. It admits a simple and straightforward survival estimate that we can also use to project beyond our observed time horizon. Now, in order to capture the heterogeneity in the data, we can instead assume that $\theta$ follows a conditional beta prior as opposed to being a point-estimate:

$$\theta \sim \mathrm{Beta}(\alpha(x), \beta(x)), \qquad f(\theta \mid \alpha, \beta) = \frac{\theta^{\alpha - 1}(1 - \theta)^{\beta - 1}}{B(\alpha, \beta)},$$

where $\alpha$ and $\beta$ are some arbitrary positive functions of covariates (e.g. measurements that characterize a specific customer or a specific item within the population).

Consider the Empirical Bayes method (Gelman et al., 2013) (also called Type-II Maximum Likelihood Estimation (Berger, 2013)) as an estimator for $\alpha(x)$ and $\beta(x)$ given the data:

$$\max_{\alpha, \beta} \; \prod_{i \,:\, c_i = 0} P\left(T = t_i \mid \alpha(x_i), \beta(x_i)\right) \prod_{i \,:\, c_i = 1} S\left(t_i \mid \alpha(x_i), \beta(x_i)\right), \quad (2)$$

where $P$ and $S$ are now marginalized over the beta prior. Using the marginal likelihood function we obtain:

$$P(T = t \mid \alpha, \beta) = \int_0^1 \theta (1 - \theta)^{t - 1} f(\theta \mid \alpha, \beta)\, d\theta = \frac{B(\alpha + 1, \beta + t - 1)}{B(\alpha, \beta)}, \qquad S(t \mid \alpha, \beta) = \frac{B(\alpha, \beta + t)}{B(\alpha, \beta)}.$$

As we will see in the next section, a key property of the beta-logistic model is that it makes the maximization of Equation 2 tractable.
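To make this tractability concrete, the following minimal sketch (ours, not the paper's code) evaluates the marginal event and survival probabilities above using scipy's log-beta function; the helper names are illustrative only.

import numpy as np
from scipy.special import betaln

def marginal_event_prob(alpha, beta, t):
    # P(T = t | alpha, beta) = B(alpha + 1, beta + t - 1) / B(alpha, beta)
    return np.exp(betaln(alpha + 1.0, beta + t - 1.0) - betaln(alpha, beta))

def marginal_survival(alpha, beta, t):
    # S(t | alpha, beta) = P(T > t | alpha, beta) = B(alpha, beta + t) / B(alpha, beta)
    return np.exp(betaln(alpha, beta + t) - betaln(alpha, beta))

# Sanity check: event probabilities up to a horizon plus the survival tail sum to 1.
a, b, horizon = 2.0, 5.0, 4
total = sum(marginal_event_prob(a, b, t) for t in range(1, horizon + 1)) + marginal_survival(a, b, horizon)
print(round(total, 6))  # -> 1.0

Working in log space with betaln avoids overflow of the beta function for large values of the parameters or of the horizon.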

Since $\alpha$ and $\beta$ have to be positive to define valid beta distributions, we use an exponential reparameterization and aim to estimate functions $a$ and $b$ such that:

$$\alpha(x) = e^{a(x)}, \qquad \beta(x) = e^{b(x)}.$$

Throughout the paper, we will also assume that $a$ and $b$ are twice-differentiable.

The name beta-logistic for such a model was coined by (Heckman and Willis, 1977), who studied it when the predictors $a(x) = a^{\top}x$ and $b(x) = b^{\top}x$ are linear functions. In this case, observe that at $t = 1$, if we want to estimate the mean event probability, the model reduces to an over-parameterized logistic regression:

$$P(T = 1 \mid \alpha(x), \beta(x)) = \frac{\alpha(x)}{\alpha(x) + \beta(x)} = \frac{1}{1 + e^{b(x) - a(x)}}.$$

2.2. Algorithm

We will now consider the general case where $a$ and $b$ are nonlinear functions, for example defined by the last layer of a neural network. Alternatively, they could be generated by a vector-output Gradient Boosted Regression Tree (GBRT) model. Using properties of the beta function (see A.1), one can show that:

$$P(T = 1 \mid \alpha, \beta) = \frac{\alpha}{\alpha + \beta} \qquad \text{and} \qquad S(1 \mid \alpha, \beta) = \frac{\beta}{\alpha + \beta}.$$

Further, the following recursion formulas hold for $t > 1$:

$$P(T = t \mid \alpha, \beta) = \left(\frac{\beta + t - 2}{\alpha + \beta + t - 1}\right) P(T = t - 1 \mid \alpha, \beta) \quad (3)$$

and

$$S(t \mid \alpha, \beta) = \left(\frac{\beta + t - 1}{\alpha + \beta + t - 1}\right) S(t - 1 \mid \alpha, \beta). \quad (4)$$

If we denote by $\ell$ the negative log-likelihood that we wish to minimize, Equation 3 and Equation 4 allow us to derive (see Appendix A.2) recurrence relationships for the individual terms of $\partial \ell / \partial a$ and $\partial \ell / \partial b$. This makes it possible, for example, to implement custom loss, gradient and Hessian callbacks in popular GBRT libraries such as XGBoost (Chen and Guestrin, 2016) and lightGBM (Ke et al., 2017). In this case, the GBRT models have "vector-output" and predict for every row both $a$ and $b$ jointly from a set of covariates, similarly to how the multinomial logit loss is implemented in these libraries. More precisely, choosing a squared loss for the split criterion in each node as is customary, the model will equally weight how well the boosting residuals (gradients) with respect to $a$ and $b$ are regressed on.

Note that because of the inherent discrete nature of the beta-logistic model, the computational complexity of evaluating its gradient on a given datapoint is proportional to the observed time $t_i$, and hence on average to $\mathbb{E}[t]$. Therefore, a reasonable time-step discretization value needs to be chosen to properly capture the survival dynamics while allowing fast inference. One can similarly implement this loss in deep learning frameworks. One would typically explicitly pad the label vectors with zeros up to the maximum censoring horizon (which brings the average computation per row to $O(\max_i t_i)$ for the mini-batch) so that computation can be expressed through vectorized operations, or use frameworks such as Vectorflow (Rostykus and Raimond, 2018) or Tensorflow (Abadi et al., [n. d.]) ragged tensors that allow for variable-length inputs and bring the computational cost back to $O(\mathbb{E}[t])$.
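As a simple illustration of the estimation procedure, the following sketch (ours, under the simplifying assumption of a single global $(a, b)$ pair rather than covariate-dependent functions) evaluates the negative log-likelihood of Equation 2 with recursions (3) and (4) and minimizes it with an off-the-shelf optimizer; the function and variable names are our own.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t, censored):
    # NLL of the unconditional beta-logistic, evaluated with recursions (3) and (4).
    a, b = params
    alpha, beta = np.exp(a), np.exp(b)
    nll = 0.0
    for ti, ci in zip(t, censored):
        p = alpha / (alpha + beta)   # P(T = 1)
        s = beta / (alpha + beta)    # S(1)
        for i in range(2, int(ti) + 1):
            p *= (beta + i - 2) / (alpha + beta + i - 1)   # recursion (3)
            s *= (beta + i - 1) / (alpha + beta + i - 1)   # recursion (4)
        nll -= np.log(s if ci else p)
    return nll

# Toy usage: observed times with censoring flags at a 4-step horizon.
t = np.array([1, 2, 4, 4, 3, 4, 4, 1])
censored = np.array([0, 0, 1, 1, 0, 1, 1, 0])
res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(t, censored), method="Nelder-Mead")
print(np.exp(res.x))  # fitted (alpha, beta)

Conditional (linear, tree or neural) versions replace the two scalars with per-row model outputs, as described above.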

2.3. Convexity

For brevity, define $\alpha = e^{a(x)}$ and $\beta = e^{b(x)}$. In the special case where $a$ and $b$ are linear functions of $x$, their second derivatives are null and the diagonal terms of the Hessian of the log-likelihood of Equation 2 follow from the recursions given in Appendix A.3; at the first decision point they reduce to

$$\frac{\partial^2}{\partial a^2} \log P(T = 1 \mid \alpha, \beta) = \frac{\partial^2}{\partial b^2} \log P(T = 1 \mid \alpha, \beta) = -\frac{\alpha \beta}{(\alpha + \beta)^2}. \quad (5)$$

We see that the negative log-likelihood of the shifted-beta-geometric model is always convex in $a$ when $a$ is linear. Further, when all points are observed (no censoring) and the maximum horizon is $t = 2$, the negative log-likelihood is also convex in $b$.

Subsequent terms are not convex; despite this, in practice we do not encounter significant issues with non-convexity (e.g. local minima and saddle points). It seems likely that in practice the convex terms of the likelihood dominate the non-convex terms. Note once again that there is generally no global convexity of the objective function.

3. Ranking with the Beta-Logistic

Given a set of beta distributions, another relevant question for the business is how to rank them from best to worst. This is crucial if, for instance, products need to be ranked to prioritize maintenance as in (Rudin et al., 2012). For example, we may be interested in ranking beta distributions for news articles where each distribution estimates the probability that an article will be clicked on. Or, for a video subscription service, we could have beta distributions over titles, each of which represents the probability that a title will be watched to the final episode (title survival). How do we rank the titles (articles) from most watchable (readable) to least watchable? Let us abstractly view both problems as interchangeable: we are ranking items.

We have two items ($A$ and $B$), each with their own parameters $(\alpha_A, \beta_A)$ and $(\alpha_B, \beta_B)$ that define a beta distribution. Each beta distribution samples a coin with probability $1 - p$ of heads (retaining) and $p$ of tails (churning out). In the first time step, item $A$ retains less than $B$ if it has a higher coin-flip churn probability, i.e. if the probability $\Pr(p_A > p_B)$ is larger than 50%. In the case of integer parameters, this is given by the following integral:

$$\Pr(p_A > p_B) = \int_0^1 \int_0^{p_A} f(p_A \mid \alpha_A, \beta_A)\, f(p_B \mid \alpha_B, \beta_B)\, dp_B\, dp_A,$$

which simplifies (Miller, 2015) into

$$\Pr(p_A > p_B) = \sum_{i=0}^{\alpha_A - 1} \frac{B(\alpha_B + i, \beta_A + \beta_B)}{(\beta_A + i)\, B(1 + i, \beta_A)\, B(\alpha_B, \beta_B)}. \quad (6)$$

If this quantity is larger than 0.5, then item $A$ retains less than item $B$ in the first time step. It turns out, across thousands of simulations, that the less likely survivor is also the one with the larger median churn probability. Figure 2 shows a scatter plot as we randomly compare pairs of beta distributions: the sign of the difference in their medians agrees with the probability $\Pr(p_A > p_B)$. So, instead of using the formula above to test whether $\Pr(p_A > p_B) > 0.5$, we just need to compare the medians via the inverse of the regularized incomplete beta function (betaincinv), denoting the median $\mathrm{med}(\alpha, \beta) = I^{-1}_{\alpha, \beta}(0.5)$, and check whether $\mathrm{med}(\alpha_A, \beta_A) > \mathrm{med}(\alpha_B, \beta_B)$. A proof of this is below.

Figure 2. The medians of beta distributions are consistent with the pairwise probabilistic ranking of beta distributions.

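The following small Monte Carlo check (ours) illustrates the consistency shown in Figure 2: for random pairs of beta distributions, the median comparison agrees with whether $\Pr(p_A > p_B)$ exceeds 0.5. Sampling is used instead of Equation 6 so that non-integer parameters are handled.

import numpy as np
from scipy.special import betaincinv

rng = np.random.default_rng(0)

def prob_A_greater(aA, bA, aB, bB, n=200_000):
    # Monte Carlo estimate of Pr(p_A > p_B).
    return np.mean(rng.beta(aA, bA, size=n) > rng.beta(aB, bB, size=n))

for _ in range(5):
    aA, bA, aB, bB = rng.uniform(0.2, 5.0, size=4)
    prob = prob_A_greater(aA, bA, aB, bB)
    medians_agree = (prob > 0.5) == (betaincinv(aA, bA, 0.5) > betaincinv(aB, bB, 0.5))
    print(f"Pr(p_A > p_B) = {prob:.3f}   medians agree: {medians_agree}")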

For later time steps, we again leverage the geometric assumption, but applied to distributions rather than point estimates. The item which retains longer is the one with the lower product of repeated coin-flip churn probabilities or, equivalently, the higher survival probability $(1 - p)^t$. In that case, the beta distributions get modified by taking them to powers. Sometimes the product of beta-distributed random variables is exactly beta-distributed (Fan, 1991), but other times it is only approximately beta-distributed. In general, we can easily compute the mean and variance of this power-beta distribution and find a reasonable beta approximation to it with its own parameters. This is done by leveraging the derivation of (Fan, 1991) as follows. Assume we have a beta distribution $p \sim \mathrm{Beta}(\alpha, \beta)$ and define the new random variable $q = (1 - p)^t$, the probability of surviving $t$ consecutive decision points. We can then derive the first moment as

$$\mathbb{E}[q] = \prod_{i=0}^{t-1} \frac{\beta + i}{\alpha + \beta + i}$$

and the second moment as

$$\mathbb{E}[q^2] = \prod_{i=0}^{2t-1} \frac{\beta + i}{\alpha + \beta + i}.$$

Then, we approximate the distribution of the future survival probability $q$ by a beta distribution $\mathrm{Beta}(\tilde{\alpha}, \tilde{\beta})$ whose parameters match these two moments:

$$\tilde{\alpha} = \mu \left(\frac{\mu(1 - \mu)}{\sigma^2} - 1\right), \qquad \tilde{\beta} = (1 - \mu)\left(\frac{\mu(1 - \mu)}{\sigma^2} - 1\right), \quad (7)$$

where $\mu = \mathbb{E}[q]$ and $\sigma^2 = \mathbb{E}[q^2] - \mathbb{E}[q]^2$. Therefore, in order to compare which item is more likely to survive at future horizons, we can combine Equation 7 and Equation 1 to find the median of the approximated future survival distribution.
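A minimal sketch (ours) of this horizon-dependent ranking follows: it moment-matches the beta approximation of Equation 7 and ranks items by the median of the approximated survival distribution. The helper names and the third parameter pair are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.special import betaincinv

def beta_power_approx(alpha, beta, t):
    # Moment-match a beta distribution to q = (1 - p)^t with p ~ Beta(alpha, beta),
    # using the first two moments above (Fan, 1991) and Equation 7.
    i = np.arange(2 * t)
    ratios = (beta + i) / (alpha + beta + i)
    m1 = np.prod(ratios[:t])   # E[q]
    m2 = np.prod(ratios)       # E[q^2]
    var = m2 - m1 ** 2
    common = m1 * (1.0 - m1) / var - 1.0
    return m1 * common, (1.0 - m1) * common

def rank_most_at_risk(params, t):
    # Sort items (list of (alpha, beta) pairs) from most to least at risk at horizon t,
    # i.e. by increasing median of the approximated survival beta.
    medians = [betaincinv(*beta_power_approx(a, b, t), 0.5) for a, b in params]
    return np.argsort(medians)

items = [(4.75, 14.25), (0.5, 1.5), (0.25, 0.75)]   # illustrative parameters
print(rank_most_at_risk(items, t=1), rank_most_at_risk(items, t=8))

Because the approximated survival betas change with the horizon $t$, the resulting order can change as well, which is exactly the non-stationary ranking behavior illustrated in Figure 1.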

Theorem 3.1.

For random variables $p_A \sim \mathrm{Beta}(\alpha_A, \beta_A)$ and $p_B \sim \mathrm{Beta}(\alpha_B, \beta_B)$ with positive parameters, $\Pr(p_A > p_B) > 0.5$ if and only if $\mathrm{med}(\alpha_A, \beta_A) > \mathrm{med}(\alpha_B, \beta_B)$ (the median of $p_A$ is larger than the median of $p_B$).

Proof.

We first prove that the median gives the correct winner under simplifying assumptions, namely when both beta distributions have the same $\alpha$ or the same $\beta$.

First consider when the distributions have the same $\alpha$ and different $\beta_A$ and $\beta_B$. In that case, Equation 6 simplifies to

$$\Pr(p_A > p_B) = \sum_{i=0}^{\alpha - 1} \frac{B(\alpha + i, \beta_A + \beta_B)}{(\beta_A + i)\, B(1 + i, \beta_A)\, B(\alpha, \beta_B)}. \quad (8)$$

The formulas for $\Pr(p_A > p_B)$ and $\Pr(p_B > p_A)$ then only differ in their denominators. Then, if $\beta_A < \beta_B$, it is easy to show that

$$\Pr(p_A > p_B) > \Pr(p_B > p_A). \quad (9)$$

Therefore, $\Pr(p_A > p_B) > 0.5$ if and only if $\beta_A < \beta_B$.

Similarly, if $\beta_A < \beta_B$, the medians satisfy $\mathrm{med}(\alpha, \beta_A) > \mathrm{med}(\alpha, \beta_B)$. This is true since, if all else is equal, increasing the parameter $\beta$ reduces the median of a beta distribution. Therefore, for $\alpha_A = \alpha_B$, the median ordering is always consistent with the probability test.

An analogous derivation holds when the two distributions have the same $\beta$ and different $\alpha_A$ and $\alpha_B$. This is obtained by using the property that if $p \sim \mathrm{Beta}(\alpha, \beta)$ then $1 - p \sim \mathrm{Beta}(\beta, \alpha)$. Therefore, for $\beta_A = \beta_B$, the median ordering is always consistent with the probability test.

Next we generalize these two statements to show that the median ordering always agrees with the probability test. Consider the situation where $\mathrm{med}(\alpha_A, \beta_A) > \mathrm{med}(\alpha_A, \beta_B) > \mathrm{med}(\alpha_B, \beta_B)$, where the middle quantity is the median of an intermediate distribution $p_C \sim \mathrm{Beta}(\alpha_A, \beta_B)$ sharing one parameter with each of $p_A$ and $p_B$. Due to the scalar nature of the median of the beta distribution, we must have transitivity: $\mathrm{med}(\alpha_A, \beta_A) > \mathrm{med}(\alpha_B, \beta_B)$. We must also have, by the two special cases above, $\Pr(p_A > p_C) > 0.5$ and $\Pr(p_C > p_B) > 0.5$. Since each pair of inequalities on medians requires that the corresponding statement on the probability tests also holds, the overall statement must also imply that $\Pr(p_A > p_B) > 0.5$. ∎

Therefore, thanks to Theorem 3.1, we can safely rank order beta distributions simply by considering their medians. These rankings are not only pairwise consistent but globally consistent. Recall that pairwise ranking of distributions does not always yield globally consistent rankings as popularly highlighted through the study of nontransitive dice (Savage, 1994). Thus, given any beta distribution at a particular horizon, it is straightforward to determine which items are most at risk through a simple sorting procedure on the medians. Given this approach to ranking beta distributions, we can show the performance of our model-based ranking of users or items by holding out data and evaluating the time to event in terms of the AUC (area under the curve) of the receiver operating characteristic (ROC) curve.

4. Empirical Results

4.1. Synthetic simulations

We present simulation results for the beta-logistic and compare them to the logistic model. We also show that the beta-logistic successfully recovers the posterior for skewed distributions. In our first simulation we have 3 beta distributions which have very different shapes (see Table 1 below) but the same mean (this example is inspired by Fader and Hardie (Fader et al., 2018)). Here, each simulated customer draws a coin from one of these distributions, and then flips that coin repeatedly until they have an event or we reach a censoring horizon (in this particular case we considered 4 decision points); a small simulation sketch in this spirit follows Table 1.

shape          α       β       mean
normal         4.75    14.25   0.25
right skewed   0.50    1.50    0.25
u shaped       0.25    –       0.25
Table 1. Heterogeneous beta distributions with identical means.
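The following sketch (ours) re-creates the spirit of this simulation for the two fully specified rows of Table 1: each customer draws a churn probability from the given beta shape and flips it at 4 decision points; the identical first-step retention and diverging later-step retention illustrate the heterogeneity.

import numpy as np

rng = np.random.default_rng(1)
shapes = {"normal": (4.75, 14.25), "right skewed": (0.5, 1.5)}
horizon, n = 4, 100_000
for name, (a, b) in shapes.items():
    p = rng.beta(a, b, size=n)                      # per-customer churn probability
    flips = rng.random((n, horizon)) < p[:, None]   # True where the customer churns
    survival = [(~flips[:, :k].any(axis=1)).mean() for k in range(1, horizon + 1)]
    print(name, np.round(survival, 3))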

It is trivial to show that the logistic model will do no better than random in this case, because it does not observe the dynamics of the survival distribution which reveal the differing levels of heterogeneity underlying the 3 populations. If we allow the beta-logistic model to have a dummy variable for each of these cases then it can recover the posterior of each (see Figure 3). This illustrates an important property of the beta-logistic: it recovers posterior estimates even when the data is very heterogeneous, and it allows us to fit survival distributions well.

Figure 3. Survival distributions as a function of time, as well as the posterior over the churn probability estimated by the beta-logistic for each group. Using a point-estimate of the mean (as in the logistic model) fails to recover the heterogeneity.

To create a slightly more realistic simulation, we can include another term which increases the homogeneity linearly, and we add this as another covariate in the models. We also inject exponential noise into the α and β used for our random draws. Now the logistic model does better than random when there is homogeneity present (see Figure 4); however, it still leaves signal behind by not considering the survival distribution. We additionally show results for a one-time-step beta-logistic, which performs similarly to the logistic model but appears to have a lower variance, perhaps indicating that its posterior estimates are more conservative, a property which will be confirmed in the next set of experiments.

Figure 4. The level of heterogeneity increases from the left panel to the right panel as we add the linear term. Clearly, the means of the beta-logistic 1-step (magenta plus) and the logistic (cyan dot) are nearly identical, but the beta-logistic (orange cross) considers more survival information and outperforms both even when there is considerable homogeneity.

4.2. Online conversions dataset

4.2.1. Survival modeling

We now evaluate the performance of the beta-logistic model on a large-scale sparse dataset. We use the Criteo online conversions dataset published alongside (Chapelle, 2014) and publicly available for download (http://labs.criteo.com/2013/12/conversion-logs-dataset/). We consider the problem of modeling the distribution of the time between a click event and a conversion event. We will consider a censoring window of 12 hours (61% of conversions happen within that window). As noted in (Chapelle, 2014), the exponential distribution fits the data reasonably well, so we compare the beta-logistic model against the exponential distribution (1 parameter) and the Weibull distribution (2 parameters). Since the temporal integration of the beta-logistic model is intrinsically discrete, we consider a time-discretization of 5-minute steps. We also add as baselines 2 logistic models: one trained at a horizon of 5 minutes (the shortest interval), and one trained at a horizon of 12 hours (the largest window). All conditional models are implemented as sparse linear models in Vectorflow (Rostykus and Raimond, 2018) and trained through stochastic gradient descent. All survival models use an exponential reparameterization of their parameters (since the beta, exponential, and Weibull distributions all require positivity in their parameters). Censored events are down-sampled by a factor of 10x. We use 1M rows for training and 1M (held-out in time) rows for evaluation. The total (covariate) dimensionality of the problem is 102K after one-hot encoding. Note that covariates are sparse and the overall sparsity of the problem is over 99.98%. Results are presented in Figure 5.

Figure 5. AUC as a function of censoring horizon for the various models considered.


The beta-logistic survival model outperforms the other baselines at all horizons considered. Even though it is a 2-parameter distribution, the Weibull model interestingly performs worse than the exponential survival model and the binary logistic classifier. We hypothesize that this is due to the poor conditioning of its loss function as well as numerical instabilities during gradient and expectation computation (the latter requires calls to the gamma function, which is numerically difficult to estimate for moderately small and for large argument values).

4.2.2. Posterior size comparison

We next consider the problem as a binary classification task (did a conversion happen within the specified time window?). It is interesting to compare the confidence interval sizes of various models. For the conditional beta-logistic model, the prediction variance on datapoint $x_i$ is given by the variance of the beta posterior:

$$\mathrm{Var}[\theta \mid x_i] = \frac{\alpha(x_i)\,\beta(x_i)}{\left(\alpha(x_i) + \beta(x_i)\right)^2 \left(\alpha(x_i) + \beta(x_i) + 1\right)}.$$

For a logistic model parameterized by a weight vector $w \in \mathbb{R}^d$, a standard way to estimate the confidence of a prediction is through the Laplace approximation of its posterior (MacKay, 2003). In the high-dimensional setting, estimating the Hessian $H$ or its inverse become impractical tasks (storage cost is $O(d^2)$ and matrix inversion requires $O(d^3)$ computation). In this scenario, it is customary to assume independence of the posterior coordinates and restrict the estimation to the diagonal of the Hessian, which reduces both storage and computation costs to $O(d)$. Hence under this assumption, for a given datapoint $x_i$ the distribution of possible values for the random variable $w^{\top} x_i$ is Gaussian with parameters:

$$\mu_i = \hat{w}^{\top} x_i, \qquad \sigma_i^2 = \sum_{j=1}^{d} x_{ij}^2 \left[H^{-1}\right]_{jj}.$$

If the full Hessian inverse is estimated, then $w^{\top} x_i$ is Gaussian with parameters:

$$\mu_i = \hat{w}^{\top} x_i, \qquad \sigma_i^2 = x_i^{\top} H^{-1} x_i.$$

When $w^{\top} x_i$ is Gaussian, the logistic model prediction (the sigmoid of $w^{\top} x_i$) has a distribution for which the variance can be conveniently approximated; see (Li et al., 2012) for various suggested approximation schemes, one of which we apply here. Armed with this estimate for the logistic regression posterior variance, we run the following experiment: we random-project (using Gaussian vectors) the original high-dimensional data into a 50-dimensional space, in which we train beta-logistic classifiers and logistic classifiers at various horizons, using 50k training samples every time. We then compare the average posterior variance on a held-out dataset containing 50k samples. Holdout AUCs were comparable in this case for both models at all horizons. Two posterior approximations are reported for the logistic model: one using the full Hessian inverse and the other using only the diagonalized Hessian. Results are reported in Figure 6; a small sketch of both variance computations is given at the end of this subsection.

Figure 6. The posterior variance of beta-logistic binary classifiers as well as logistic regressions trained on binary-label datasets with increasing censoring windows.


Note that the beta-logistic model produces much smaller uncertainty estimates (between 20% and 45% smaller) than the logistic model with Laplace approximation. Furthermore, the growth rate as a function of the horizon of the binary classifier is also smaller for the beta-logistic approach. Also note that the Laplace posterior with diagonal Hessian approximation underestimates the posterior obtained using the full Hessian. Gaussian posteriors are obviously unable to appropriately model data skew.

This empirical result is arguably clear evidence of the superiority of the posteriors generated by the beta-logistic model over a standard Laplace approximation estimate layered onto a logistic regression model posterior. The beta-logistic posterior is also much cheaper to recover in terms of computational and storage costs. This also suggests that the beta-logistic model could be a viable alternative to standard techniques of explore-exploit models in binary classification settings.
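To make the comparison above concrete, here is a minimal sketch (ours, not the paper's implementation) of the two uncertainty estimates: the closed-form beta-logistic variance, and a diagonal-Hessian Laplace estimate for a fitted logistic regression, propagated through the sigmoid with a simple delta-method step rather than the specific scheme of (Li et al., 2012). The ridge term l2 is an assumption that keeps the Hessian diagonal positive.

import numpy as np

def beta_logistic_variance(alpha, beta):
    # Variance of the beta posterior over the event probability (closed form).
    return alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1.0))

def laplace_logistic_variance(X, w_hat, x_new, l2=1.0):
    # Diagonal-Hessian Laplace approximation for a fitted logistic regression:
    # H_jj = sum_i s_i (1 - s_i) x_ij^2 + l2, with s_i = sigmoid(w_hat . x_i).
    s = 1.0 / (1.0 + np.exp(-X @ w_hat))
    h_diag = (s * (1.0 - s)) @ (X ** 2) + l2
    score_var = np.sum(x_new ** 2 / h_diag)          # Var[w . x_new] under the diagonal posterior
    s_new = 1.0 / (1.0 + np.exp(-(x_new @ w_hat)))
    return (s_new * (1.0 - s_new)) ** 2 * score_var  # delta-method variance of sigmoid(w . x_new)

The beta-logistic variance, by contrast, is read directly off the predicted (alpha, beta) pair for a row, with no extra storage or matrix computation.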

4.3. Video streaming subscription dataset

4.3.1. Retention modeling

We now study the problem of modeling customer retention for a subscription business. We leverage a proprietary dataset from a popular video streaming service. In a subscription setting when a customer chooses not to renew the service, the outcome is explicitly observed and logged. From a practical perspective, it is obviously preferable and meaningful when a customer’s tenure with the service is n months rather than 1 month. It is also clearly valuable to be able to estimate and project that tenure accurately across different cohorts of customers. In this particular case the cohorts are highly heterogeneous as shown in Figure 7.

Figure 7. Estimated churn probabilities for 3 different cohorts. The large variations in the shape of the fitted distributions motivate the use of a beta prior on the conditional churn probability.


In this example the data set had more than 10M rows and 500 columns. We trained 20 models on bootstraps of the data with a 10x downsample on censored customers. We used 4 discrete decision points to fit the model. Evaluation was done on a subset of 3M rows which was held out from the models, and held out in time as well over an additional 5 decision points (9 total months of data). All models are implemented as GBRTs and estimated using lightGBM. In Figure 8, we show the evaluation of the models across two cohorts: one with relatively little data and covariates to describe the customers (which should clearly benefit from modeling the unobserved heterogeneity) and one with much richer data and covariates. Surprisingly even on the rich data set where one might argue there should be considerable homogeneity within any given subset of customers, we still find accuracy improvements by using the beta-logistic over the standard logistic model. This example illustrates how regardless of how many covariates are considered, there is still considerable heterogeneity.

Figure 8. Held-out AUC for two different cohorts of customers.


4.3.2. Retention within shows

Another problem of importance to a video subscription business is ranking shows that customers are most likely to fully enjoy (i.e. watch to completion across all episodes). Here we model the distribution of the number of episodes of a show a customer watches, conditional on the customer having started the show. In Figure 9 we compare the performance of the beta-logistic to logistic models at an early horizon (1 episode) and a late horizon (8 episodes). The dataset contains 2k shows and spans 3M rows and 500 columns. We used a 50/50 train/test split. Model training used bootstrap methods to provide uncertainty estimates. All models are implemented as GBRTs estimated in lightGBM. At early horizons, the beta-logistic model once again provides significant AUC improvements over logistic models trained at either the 1-episode or the 8-episode horizon.

Figure 9. The beta-logistic improves ranking accuracy (in terms of AUC) for early horizons.

5. Conclusion

We noted that heterogeneity in the beta-logistic model can better capture the temporal effects of survival and ranking at multiple horizons. We extended the beta-logistic and its maximum likelihood estimation to linear, tree and neural models as well as characterized the convexity and properties of the learning problem. The resulting survival models give survival estimates and provably consistent rankings of who is most-at-risk at multiple time horizons. Empirical results demonstrate that the beta-logistic is an effective model in discrete time to event problems, and improves over common baselines. It seems that in practice regardless of how many attributes are considered there are still unobserved variations between individuals that influence the time to event. Further we demonstrated that we can recover posteriors effectively even when the data is very heterogeneous, and due to the speed and ease of implementation we argue that the beta-logistic is a baseline that should be considered in time to event problems in practice.

In future work, we plan to study the potential use of the beta-logistic in explore-exploit scenarios and as a viable option in reinforcement learning to model long-term consequences from near-term decisions and observations.

Acknowledgements.
The authors would like to thank Nikos Vlassis for his helpful comments during the development of this work, and Harald Steck for his thoughtful review and guidance.

References

  • Abadi et al. ([n. d.]) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. [n. d.]. Tensorflow: a system for large-scale machine learning.
  • Ben-Porath (1973) Yoram Ben-Porath. 1973. Labor-force participation rates and the supply of labor. Journal of Political economy 81, 3 (1973), 697–704.
  • Berger (2013) James O Berger. 2013. Statistical decision theory and Bayesian analysis. Springer Science & Business Media.
  • Chang et al. (2012) Allison Chang, Cynthia Rudin, Michael Cavaretta, Robert Thomas, and Gloria Chou. 2012. How to Reverse-Engineer Quality Rankings. Machine Learning 88 (September 2012), 369–398. Issue 3.
  • Chapelle (2014) Olivier Chapelle. 2014. Modeling delayed feedback in display advertising. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 1097–1105.
  • Chen and Guestrin (2016) Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining. ACM, 785–794.
  • Cox (1972) David R Cox. 1972. Regression models and life-tables. Journal of the Royal Statistical Society: Series B (Methodological) 34, 2 (1972), 187–202.
  • Fader and Hardie (2007) Peter S Fader and Bruce GS Hardie. 2007. How to project customer retention. Journal of Interactive Marketing 21, 1 (2007), 76–90.
  • Fader et al. (2018) Peter S Fader, Bruce GS Hardie, Yuzhou Liu, Joseph Davin, and Thomas Steenburgh. 2018. ”How to Project Customer Retention” Revisited: The Role of Duration Dependence. Journal of Interactive Marketing 43 (2018), 1–16.
  • Fan (1991) Da-Yin Fan. 1991. The distribution of the product of independent beta variables. Communications in Statistics-Theory and Methods 20, 12 (1991), 4043–4052.
  • Gelman et al. (2013) Andrew Gelman, Hal S Stern, John B Carlin, David B Dunson, Aki Vehtari, and Donald B Rubin. 2013. Bayesian data analysis. Chapman and Hall/CRC.
  • Gupta et al. (2006) Sunil Gupta, Dominique Hanssens, Bruce Hardie, William Kahn, V. Kumar, Nathaniel Lin, Nalini Ravishanker, and S. Sriram. 2006. Modeling customer lifetime value. Journal of Service Research 9, 2 (2006).
  • Heckman and Willis (1977) James J Heckman and Robert J Willis. 1977. A Beta-logistic Model for the Analysis of Sequential Labor Force Participation by Married Women. Journal of Political Economy 85, 1 (1977), 27–58.
  • Kalbfleisch and Prentice (2011) John D Kalbfleisch and Ross L Prentice. 2011. The statistical analysis of failure time data. Vol. 360. John Wiley & Sons.
  • Ke et al. (2017) Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems. 3146–3154.
  • Li et al. (2012) Lihong Li, Wei Chu, John Langford, Taesup Moon, and Xuanhui Wang. 2012. An unbiased offline evaluation of contextual bandit algorithms with generalized linear models. In Proceedings of the Workshop on On-line Trading of Exploration and Exploitation 2. 19–36.
  • MacKay (2003) David JC MacKay. 2003. Information theory, inference and learning algorithms. Cambridge university press.
  • Miller (2015) Evan Miller. 2015. Bayesian AB Testing. http://www.evanmiller.org/bayesian-ab-testing.html#cite1
  • Rostykus and Raimond (2018) Benoît Rostykus and Yves Raimond. 2018. vectorflow: a minimalist neural-network library. SysML (2018).
  • Rudin et al. (2012) Cynthia Rudin, David Waltz, Roger N. Anderson, Albert Boulanger, Ansaf Salleb-Aouissi, Maggie Chow, Haimonti Dutta, Philip Gross, Bert Huang, Steve Ierome, Delfina Isaac, Arthur Kressner, Rebecca J. Passonneau, Axinia Radeva, and Leon Wu. 2012. Machine Learning for the New York City Power Grid. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 2 (February 2012), 328–345.
  • Savage (1994) Richard P. Savage. 1994. The Paradox of Nontransitive Dice. The American Mathematical Monthly 101, 5 (1994).
  • W. Vaupel and Yashin (1985) James W. Vaupel and Anatoliy Yashin. 1985. Heterogeneity’s Ruses: Some Surprising Effects of Selection on Population Dynamics. The American statistician 39 (09 1985), 176–85. https://doi.org/10.1080/00031305.1985.10479424

Appendix A Beta Logistic Formulas

A.1. Recurrence derivation

This derivation is taken from Fader and Hardie (Fader and Hardie, 2007), where they use it as a cohort model (also called the shifted beta geometric model) that is not conditional on a covariate vector $x$.

We do not observe $\theta$, but its expectation given the beta prior (also called the marginal likelihood) is given by:

$$P(T = t \mid \alpha, \beta) = \int_0^1 \theta (1 - \theta)^{t - 1} \frac{\theta^{\alpha - 1}(1 - \theta)^{\beta - 1}}{B(\alpha, \beta)}\, d\theta = \frac{B(\alpha + 1, \beta + t - 1)}{B(\alpha, \beta)}.$$

We can write the above as:

$$P(T = t \mid \alpha, \beta) = \frac{\Gamma(\alpha + 1)\,\Gamma(\beta + t - 1)\,\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)\,\Gamma(\alpha + \beta + t)}.$$

Using the property $\Gamma(x + 1) = x\,\Gamma(x)$ leads to equations (3) and (4), and at $t = 1$ we have $P(T = 1 \mid \alpha, \beta) = \frac{\alpha}{\alpha + \beta}$ and $S(1 \mid \alpha, \beta) = \frac{\beta}{\alpha + \beta}$.

A.2. Gradients

Note that for machine learning libraries that do not offer symbolic computation and auto-differentiation, taking the log of equations (3) and (4) and differentiating leads to the following recurrence formulas for the gradient of the log-likelihood on a given datapoint with respect to the output parameters $a$ and $b$ of the model considered (recall $\alpha = e^{a}$ and $\beta = e^{b}$). For $t > 1$:

$$\frac{\partial \log P(T = t)}{\partial a} = \frac{\partial \log P(T = t - 1)}{\partial a} - \frac{\alpha}{\alpha + \beta + t - 1},$$

$$\frac{\partial \log P(T = t)}{\partial b} = \frac{\partial \log P(T = t - 1)}{\partial b} + \frac{\beta}{\beta + t - 2} - \frac{\beta}{\alpha + \beta + t - 1}.$$

These derivatives expand from the base case

$$\frac{\partial \log P(T = 1)}{\partial a} = \frac{\beta}{\alpha + \beta}, \qquad \frac{\partial \log P(T = 1)}{\partial b} = -\frac{\beta}{\alpha + \beta}.$$

We can get a similar recursion for the survival function:

$$\frac{\partial \log S(t)}{\partial a} = \frac{\partial \log S(t - 1)}{\partial a} - \frac{\alpha}{\alpha + \beta + t - 1}, \qquad \frac{\partial \log S(t)}{\partial b} = \frac{\partial \log S(t - 1)}{\partial b} + \frac{\beta}{\beta + t - 1} - \frac{\beta}{\alpha + \beta + t - 1},$$

with base case $\frac{\partial \log S(1)}{\partial a} = -\frac{\alpha}{\alpha + \beta}$ and $\frac{\partial \log S(1)}{\partial b} = \frac{\alpha}{\alpha + \beta}$.

A.3. Diagonal of the Hessian

We obtain the second derivatives for the diagonal of the Hessian as follows. For $t > 1$:

$$\frac{\partial^2 \log P(T = t)}{\partial a^2} = \frac{\partial^2 \log P(T = t - 1)}{\partial a^2} - \frac{\alpha (\beta + t - 1)}{(\alpha + \beta + t - 1)^2},$$

$$\frac{\partial^2 \log P(T = t)}{\partial b^2} = \frac{\partial^2 \log P(T = t - 1)}{\partial b^2} - \frac{\beta (\alpha + 1)\left(\beta^2 - (t - 2)(\alpha + t - 1)\right)}{(\beta + t - 2)^2 (\alpha + \beta + t - 1)^2},$$

with base case $\frac{\partial^2 \log P(T = 1)}{\partial a^2} = \frac{\partial^2 \log P(T = 1)}{\partial b^2} = -\frac{\alpha \beta}{(\alpha + \beta)^2}$. The survival counterparts to the above terms are also readily computed as follows:

$$\frac{\partial^2 \log S(t)}{\partial a^2} = \frac{\partial^2 \log S(t - 1)}{\partial a^2} - \frac{\alpha (\beta + t - 1)}{(\alpha + \beta + t - 1)^2},$$

$$\frac{\partial^2 \log S(t)}{\partial b^2} = \frac{\partial^2 \log S(t - 1)}{\partial b^2} - \frac{\alpha \beta \left(\beta^2 - (t - 1)(\alpha + t - 1)\right)}{(\beta + t - 1)^2 (\alpha + \beta + t - 1)^2},$$

with the same base case $-\frac{\alpha \beta}{(\alpha + \beta)^2}$ at $t = 1$.

Appendix B Alternative Derivation

Another intuitive derivation of the single-step beta-logistic is obtained by starting from the likelihood of a logistic model and modeling the probabilities with a beta distribution. Taking the expectation of the Bernoulli likelihood under $\theta \sim \mathrm{Beta}(\alpha(x), \beta(x))$ gives

$$\prod_i \mathbb{E}\left[\theta_i^{y_i} (1 - \theta_i)^{1 - y_i}\right] = \prod_i \left(\frac{\alpha(x_i)}{\alpha(x_i) + \beta(x_i)}\right)^{y_i} \left(\frac{\beta(x_i)}{\alpha(x_i) + \beta(x_i)}\right)^{1 - y_i},$$

where $y_i$ indicates whether the event occurred at the first decision point. This is exactly the survival likelihood for a 1-step beta-logistic model.

Appendix C Reproducibility

We include simple Python implementations of the gradient callbacks that can be passed to XGBoost or lightGBM. Note that efficient implementations of these callbacks in C++ are possible and yield orders-of-magnitude speedups.

import numpy as np

def grad_BL(alpha, beta, t, is_censored):
    """
    Computes the gradient of the beta-logistic log-likelihood with respect to
    a = log(alpha) and b = log(beta), using the recursions of Appendix A.2.
    Since it is vectorized in practice for performance reasons, here we write
    the non-vectorized version for readability.
    """
    N = len(alpha)
    g = np.zeros((N, 2))
    for j in range(0, N):
        if not is_censored[j]:
            # failed (event observed at time t[j])
            g[j, 0] = beta[j] / (alpha[j] + beta[j])
            g[j, 1] = -g[j, 0]
            for i in range(2, int(t[j] + 1)):
                g[j, 0] += -(alpha[j] / (alpha[j] + beta[j] + i - 1))
                g[j, 1] += beta[j] / (beta[j] + i - 2) - beta[j] / (alpha[j] + beta[j] + i - 1)
        else:
            # survived (censored at time t[j])
            g[j, 0] = -alpha[j] / (beta[j] + alpha[j])
            g[j, 1] = -g[j, 0]
            for i in range(2, int(t[j] + 1)):
                g[j, 0] += -(alpha[j] / (alpha[j] + beta[j] + i - 1))
                g[j, 1] += beta[j] / (beta[j] + i - 1) - beta[j] / (alpha[j] + beta[j] + i - 1)
    return g
def hess_BL(alpha, beta, t, is_censored):
    """
    Computes the diagonal of the Hessian of the beta-logistic log-likelihood
    with respect to a = log(alpha) and b = log(beta) (Appendix A.3).
    """
    N = len(alpha)
    h = np.zeros((N, 2))
    for j in range(0, N):
        h[j, :] = -alpha[j] * beta[j] / ((alpha[j] + beta[j]) ** 2)
        if not is_censored[j]:
            # failed (event observed at time t[j])
            for i in range(2, int(t[j] + 1)):
                h[j, 0] += -alpha[j] * ((beta[j] + i - 1) / (alpha[j] + beta[j] + i - 1) ** 2)
                d = ((beta[j] + i - 2) ** 2) * ((alpha[j] + beta[j] + i - 1) ** 2)
                h[j, 1] += -beta[j] * (alpha[j] + 1) * (beta[j] ** 2 - (i - 2) * (alpha[j] + i - 1)) / d
        else:
            # survived (censored at time t[j])
            for i in range(2, int(t[j] + 1)):
                h[j, 0] += -alpha[j] * ((beta[j] + i - 1) / (alpha[j] + beta[j] + i - 1) ** 2)
                d = ((beta[j] + i - 1) ** 2) * ((alpha[j] + beta[j] + i - 1) ** 2)
                h[j, 1] += -alpha[j] * beta[j] * (beta[j] ** 2 - (i - 1) * (alpha[j] + i - 1)) / d
    h.shape = (N * 2,)
    return h
def likelihood_BL(alpha, beta, t, is_censored):
    """
    Computes the beta-logistic objective (likelihood: higher is better).
    Since it is heavily vectorized in practice for performance reasons, we
    write here the non-vectorized version for readability.
    """
    p = alpha / (alpha + beta)   # P(T = 1)
    s = 1 - p                    # S(1)
    for j in range(0, len(alpha)):
        for i in range(2, int(t[j] + 1)):
            p[j] = p[j] * (beta[j] + i - 2) / (alpha[j] + beta[j] + i - 1)   # recursion (3)
            s[j] = s[j] - p[j]                                               # S(t) = S(t-1) - P(T = t)
    return p * (1.0 - is_censored) + s * is_censored