A budget-constrained inverse classification framework for smooth classifiers

05/29/2016 · Michael T. Lash et al. · The University of Iowa

Inverse classification is the process of manipulating an instance such that it is more likely to conform to a specific class. Past methods that address this problem have shortcomings: greedy methods make changes that are overly radical and often rely on strictly discrete data, while other methods depend on the presence of certain data points that cannot be guaranteed to exist. In this paper we propose a general framework and method that overcome these and other limitations. Our formulation can use any differentiable classification function; we demonstrate the method using logistic regression and Gaussian kernel SVMs. We constrain the inverse classification to occur on features that can actually be changed, each of which incurs an individual cost, and we further require the cumulative cost of all changes to fall within a budget. Our framework can also accommodate the estimation of (indirectly changeable) features whose values change as a consequence of actions taken. Furthermore, we propose two methods for specifying feature-value ranges that result in different algorithmic behavior. We apply our method, and a proposed sensitivity analysis-based benchmark method, to two freely available datasets: Student Performance from the UCI Machine Learning Repository and a real-world cardiovascular disease dataset. The results demonstrate the validity and benefits of our framework and method.


I Introduction

In many predictive modeling problems, we are concerned less with the actual prediction and more with how an individual prediction might be changed. Classification problems such as loan screening and college admission have one output class that is clearly "desired" by a test case. A person turned down for a loan would naturally wonder why the decision was made and, more importantly, what they could do to change the outcome on the next attempt. We use the term inverse classification to refer to the process of finding an optimal set of changes to a test point so as to maximize its predicted probability of the desired class label.

Problems such as this are prevalent in personalized medicine settings. Consider, for example, lifestyle choices that minimize Patient 15’s long-term risk of cardiovascular disease (CVD) – a randomly selected patient from our experiments in Section IV. An initial risk prediction, estimated to be 32%, is obtained using a trained, nonlinear classifier, based on Patient 15’s EHR data. With Patient 15’s initial risk now known, we wish to work “backwards” through the classifier to obtain recommendations that minimize the probability of CVD. We approach the recommendation step by defining an optimization problem: what is the smallest (or easiest) set of feasible changes that this person can make in order to minimize the predicted probability of developing CVD?

Our first contribution in this work is to define an inverse classification framework that produces realistic recommendations. We do so by first partitioning features into two categories: unchangeable and changeable. It would be impossible for Patient 15 to reduce her age – this is an unchangeable feature. Changeable features are further partitioned into directly and indirectly changeable categories. Directly changeable features are immediately actionable – we can recommend that Patient 15 adjust her diet, for example. Indirectly changeable features change as a consequence of manipulations to the directly changeable features, but are themselves not actionable. Blood glucose changes as Patient 15’s diet is altered, but cannot be directly altered itself.

In our framework, directly changeable features incur individual, attribute-wise cost. Cumulative costs across such features are constrained to be within a budgetary level. These costs and budget can be specified by either a domain expert, the individual (e.g., Patient 15), or some combination of the two.

The second contribution of this work is a method that solves the inverse classification problem within the specified framework. Our method uses the gradient information of classifiers to provide recommendations that minimize the probability of an undesirable class. Using such a method within the specified framework we are able to provide recommendations that reduce Patient 15’s probability of CVD from 32% to 3%.

The third contribution is the specification of two bound-setting methods, Elastic and Hardline, that operate within the outlined framework, allowing inverse classification to proceed more freely or more rigidly depending upon the problem. Lastly, we incorporate an indirect feature estimator that adjusts the features whose values change as a consequence of changes made to the directly alterable features.

In the remainder of the paper we discuss past work (Section II), our proposed framework and new method of inverse classification (Section III), our 16 experiments, conducted on two freely available datasets using our method and a sensitivity analysis-based benchmark method (Section IV), and the conclusions we make following these experiments (Section V).

II Related Work

Inverse classification can be seen as a form of sensitivity analysis, the process of examining the input features' effects on the target output. While there are many forms of sensitivity analysis [1, 2], inverse classification is most similar to local sensitivity analysis and the variable perturbation method. Later on (Section III), we propose a benchmark method that is based on these.

Past works on inverse classification can be looked at from three perspectives: the manner in which the algorithm operates, the type of data the algorithm operates on, and the framework that guides the process of obtaining recommendations. Algorithm operation, which represents the optimization method employed, can be broken down into two groups: greedy [3, 4, 5, 6] and nongreedy [7, 8]. Greedy methods tend to focus on extreme objectives, which may not be realistic in the real world, while nongreedy methods tend to focus on more moderate objectives. This work uses the latter.

Algorithmic data types, which refers to the type of data a particular optimization algorithm has the capability of operating on, also fall into two categories: discrete [3, 4, 5] and continuous [6, 7, 8]. Discrete data types lead to coarse-grained recommendations, while continuous data types provide those that are more fine-grained. In this work, we focus on the latter, as precision recommendations are the goal.

Framework refers to the constraints that govern recommendation feasibility. These are manifested in the literature as either unconstrained [3, 4, 5] or constrained [7, 8, 6]. Unconstrained problems lead to unrealistic recommendations that may also be very extreme (e.g., 'reduce your age by 30 years'). Constrained frameworks lead to more moderate and realistic recommendations. However, while [7, 8] focus on moderate objectives, they do not consider (1) what can/cannot be changed, (2) how hard it might be to change and (3), cumulatively, how willing someone may be to make changes. In [6] the authors consider (2), but do not consider (1) and (3). Additionally, in [7], the formulation of border classification relies on data points which lie exactly on the separating hyperplane; there is no guarantee that such points exist in practice. In this work we propose a framework that considers (1), (2) and (3).

Inverse classification is a utility-based data mining topic and is thereby related to the subtopics of strategic [9] and adversarial [10] learning. In those settings it is assumed that a strategic agent may attempt to game a learned classifier in order to conform to a desired class, and classifiers are then constructed taking such behavior into account. No such considerations need to be made in an inverse classification setting, however, as the goal is to provide explicit instructions to an intelligent agent (e.g., a person) on how they can conform to a desired class, making such accounting both unnecessary and undesirable.

III An Inverse Classification Framework and Method

In this section we propose a new inverse classification framework, and a method that can be used within the framework to solve the problem. We begin by generally discussing the problem and introducing some notation.

Suppose $\{(x_i, y_i)\}_{i=1}^{n}$ is a dataset of $n$ instances, assumed to have been drawn i.i.d. from some population distribution $P$, where $x_i \in \mathbb{R}^d$ is a column feature vector of length $d$ and $y_i \in \{-1, +1\}$ is the binary label associated with $x_i$ for $i = 1, \dots, n$. Let $X \in \mathbb{R}^{n \times d}$ denote the matrix of training instances with the $x_i^\top$'s being its rows. Any number of classification models can be trained with this dataset and used to predict the class of new instances. Unlike typical classification settings, however, given a new instance $\bar{x}$, our goal is not only to classify it to the positive or the negative class but also to recommend an update on $\bar{x}$ that minimizes the probability of being classified as positive. We assume one unit of change in each feature of $\bar{x}$ will incur a cost and that only a limited amount of budget is available. We propose a numerical framework and algorithm that recommends an optimal change on $\bar{x}$ based on a classification model while incorporating this budgetary constraint.

III-A Framework

Suppose we are allowed to change some of the features of instance $\bar{x}$ to obtain a new version $x$. Also suppose we want this change to minimize the probability of $x$ being classified as positive. With a classifier $f$, such an $x$ can be obtained by minimizing $f(x)$ over the features of the new version $x$.

However, for physical or economic reasons, we cannot search for the optimal $x$ over the whole feature space $\mathbb{R}^d$. In particular, we assume the features can be partitioned into two subsets, $U$ (unchangeable) and $C$ (changeable). Given a feature vector $x$, let $x_C$ and $x_U$ represent the sub-vectors of $x$ that contain only changeable and only unchangeable features, respectively. Since $x_U$ cannot be changed, we will minimize $f$ by optimizing over $x_C$. Hence, we represent $x$ as $x = (x_U, x_C)$ to distinguish these two sub-vectors. In addition, we assume the reasonable value of each changeable feature $j \in C$ must lie within an interval, denoted by $[l_j, u_j]$ for $j \in C$. Moreover, the costs for increasing and decreasing feature $j$ by one unit are denoted by $c_j^+ \ge 0$ and $c_j^- \ge 0$, respectively. Given a limited budget $B$, the optimal feature design problem for a given instance $\bar{x} = (\bar{x}_U, \bar{x}_C)$ can be formulated as follows:

(1) $\min_{x_C} \; f(\bar{x}_U, x_C)$
s.t. $\sum_{j \in C} c_j^+ (x_j - \bar{x}_j)_+ + c_j^- (x_j - \bar{x}_j)_- \le B$, $\; l_j \le x_j \le u_j$ for $j \in C$,

where $(a)_+ = \max\{a, 0\}$ and $(a)_- = \max\{-a, 0\}$.
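To make the cost model concrete, the following sketch (our code, not the authors' implementation; the feature values, costs, and budget are hypothetical) evaluates the cost of a candidate change and checks it against the box and budget constraints of (1):

import numpy as np

def change_cost(x_new, x_orig, c_plus, c_minus):
    """Cost of moving the changeable features from x_orig to x_new:
    increases are billed at c_plus per unit, decreases at c_minus."""
    delta = x_new - x_orig
    return np.sum(c_plus * np.maximum(delta, 0.0) +
                  c_minus * np.maximum(-delta, 0.0))

def is_feasible(x_new, x_orig, c_plus, c_minus, lower, upper, budget):
    """Feasibility w.r.t. the box and budget constraints of problem (1)."""
    in_box = np.all((lower <= x_new) & (x_new <= upper))
    return in_box and change_cost(x_new, x_orig, c_plus, c_minus) <= budget

# Three hypothetical changeable features, normalized to [0, 1].
x_orig  = np.array([0.4, 0.7, 0.2])
x_new   = np.array([0.3, 0.7, 0.5])
c_plus  = np.array([1.0, 2.0, 1.5])   # per-unit cost of increasing
c_minus = np.array([0.5, 2.0, 3.0])   # per-unit cost of decreasing
print(change_cost(x_new, x_orig, c_plus, c_minus))  # 0.5*0.1 + 1.5*0.3 = 0.5
print(is_feasible(x_new, x_orig, c_plus, c_minus,
                  np.zeros(3), np.ones(3), budget=1.0))  # True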

In a more general setting, some of the features in $C$ can be changed directly by the designer; we call these the directly changeable features. However, there are features that cannot be changed directly. Instead, they change as a consequence of manipulations made to the directly changeable features; we call these the indirectly changeable features. In Chi et al. [4], the effects of the directly changeable features on the indirectly changeable features are measured upon completion of the inverse classification process. Our method incorporates them as part of the optimization.

To model this phenomenon, we further partition the features in $C$ into two subsets, $D$ and $I$, which represent the sets of directly and indirectly changeable features, respectively. When we optimize the features, we can only determine the values of $x_D$; the values of $x_I$ will depend on $x_U$ and $x_D$. Therefore, we model the dependency of $x_I$ on $x_U$ and $x_D$ as $x_I = H(x_U, x_D)$, where the mapping $H$ is assumed to be smooth and differentiable. Note that $H$ can be trained using the same training instances as $f$. Furthermore, while the estimates elicited from $H$ may be noisy, using $H$ is better than allowing the values to remain static, by definition of what $I$ represents. Therefore, we represent $x$ as $x = (x_U, x_I, x_D)$ to distinguish these three blocks, so that the feature optimization problem (1) can be generalized to

(2) $\min_{x_D} \; f\left(\bar{x}_U, H(\bar{x}_U, x_D), x_D\right)$
s.t. $\sum_{j \in D} c_j^+ (x_j - \bar{x}_j)_+ + c_j^- (x_j - \bar{x}_j)_- \le B$, $\; l_j \le x_j \le u_j$ for $j \in D$.
We relate a specific method for constructing $H$ in Section IV-A3. We note that, in practice, $|D|$ is likely to be small and that, while $|I|$ may be large (e.g., pictorial or text-based features), the efficiency of the optimization won't be affected.

III-A1 Time Complexity of $H$

We acknowledge that the size of the indirectly changeable feature set $I$ may be large and, as a result, we wish to examine the time complexity associated with the indirect feature estimator $H$, which may prove to be a computational bottleneck.

Let $H_k$ denote the indirect feature estimator for feature $k \in I$ and let $T(H_k)$ denote the corresponding time complexity associated with using $H_k$. We can then write the time complexity of $H$ as

(3) $T(H) = \sum_{k \in I} T(H_k)$,

where $T(H)$ is the time complexity of $H$. As we can see, $T(H)$ increases linearly with the size of $I$ (this is by virtue of the fact that we can estimate each feature in $I$ independently). However, depending on the choice of $H_k$ and the size of $I$, this may still prove to be a bottleneck. If this is the case, the user may need to tailor their selection of $H_k$, or forgo estimating certain features during the inverse classification process. We empirically show that the time complexity scales linearly using the $H$ defined in the experiments section (kernel regression), and include the result in the supplementary material that can be found at the publicly accessible repository github.com/michael-lash/BCIC.

III-A2 Hardline and Elastic bound-setting methods

The constraints in (1) and (2) are flexible enough to model different feature perturbation requirements. Specifically, there are two ways that the lower and upper bounds can be parameterized, each resulting in different algorithmic behavior.

The first is rigid with respect to test point $\bar{x}$'s original directly changeable values: if $c_j^+ = 0$ then $u_j = \bar{x}_j$, and if $c_j^- = 0$ then $l_j = \bar{x}_j$, where $j \in D$. Such box constraint parameterization prevents feature $j$ from being increased without cost if $c_j^+ = 0$, or from being decreased without cost if $c_j^- = 0$, even if doing so would be beneficial according to the local function space determined by $f$. This allows for more control over the recommendations being made to individuals and is most appropriate when domain experts can interject their own knowledge in designating which directions of change are most beneficial. We refer to this as the Hardline bound-setting method.

The second is less rigid, allowing feature $j$ to increase even if $c_j^+ = 0$, or to decrease even if $c_j^- = 0$. To obtain such behavior, if $c_j^+ = 0$ then we set $u_j > \bar{x}_j$, and if $c_j^- = 0$ then we set $l_j < \bar{x}_j$. We refer to this as the Elastic bound-setting method.
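The sketch below illustrates one way the two bound-setting methods could be parameterized, under our reading that a zero cost marks a direction of change the expert did not intend the individual to move in; the variable names and values are hypothetical, and this is not the authors' code:

import numpy as np

def set_bounds(x_bar, c_plus, c_minus, feat_min, feat_max, method="hardline"):
    """Per-feature box bounds [l_j, u_j] for the directly changeable features.
    'hardline' pins a zero-cost direction at the current value; 'elastic'
    leaves the full feasible range open in both directions."""
    lower, upper = feat_min.copy(), feat_max.copy()
    if method == "hardline":
        upper = np.where(c_plus == 0.0, x_bar, upper)   # no free increases
        lower = np.where(c_minus == 0.0, x_bar, lower)  # no free decreases
    return lower, upper

x_bar   = np.array([0.4, 0.7])
c_plus  = np.array([0.0, 1.0])  # increasing feature 0 was not intended
c_minus = np.array([2.0, 0.0])  # decreasing feature 1 was not intended
print(set_bounds(x_bar, c_plus, c_minus, np.zeros(2), np.ones(2), "hardline"))
# (array([0. , 0.7]), array([0.4, 1. ]))
print(set_bounds(x_bar, c_plus, c_minus, np.zeros(2), np.ones(2), "elastic"))
# (array([0., 0.]), array([1., 1.]))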

In practice, we acknowledge any combination of these bound-setting methods can be used in a feature-specific manner. Bounds and costs can also be imposed such that individual costs are incurred differently, depending on whether a specific feature is increased or decreased.

III-B Optimization Method

To solve the inverse classification problem, stated in (1) and (2), we assume that the objective function $f$ is differentiable and that its gradient is Lipschitz continuous. Under this assumption, if $f$ is linear, the problem can be solved optimally and efficiently. If, however, the objective function is highly non-linear and non-convex, finding the globally optimal solution is, in general, NP-hard. Because we do not wish to make further assumptions about the linearity of $f$, we focus on methods that can solve both these and the harder non-linear, non-convex class of functions.

The available techniques that can be applied to non-convex, constrained optimization problems (see [11] and extensive references therein) include: (a) deterministic approaches such as branch and bound [11], function approximation [12], cutting plane methods [13], and difference of convex functions methods [14]; and (b) stochastic approaches such as genetic algorithms [15]. However, these methods are typically slow and do not scale to large problems. (This fact was observed first-hand in conducting our own experiments and is further elaborated on in Section IV.)

Therefore, our list of potential methods reduces to the projected/proximal gradient method [16, 17] and the zero-order method [17]. If $f$ is second-order differentiable, the list of potential methods can be extended to include regularized Newton's method, sequential quadratic programming, and BFGS. Among these methods, the projected gradient method and the zero-order method can guarantee that the iterative solutions converge to a stationary point at a rate of $O(1/\sqrt{t})$. The remaining methods only guarantee asymptotic convergence, with no specified convergence rate. Since the zero-order method is appropriate only when evaluating the gradient of $f$ is difficult, which is not our case, the appropriate method to apply with good theoretical guarantees is the projected gradient method.

III-B1 The Projected Gradient Method

Before we present the projected gradient method, we need to reformulate (1) or (2) using the difference between the original features and the updated features as our decision variables. Because space is limited, we will only conduct the reformulation and present the algorithm for (2), but the same technique can be applied to (1). In (2), we define $z = x_D - \bar{x}_D$ and, by changing variables, (2) can be equivalently written as

(4) $\min_{z \in Z} \; F(z) := f\left(\bar{x}_U, H(\bar{x}_U, \bar{x}_D + z), \bar{x}_D + z\right)$

where

(5) $Z = \left\{ z \;:\; \sum_{j \in D} c_j^+ (z_j)_+ + c_j^- (z_j)_- \le B, \;\; \alpha_j \le z_j \le \beta_j, \; j \in D \right\}$,

and $\alpha_j = l_j - \bar{x}_j$ and $\beta_j = u_j - \bar{x}_j$ for $j \in D$. The projection mapping onto the set $Z$ is defined as

(6) $\text{Proj}_Z(w) = \arg\min_{z \in Z} \frac{1}{2}\|z - w\|^2$.

When $F$ is differentiable and its gradient is $L$-Lipschitz continuous (i.e., $\|\nabla F(z) - \nabla F(z')\| \le L \|z - z'\|$ for any $z$ and $z'$), which is true for our class of functions, the projected gradient method for solving (4) is given as Algorithm 1.

0:  Require: $z_0 \in Z$ and step length $\gamma > 0$
1:  while stopping criterion is not satisfied do
2:     $g_t = \nabla F(z_t)$
3:     $z_{t+1} = \text{Proj}_Z(z_t - \gamma g_t)$
4:  end while
5:  return $z_{t+1}$
Algorithm 1 Projected Gradient Method

According to Theorem 3 of [16], when $\gamma = 1/L$, Algorithm 1 guarantees that $z_t$ converges to a stationary point (or so-called KKT point) of (4) at a rate of $O(1/\sqrt{t})$, which is the best known convergence rate for non-convex smooth optimization.
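A minimal sketch of Algorithm 1 in Python (ours, not the authors' code), assuming grad_F evaluates $\nabla F$ (e.g., via the chain rule through $f$ and $H$) and project implements the mapping $\text{Proj}_Z$ of (6):

import numpy as np

def projected_gradient(grad_F, project, z0, step, max_iter=500, tol=1e-6):
    """Projected gradient descent on the change vector z (Algorithm 1).
    `project` maps an arbitrary point onto the feasible set Z;
    `step` plays the role of the step length gamma."""
    z = z0.copy()
    for _ in range(max_iter):
        z_next = project(z - step * grad_F(z))
        if np.linalg.norm(z_next - z) < tol:  # stopping criterion
            return z_next
        z = z_next
    return z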

Algorithm 1 requires solving the projection $\text{Proj}_Z$ at each iteration, which is itself an optimization problem. An efficient solution scheme for this subproblem is critical for making Algorithm 1 expeditious. Fortunately, the domain $Z$ has a specific structure which allows us to solve $\text{Proj}_Z(w)$ for any $w$ with an efficient subroutine. To see this, we define

(7) $z_j(\lambda) = \arg\min_{\alpha_j \le z \le \beta_j} \; \frac{1}{2}(z - w_j)^2 + \lambda\left(c_j^+ (z)_+ + c_j^- (z)_-\right)$

for each $j \in D$. The subroutine is given in Algorithm 2.

0:  Require: $w$, $c^+$, $c^-$, $B$, and $\{(\alpha_j, \beta_j)\}_{j \in D}$
1:  Define $g(\lambda) := \sum_{j \in D} c_j^+ (z_j(\lambda))_+ + c_j^- (z_j(\lambda))_-$, with $z_j(\lambda)$ as in (7)
2:  Compute $z_j(0)$ for each $j \in D$
3:  $z_j \leftarrow z_j(0)$ for $j \in D$ and $\lambda^* \leftarrow 0$
4:  if $g(0) \le B$ then
5:     return $z$
6:  else
7:     Apply bisection search to find $\lambda^* > 0$ such that $g(\lambda^*) = B$
8:  end if
9:  $z_j \leftarrow z_j(\lambda^*)$ for $j \in D$
10: return $z$
Algorithm 2 Projection Mapping

The correctness of Algorithm 2 is ensured by the following proposition whose proof is given in the Appendix.

Proposition 1.

If $\alpha_j \le 0 \le \beta_j$ for all $j \in D$, the solution $z$ returned by Algorithm 2 satisfies $z = \text{Proj}_Z(w)$.
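The following sketch implements the idea behind (7) and Algorithm 2: the coordinate-wise minimizer $z_j(\lambda)$ has a soft-threshold-then-clip closed form, the total cost $g(\lambda)$ is non-increasing in $\lambda$, and bisection finds the multiplier at which the budget binds. It assumes $\alpha_j \le 0 \le \beta_j$ (so the zero change is always feasible) and is our sketch, not the authors' code:

import numpy as np

def z_of_lam(w, lam, cp, cm, lo, hi):
    """Coordinate-wise minimizer z(lambda) from (7): soft-threshold, then
    clip to the box [lo, hi]."""
    z = np.where(w > lam * cp, w - lam * cp,
        np.where(w < -lam * cm, w + lam * cm, 0.0))
    return np.clip(z, lo, hi)

def cost(z, cp, cm):
    return np.sum(cp * np.maximum(z, 0.0) + cm * np.maximum(-z, 0.0))

def project(w, cp, cm, B, lo, hi, iters=60):
    """Sketch of Algorithm 2: Proj_Z(w) via bisection on the multiplier."""
    z = z_of_lam(w, 0.0, cp, cm, lo, hi)
    if cost(z, cp, cm) <= B:          # budget inactive: box projection suffices
        return z
    lam_lo, lam_hi = 0.0, 1.0
    while cost(z_of_lam(w, lam_hi, cp, cm, lo, hi), cp, cm) > B:
        lam_hi *= 2.0                 # grow until the budget is satisfied
    for _ in range(iters):            # bisection on g(lam) = B
        lam = 0.5 * (lam_lo + lam_hi)
        if cost(z_of_lam(w, lam, cp, cm, lo, hi), cp, cm) > B:
            lam_lo = lam
        else:
            lam_hi = lam
    return z_of_lam(w, lam_hi, cp, cm, lo, hi)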

III-C Representativeness and Support

With our methodology defined, we wish to comment on, and subsequently quantify, both the representativeness of the training set from which our classifier $f$ will generalize and the support underlying the inverse classification of an instance. Therefore, we first propose $\epsilon$-dissimilarity, related by Definition 1, which quantifies the dissimilarity between the training set distribution and the population distribution using a linear discrepancy distance measure defined in Johansson et al. [18].

Definition 1.

The distribution $\hat{P}$ of the training set, drawn from the population distribution $P$, is said to be $\epsilon$-dissimilar to that of $P$ if

(8) $\text{disc}(\hat{P}, P) = \left\|\mu(\hat{P}) - \mu(P)\right\| \le \epsilon$

where $\text{disc}(\cdot,\cdot)$ is the discrepancy distance between two samples [18], or in this case the training sample and the population, $\mu(\cdot)$ denotes the mean of a particular distribution, and $\|\cdot\|$ is the Euclidean norm.

Using Definition 1, we relate the following proposition.

Proposition 2.

As the size of the training set increases to infinity, the training set distribution $\hat{P}$ is asymptotically $0$-dissimilar to that of the population distribution $P$.

The proof of Proposition 2 is in the appendix. We wish to point out, however, that the variance and shape of $\hat{P}$ and $P$ may be quite different despite $\hat{P}$ being $\epsilon$-dissimilar to $P$. Additionally, in practice, the i.i.d. assumption may not hold (in this work we assume it does). We leave methods taking such factors into account as tangential future work.

We are also concerned with ensuring that optimized instances lie near training data. These underlying training data provide support as to the "trustworthiness" of the recommendations and corresponding probabilities elicited from the inverse classification process. Therefore, we define $\phi$-support, related by Definition 2, which empirically quantifies the degree to which an inversely classified instance can be trusted.

Definition 2.

Define the $\phi$-support of a particular test instance $\bar{x}$ to be the pair $\phi = (\phi_1, \phi_2)$, where:

  • $\phi_1$ is the variance in the predicted probabilities of $\bar{x}$'s $k$ nearest neighbors (from the training data). This measure provides an assessment of the stability of the local probability space surrounding $\bar{x}$.

  • $\phi_2$ is the number of training instances that fall within distance $\bar{d}$ of $\bar{x}$, where the function $d_{\max}(x_i)$ returns the maximum distance of training instance $x_i$'s $k$ nearest neighbors and $\bar{d}$ represents the average of these maximum distances over the training set. By comparing the $\phi_2$ of $\bar{x}$ to the average over the training set, we can observe whether a particular test instance has more (larger $\phi_2$) or less (smaller $\phi_2$) "support" underlying the predicted probability, relative to the average from the training data.

We explore $\phi$-support in the Experiments section.
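A sketch of how the two components of Definition 2 might be computed (our code; the choice $k=10$, the Euclidean metric, and the inclusion of each training point's self-match in its own neighborhood are simplifying assumptions):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def phi_support(x_bar, X_train, p_train, k=10):
    """Sketch of Definition 2. p_train holds the classifier's predicted
    probabilities for the training instances."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dists, _ = nn.kneighbors(X_train)      # k-NN radii of each training point
    d_bar = dists[:, -1].mean()            # average max-neighbor distance
    _, idx = nn.kneighbors(x_bar.reshape(1, -1))
    phi1 = np.var(p_train[idx[0]])         # stability of local probabilities
    phi2 = nn.radius_neighbors(x_bar.reshape(1, -1), radius=d_bar,
                               return_distance=False)[0].size
    return phi1, phi2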

Fig. 1: Experiment process.

IV Experiments

In this section we outline our experimental methods and then apply them to two datasets. The first is a benchmark dataset from the UCI Machine Learning Repository [19] called Student Performance [20]. The second is derived from ARIC, the Atherosclerosis Risk in Communities study [21]. We emphasize that both datasets are publicly available; the latter requires explicit NIH permission (obtained via BioLINCC). We provide the code used in all experiments, and the processed Student Performance data, for public use at github.com/michael-lash/BCIC. The list of unchangeable, indirectly changeable, and directly changeable features (and corresponding parameters) for both datasets is also provided at the above-mentioned URL.

We emphasize that parameterization of the inverse classification framework, including the costs-to-change and assignment of features to the categories of unchangeable, indirectly changeable and directly changeable, should be guided by domain experts. As such, our experiments on the ARIC dataset are guided by a CVD specialist who is a co-author of this work.

IV-A Experiment Parameters and Setup

In this section we outline a general process of validating inverse classification methods, the two learning algorithms used to conduct the inverse classification, a method for estimating indirectly changeable features, and a benchmark optimization method which we will compare against our gradient-based method.

IV-A1 Process

Our process of making and evaluating recommendations is based on that proposed by [4]. In our experiments, we use data from the past in which known outcomes are observed. We then make recommendations that reduce the probability of a negative outcome occurring. But, in the absence of a time machine, we need a way of validating whether we would have actually reduced that probability. A method that accomplishes this requires careful segmentation of the data such that none of the information used to make recommendations is used in validating the probability of an outcome occurring. The process, shown in Figure 1, is related as follows:

Step 1: Partition the full dataset into two equal parts: a training set and a testing set. Data cleansing and preparation are also performed, including missing value imputation (mean) and the normalization of data values to be within $[0, 1]$.

Step 2(a): Use the training set to learn a model $f$. During this step, cross-validation can be used to find the optimal parameters of $f$, if necessary. We also perform cross-validation to obtain optimal parameters for the indirect feature estimator $H$.

Step 2(b): Further split the testing set into tenths. One tenth is used for performing inverse classification, and the other nine tenths are used for validation.

Step 3(a): Perform inverse classification on the held-out tenth of the data using $f$ and $H$.

Step 3(b): Learn a validation model $f_{\text{val}}$ (and $H_{\text{val}}$) using the nine tenths of held-out testing data.

Step 4: Estimate probabilities for the optimized inverse classification instances using $f_{\text{val}}$. These are the probabilities we report in our experiments. Note that we obtain probabilities for each held-out tenth of the testing data.

By setting up the experiment in this manner we are also able to be more confident that the recommendations obtained are not the result of overfitting. Note also that by switching the roles of training and validation/test sets, the full amount of data can be used to obtain results.
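A sketch of this splitting protocol on stand-in data (our code; the model fitting itself is elided in the comments):

import numpy as np
from sklearn.model_selection import train_test_split, KFold

rng = np.random.default_rng(0)
X, y = rng.random((400, 8)), rng.integers(0, 2, 400)  # stand-in data

# Step 1: split the data in half: a training set and a testing set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Steps 2(b)-4: each tenth of the testing set is inversely classified using
# the model f fit on the training half; the other nine tenths fit a separate
# validation model that re-scores the optimized instances.
for val_idx, opt_idx in KFold(n_splits=10, shuffle=True,
                              random_state=0).split(X_te):
    X_val, X_opt = X_te[val_idx], X_te[opt_idx]
    # Fit f and H on (X_tr, y_tr); inversely classify X_opt; then fit
    # f_val (and H_val) on (X_val, y_te[val_idx]) and report f_val's
    # probabilities for the optimized instances.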

IV-A2 Classification Functions

Our experiments employ two different learning methods: the linear logistic regression model and the nonlinear kernel SVM.

Logistic regression is a popular predictive model that works particularly well when the linear feature independence assumption holds. The model is trained via maximum likelihood estimation, given by the optimization problem

(9) $\min_{w, b} \; \sum_{i=1}^{n} \log\left(1 + \exp\left(-y_i (w^\top x_i + b)\right)\right)$

where $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$ are a vector of coefficients and an offset term, respectively. After training, $w$ and $b$ can be used to make classifications for a given test instance $x$ by

(10) $f(x) = \frac{1}{1 + \exp\left(-(w^\top x + b)\right)}$

which gives the probability of $x$ being in the positive class.

Employment of the logistic model in our described inverse classification framework can be viewed as a basic method having roots in sensitivity analysis. This is illustrated by the link between coefficient examination, as a means of sensitivity analysis, and our gradient-based methodology. Examining the sign and magnitude of a coefficient uncovers a particular feature's bearing (how positive or how negative) on the problem being modeled. Taking the gradient of a linear model has the same effect, thus informing the inverse classification framework which feature perturbations decrease the objective function value, with larger coefficients having a larger effect. Integrating this optimization methodology into the framework allows cost, budget, etc. to be taken into account as well.
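For instance, the probability in (10) and its gradient with respect to the features can be written as below; note that the gradient's direction is fixed by $w$, which is what ties the method to coefficient-based sensitivity analysis (a sketch, not the authors' code):

import numpy as np

def logistic_prob(x, w, b):
    """Eq. (10): predicted probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def logistic_grad(x, w, b):
    """Gradient of (10) w.r.t. the features: p * (1 - p) * w. Only the
    magnitude varies with x; the direction is fixed by the coefficients."""
    p = logistic_prob(x, w, b)
    return p * (1.0 - p) * w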

Among classification models, the kernel SVM is one of the most widely used. Compared to the classical linear SVM, the kernel SVM is more appropriate for data in which the two classes of instances have a nonlinear boundary. A kernel SVM model can be trained using its dual formulation, which is given by the optimization problem

(11) $\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$
s.t. $\sum_{i=1}^{n} \alpha_i y_i = 0$, $\; 0 \le \alpha_i \le C$ for $i = 1, \dots, n$,

where $K(\cdot, \cdot)$ is a kernel function that measures the similarity between any pair of instances $x_i$ and $x_j$ in $\mathbb{R}^d$. Commonly used kernel functions include linear kernels $K(x, x') = x^\top x'$, polynomial kernels $K(x, x') = (x^\top x' + 1)^q$ for any positive integer $q$, and Gaussian kernels $K(x, x') = \exp\left(-\|x - x'\|^2 / (2\sigma^2)\right)$ for $\sigma > 0$, where $\|\cdot\|$ represents the Euclidean norm in $\mathbb{R}^d$.

Suppose the optimal solution of (11) is $\alpha^* = (\alpha_1^*, \dots, \alpha_n^*)^\top$. An SVM classifier can be derived based on the function

(12) $g(x) = \sum_{i=1}^{n} \alpha_i^* y_i K(x_i, x)$

where an instance $x_i$ with $\alpha_i^* > 0$ is called a support vector. (In fact, the exact kernel SVM classifier is $\text{sign}(g(x) + b_0)$, where $b_0$ is an offset value such that a new instance $x$ is classified as positive if $g(x) + b_0 > 0$ and as negative otherwise.) Given a new instance $\bar{x}$, the value of $g(\bar{x})$ represents how similar $\bar{x}$ is to the positive class. A larger value of $g(\bar{x})$ means that $\bar{x}$ is more likely to be positive.

However, the scores obtained from $g$ do not correspond directly to likelihoods. Therefore, we apply Platt's method [22], which transforms the scores obtained from $g$ into probabilities; specifically, the probability of being positive. By applying this method we learn a probability space that is more easily interpretable.

We elect to use the Gaussian kernel SVM for three reasons. The first is that such a function is highly nonlinear and complex, giving us the opportunity to explore a more flexible classifier by which we can assess the effectiveness of our method. Secondly, the Gaussian kernel can be used to assess point similarity. This is beneficial in our experiments, as one of our assumptions is that similar points will have similar probabilities associated with them, which isn't enforced by linear predictors. Finally, using the $\sigma$ parameter, we can control the size of the neighborhood used to assess point similarity. That is, larger $\sigma$ values make more distant support vectors appear more similar to a test point $\bar{x}$, which subsequently has the effect of smoother probability transitions during optimization.

Therefore, our objective function, outlined in (1) and (2), becomes (10) or (12) (logistic and SVM, respectively), with features segmented into the appropriate groups and the indirect feature estimator, outlined in the next subsection, incorporated. We explicitly note that, in the case of (12), the minimization task is to minimize the SVM score. More precisely, by applying Platt's method, we minimize the probability directly, as we do when using (10).
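A sketch of the score (12) and its gradient under the Gaussian kernel, which is what the PGD method would differentiate; Platt scaling composes a sigmoid on top of $g$, so its gradient follows by the chain rule (our code, not the authors'; alpha_y holds the products $\alpha_i^* y_i$ over the support vectors SV):

import numpy as np

def svm_score(x, SV, alpha_y, sigma):
    """Eq. (12) with a Gaussian kernel."""
    k = np.exp(-np.sum((SV - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return np.dot(alpha_y, k)

def svm_grad(x, SV, alpha_y, sigma):
    """Gradient of (12) w.r.t. x: each support vector pulls or pushes x
    with a weight alpha_i * y_i * K(x_i, x) that decays with distance."""
    k = np.exp(-np.sum((SV - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return (alpha_y * k) @ (SV - x) / sigma ** 2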

IV-A3 Estimating Indirectly Changeable Features

We employ kernel regression [23, 24] as a means of estimating the indirectly changeable features. In particular, the model $H_k$ used in (2) estimates indirect feature $k \in I$ as

(13) $H_k(\bar{x}_U, \bar{x}_D) = \frac{\sum_{i=1}^{n} K_h\left((\bar{x}_U, \bar{x}_D), (x_{i,U}, x_{i,D})\right) x_{i,k}}{\sum_{i=1}^{n} K_h\left((\bar{x}_U, \bar{x}_D), (x_{i,U}, x_{i,D})\right)}$

where the kernel $K_h$ is Gaussian and the bandwidth value $h$ is selected based on cross-validation. By using the model in (13) with the Gaussian kernel, we are provided with the added benefit of a point-similarity assessment in making estimations. The model works by weighting the known training values $x_{i,k}$ of instances that are closer to $\bar{x}$ more favorably than those that are further away. In so doing, (13) obtains an estimate for $\bar{x}_k$ based on the points that are most similar to $\bar{x}$.
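A minimal sketch of the Nadaraya-Watson estimator (13) for a single indirect feature (our code; U_train stacks the training instances' unchangeable and directly changeable features, and v_train holds their values of the indirect feature):

import numpy as np

def kernel_regression(u_bar, U_train, v_train, h):
    """Nadaraya-Watson estimate of one indirectly changeable feature at
    u_bar, with a Gaussian kernel of bandwidth h."""
    w = np.exp(-np.sum((U_train - u_bar) ** 2, axis=1) / (2.0 * h ** 2))
    return np.dot(w, v_train) / np.sum(w)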

IV-A4 Methodological Benchmark

In our experiments we wish to compare our method against another. However, to the best of our knowledge, there exist no past methods, including those discussed in Section II, that can be incorporated into our framework. Therefore we develop a method, based on sensitivity analysis, that we believe represents a reasonable initial attempt at solving the problem from such a standpoint. Our proposed benchmark method operates by iteratively perturbing each feature to the bounds of feasibility (and is therefore akin to the variable perturbation method of sensitivity analysis [2]). The objective function is then evaluated; if its value is found to be better than that of any of the previous single-feature perturbations, the perturbation is accepted. After making single-feature perturbations, if some amount of budget remains, subsequent rounds of perturbation occur (double-feature perturbation, triple-feature perturbation, etc.).
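A sketch of one round of this benchmark (our code; f is the trained classifier composed with the indirect feature estimator, and cost_of measures the cost of a candidate against the current instance):

import numpy as np

def sensitivity_benchmark(f, x, lower, upper, budget_left, cost_of):
    """One round of single-feature perturbation. Each feature is pushed to
    a bound of feasibility and the single best objective improvement within
    budget is kept; subsequent rounds would repeat this on the perturbed
    instance while budget remains."""
    best_x, best_val = x, f(x)
    for j in range(len(x)):
        for target in (lower[j], upper[j]):
            cand = x.copy()
            cand[j] = target
            if cost_of(cand, x) <= budget_left and f(cand) < best_val:
                best_x, best_val = cand, f(cand)
    return best_x, best_val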

(a) SP dataset using Hardline Bound-setting.
(b) SP dataset using Elastic Bound-setting.
(c) ARIC dataset using Hardline Bound-setting.
(d) ARIC dataset using Elastic Bound-setting.
Fig. 2: Average probability vs. budget by dataset (Student Performance or ARIC) and by bound-setting method. Solid lines represent a result obtained using the logistic model, while dotted lines represent a result obtained using the SVM model. PGD denotes use of the gradient method, while Sens denotes use of the sensitivity analysis-based method. The cyan dashed line is a randomly selected individual whose recommendations will be shown and discussed in the next subsection.

Here we assert that, because we have chosen two different indirectly changeable feature estimators, we will effectively be using two different benchmark methods.

Cumulatively, our experiments involve two datasets (ARIC, Student Performance), two classification functions (logistic, SVM), two optimization methods (PGD, sensitivity analysis-based), and two bound-setting methods (Hardline and Elastic), which constitutes a total of 16 experiments.

IV-B Data Description

We validate the effectiveness of our inverse classification framework on two datasets: Student Performance and ARIC. The Student Performance data consist of individual Portuguese students enrolled in two different classes. The one used in this experiment was the Portuguese language class, as it contained the greater number of instances ($n = 649$). Each student-instance has 43 associated features. The dependent variable is whether a student earned a final grade of C or below ($y = +1$) or not ($y = -1$). We discard the two intermediary grade reports to reflect the long-term goal of earning a better grade. Therefore, the task is to minimize the probability of earning a C or below.

The ARIC dataset contains patients for whom we define 110 features (please refer to github.com/michael-lash/BCIC). As the problem domain is medicine-based, we consulted an epidemiologist, a coauthor of this paper. We define $y = +1$ to be a positive CVD diagnosis, which includes probable myocardial infarction (MI), definite MI, suspect MI, definite fatal coronary heart disease (CHD), possible fatal CHD, and stroke. Patients not having any of these diagnoses have their CVD class variable encoded as $y = -1$. Additionally, patients having one of these diagnoses prior to the study period were excluded from the dataset, giving us the final set of patients.

IV-C Results: Probability Reduction

The results of our 16 experiments are reported in terms of average probability relative to budget, which can be viewed in Figure 2, where the subfigures stratify results by dataset and bound-setting method.

Comprehensively we can see that, in the general case, all methods except the logistic classifier using PGD on the Student Performance dataset were successful in reducing the average probability of a negative outcome. Depending on the dataset and bound-setting method used, different methods coupled with different classifiers experienced different degrees of success. This seems to suggest that, as in typical classification settings, methodological success varies on a dataset-to-dataset basis.

Interestingly, at a high level, there is no difference between the results obtained using the Hardline and Elastic bound-setting methods on Student Performance, and only one distinct difference between the results obtained on ARIC. Here, logistic regression using the PGD method is observed to have distinctly greater average performance under the Elastic bound-setting method (shown in Figure 2(d)). Such a result should be viewed cautiously, however, as the recommendations obtained may differ from, and perhaps even contradict, those our cardiovascular disease specialist would view as being truly beneficial. Differences of this nature may be attributable to possible noise in the ARIC data.

In examining the results obtained on Student Performance, shown in Figures 2(a) and 2(b), some interesting findings emerge. (We wish to point out that the probabilistic estimates obtained from the two classifiers are disparate, which we believe stems from small amounts of training data.) We can see that the best result obtained using the logistic classifier was through the sensitivity analysis-based method, while the best obtained using the SVM classifier was through PGD. This may suggest that simpler, linear classifiers may experience better inverse classification results using simpler means of optimization, and that more complicated, non-linear classifiers may see better results using methods that are correspondingly more complicated.

(a) Student 135.
(b) Patient 15.
Fig. 3: Recommended changes vs. budget for a randomly selected individual from each dataset.

This latter point is somewhat supported by the results obtained on the ARIC dataset, shown in Figures 2(c) and 2(d). In examining Figure 2(c) we can see that PGD outperformed the sensitivity analysis-based method when using the nonlinear SVM classifier, and that the sensitivity analysis-based method outperformed PGD when using the linear logistic classifier. However, in Figure 2(d), which represents results obtained using the Elastic bound-setting method, PGD dominated in the case of both classifiers. This result seems to suggest that, regardless of classifier complexity, if there exist optimizations that benefit from an Elastic setting (recall that no benefits were found from such a setting on Student Performance), PGD may dominate (on average).

Unexpectedly, looking at the results obtained for a randomly selected individual from either dataset, we can see that there is no difference in probabilistic improvement between the two bound-setting methods when using SVM with PGD. The specific recommendations made to these individuals are discussed in the next subsection, along with the recommendations most commonly made to individuals in each dataset at a budget of four.

(a)-(b) Stud. Perf. and (c)-(d) ARIC: average $\phi$ values by budget (Hardline).
(e)-(f) Stud. Perf. and (g)-(h) ARIC: average $\phi$ values by budget (Elastic).
Fig. 4: $\phi$-support for Student Performance and ARIC using both the Hardline ((a)-(d)) and Elastic ((e)-(h)) bound-setting methods.

IV-D Results: Cumulative and Individual Recommendations

In this subsection we briefly relate the most common changes recommended to individuals in each dataset and then discuss the definitive recommendations made to two randomly selected instances.

Table I shows the most common recommendations by raw count, the highest ranking of which pertain to features relevant to nearly all individuals (time with friends and eating food, for instance).

Rank Student Perf. ARIC
1 Time w/ friends Eat dark/grain bread
2 Study time Eat fruit
3 Absences Cigs/day
4 Weekday alco. cons. Eat veggies
TABLE I: Most commonly recommended feature changes by dataset using SVM with the PGD method at a budget of four.

Not all changes could be made to all individuals, however. For instance, not all individuals drink during the weekdays (Student Performance) and not all individuals smoke cigarettes (ARIC). When recommendation commonality is normalized by the number of individuals who were actually engaging in weekday drinking or smoking, alterations to these behaviors were recommended 97.97% and 99.98% of the time, respectively. This shows that while such risky behaviors are not necessarily common among all individuals, those who do engage in them are frequently recommended to make alterations.

Figures 3(a) and 3(b) show the changes recommended to a randomly selected individual from Student Performance and ARIC, respectively, using SVM with the PGD method.

Contrasting Figure 3(a) with Figure 3(b), we can see that, in the case of the former, a single feature was optimized to the extent of feasibility before perturbations were made to another, whereas in the case of the latter, several features were optimized in tandem.

In examining the specific recommendations made to Student 135 in Figure 3(a), we can see that weekday drinking was curbed first, followed by a reduction in school absences, weekend alcohol consumption, and time out with friends as the budget was increased. Last, at the second-highest budgetary level, time spent studying was increased. In the aggregate, it seems as though risk-related behavioral mitigations were determined to be optimal for this student.

Looking at the recommendations made to Patient 15 in Figure 3(b), we can see that, at low budgetary levels, an increase in dark or grain breads and a decrease in the number of cigarettes were recommended. As the budget was further incremented, consumption of more fruits and vegetables, in tandem, was recommended. At a budget of 13 it was also recommended that the patient decrease sodium intake; subsequently, at a budget of 18, an increase in dietary fiber was recommended. Finally, at a budget of 20, an increase in the consumption of nuts was recommended. Comprehensively, the recommendations deemed optimal for this patient were dietary, with the exception of the reduction in the number of cigarettes.

IV-E Results: $\phi$-support

The results in Figure 4 show that our inverse classifications are well supported in terms of probability space ($\phi_1$) and underlying training data ($\phi_2$) for both Student Performance and ARIC, up to certain budgetary levels (except SVM/PGD in Figure 4(h)). This suggests that, in future work, a constraint on the underlying $\phi$-support may be desirable. The results were obtained by taking the average $\phi$ values over all optimized test instances at each budgetary level explored in the preceding experiments.

V Conclusions

In this work we propose and validate a new framework and method for inverse classification. The framework ensures that recommendations are realistic by accounting for what can actually be changed, the cost/effort required to make changes, the cumulative effort (budget) an individual is willing to put forth, and the effects that the changes have on features that are not directly actionable. Additionally, we impose bounds on the changeable features that further ensure recommendations are realistic, and propose two bound-setting methods that govern algorithmic recommendation-generating behavior. Furthermore, our methods are modular, allowing for the use of any differentiable classification function (logistic regression, neural networks, etc.) as well as virtually any estimator of the indirectly changeable features. We demonstrated the efficacy of these methods on two freely available datasets as compared to a baseline method. Future work will focus on augmenting the framework with additional utility, as well as on conducting an in-depth analysis exploring situations in which PGD outperforms sensitivity analysis-based methods.

References

  • [1] S. S. Isukapalli, “Uncertainty analysis of transport-transformation models,” Ph.D. dissertation, Citeseer, 1999.
  • [2] J. Yao, “Sensitivity analysis for data mining,” in Fuzzy Information Processing Society, 2003. NAFIPS 2003. 22nd International Conference of the North American, July 2003, pp. 272–277.
  • [3] C. C. Aggarwal, C. Chen, and J. Han, “The inverse classification problem,” Journal of Computer Science and Technology, vol. 25, no. 3, pp. 458–468, 2010.
  • [4] C. L. Chi, W. N. Street, J. G. Robinson, and M. A. Crawford, “Individualized patient-centered lifestyle recommendations: An expert system for communicating patient specific cardiovascular risk information and prioritizing lifestyle options,” Journal of Biomedical Informatics, vol. 45, no. 6, pp. 1164–1174, 2012. [Online]. Available: http://dx.doi.org/10.1016/j.jbi.2012.07.011
  • [5] C. Yang, W. N. Street, and J. G. Robinson, “10-year CVD risk prediction and minimization via inverse classification,” in Proceedings of the 2nd ACM SIGHIT symposium on International health informatics - IHI ’12, 2012, pp. 603–610. [Online]. Available: http://dl.acm.org/citation.cfm?id=2110363.2110430
  • [6] M. V. Mannino and M. V. Koushik, “The cost minimizing inverse classification problem : A genetic algorithm approach,” Decision Support Systems, vol. 29, no. 3, pp. 283–300, 2000.
  • [7] D. Barbella, S. Benzaid, J. Christensen, B. Jackson, X. V. Qin, and D. Musicant, “Understanding support vector machine classifications via a recommender system-like approach,” in Proceedings of the International Conference on Data Mining, 2009, pp. 305–311.
  • [8] P. C. Pendharkar, “A potential use of data envelopment analysis for the inverse classification problem,” Omega, vol. 30, no. 3, pp. 243–248, 2002.
  • [9] F. Boylu, H. Aytug, and G. J. Koehler, “Induction over strategic agents,” Information Systems Research, vol. 21, no. 1, pp. 170–189, 2010.
  • [10] D. Lowd and C. Meek, “Adversarial learning,” in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.   ACM, 2005, pp. 641–647.
  • [11] A. Neumaier, “Complete search in continuous global optimization and constraint satisfaction,” Acta Numerica,, vol. 13, pp. 271–369, 2004.
  • [12] D. R. Jones, “A taxonomy of global optimization methods based on response surfaces,” Journal of Global Optimization, vol. 21, no. 4, pp. 345–383, Dec. 2001. [Online]. Available: http://dx.doi.org/10.1023/A:1012771025575
  • [13] H. Tuy, T. V. Thieu, and N. Q. Thai, “A conical algorithm for globally minimizing a concave function over a closed convex set,” Mathematics of Operations Research, vol. 10, pp. 498–514, 1985.
  • [14] H. Tuy, “Global minimization of a difference of two convex functions,” Mathematical Programming Studies, vol. 30, pp. 150–182, 2009.
  • [15] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, 1st ed.   Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1989.
  • [16] Y. Nesterov, “Gradient methods for minimizing composite objective function,” Mathematical Programming, Series B, vol. 140, pp. 125–161, 2013.
  • [17] S. Ghadimi and G. Lan, “Stochastic first- and zeroth-order methods for nonconvex stochastic programming,” SIAM Journal on Optimization, vol. 23, pp. 2341–2368, 2013.
  • [18] F. D. Johansson, U. Shalit, and D. Sontag, “Learning representations for counterfactual inference,” in Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ser. ICML’16.   JMLR.org, 2016, pp. 3020–3029. [Online]. Available: http://dl.acm.org/citation.cfm?id=3045390.3045708
  • [19] A. Asuncion and D. Newman, “UCI Machine Learning Repository,” 2007.
  • [20] P. Cortez and A. M. G. Silva, “Using data mining to predict secondary school student performance,” in Proceedings of 5th Annual Future Business Technology Conference.   EUROSIS, 2008.
  • [21] ARIC Investigators and others, “The atherosclerosis risk in communities (ARIC) study: design and objectives,” American Journal of Epidemiology, vol. 129, no. 4, pp. 687–702, 1989.
  • [22] J. Platt et al., “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” Advances in Large Margin Classifiers, vol. 10, no. 3, pp. 61–74, 1999.
  • [23] E. A. Nadaraya, “On estimating regression,” Theory of Probability & Its Applications, vol. 9, no. 1, pp. 141–142, 1964.
  • [24] G. S. Watson, “Smooth regression analysis,” Sankhyā: The Indian Journal of Statistics, Series A, vol. 26, no. 4, pp. 359–372, 1964.

Appendix

Proof of Proposition 1

Consider an index $j \in D$ with $\beta_j \le 0$ and $w_j \ge \beta_j$. Due to the relationship $z_j \le \beta_j \le 0$, any feasible value of $z_j$ can be at most $\beta_j$, while deviating below $\beta_j$ increases the objective value of (6) and generates additional cost at a rate of $c_j^-$. Hence, the optimal value of $z_j$ must be $\beta_j$ for each such index $j$. Similarly, for any index $j$ with $\alpha_j \ge 0$ and $w_j \le \alpha_j$, the optimal value of $z_j$ must be $\alpha_j$.

With the optimal values of $z_j$ for these indices determined, the optimization problem (6) is reduced to

(14) $\min_{z'} \; \frac{1}{2}\|z' - w'\|^2$ s.t. $\sum_{j \in D'} c_j^+ (z'_j)_+ + c_j^- (z'_j)_- \le B'$, $\; \alpha_j \le z'_j \le \beta_j$ for $j \in D'$,

where $D'$ denotes the set of remaining indices, $w'$ is the sub-vector of $w$ containing the features in $D'$, and $B'$ is the portion of the budget not consumed by the coordinates fixed above.

For any $\lambda \ge 0$, let $z_j = z_j(\lambda)$ for $j \in D'$. Using the definition of $z_j(\lambda)$ in (7), we can show that the elements in the set $\partial_z \left[\frac{1}{2}(z - w_j)^2 + \lambda\left(c_j^+ (z)_+ + c_j^- (z)_-\right)\right]$ are all positive only if $z > z_j(\lambda)$, and are all negative only if $z < z_j(\lambda)$, for any $j \in D'$, where $\partial_z$ represents the subdifferential with respect to $z$ (note that the subdifferential of a non-smooth function at some point can be a set). This indicates that $z(\lambda)$ is the optimal solution of the Lagrangian relaxation problem

$\min_{\alpha_j \le z_j \le \beta_j, \, j \in D'} \; \frac{1}{2}\|z - w'\|^2 + \lambda\left(\sum_{j \in D'} c_j^+ (z_j)_+ + c_j^- (z_j)_- - B'\right)$

with $\lambda$ being the Lagrangian multiplier. Steps 4 and 7 of Algorithm 2 ensure that $z(\lambda^*)$ is a feasible solution of (14) and satisfies the complementary slackness conditions with $\lambda^*$. This implies that $z(\lambda^*)$ is the optimal solution of (14), so that the returned $z$ is the optimal solution of (6). ∎

Proof of Proposition 2

Assume that the training set is drawn i.i.d. from the population distribution $P$, having empirical distribution $\hat{P}$, where each dimension is in the range $[0, 1]$, and that the size $n$ of the training set is large. Then, by the central limit theorem, $\mu(\hat{P}) \to \mu(P)$ as $n \to \infty$, and therefore $\text{disc}(\hat{P}, P) = \|\mu(\hat{P}) - \mu(P)\| \to 0$, as desired. ∎