A Double Penalty Model for Interpretability

09/13/2019 ∙ by Wenjia Wang, et al. ∙ Duke University ∙ NC State University

Modern statistical learning techniques have often emphasized prediction performance over interpretability, giving rise to "black box" models that may be difficult to understand, and to generalize to other settings. We conceptually divide a prediction model into interpretable and non-interpretable portions, as a means to produce models that are highly interpretable with little loss in performance. Implementation of the model is achieved by considering separability of the interpretable and non-interpretable portions, along with a doubly penalized procedure for model fitting. We specify conditions under which convergence of model estimation can be achieved via cyclic coordinate ascent, and the consistency of model estimation holds. We apply the methods to datasets for microbiome host trait prediction and a diabetes trait, and discuss practical tradeoff diagnostics to select models with high interpretability.


1 Introduction

Much of machine learning development has focused on prediction accuracy as a primary criterion (Abbott, 2014). However, recent commentary has emphasized interpretability of models, both to understand underlying relationships and to improve generalizability (Doshi-Velez and Kim, 2017). Part of the difficulty in moving forward with an emphasis on interpretability is the lack of guiding theory. In addition, the concept of “interpretability” can be subjective. For example, sparse regression models may be considered interpretable because there are few coefficients to consider, but non-sparse models (e.g., ridge regression) may also be simple to express and manipulate.

In this paper, we introduce a number of concepts to formalize inherent tradeoffs in model interpretability. A prediction rule is divided into interpretable and uninterpretable portions/functions, as defined by the investigator. The definition of, and distinction between, the two functions is left to the investigator. The key theoretical questions mainly concern consistency and identifiability, which are partly determined by the concept of function separability. Practical considerations include the development of iterative fitting algorithms for the interpretable/uninterpretable portions, which may be valuable even when separability cannot be established. We emphasize that, in our framework, high interpretability may come at the cost of prediction accuracy, but a modest loss in accuracy may be worth the gain in interpretability. We illustrate the methods using numerical examples and an application to a human microbiome prediction problem.

2 A double penalty model and a fitting algorithm

Consider a function of interest $f$, which can be expressed by

$$f = g + h,$$

where $g$ and $h$ are two unknown functions with known function classes $\mathcal{G}$ and $\mathcal{H}$, respectively. Following the motivation for this work, we suppose that the function class $\mathcal{G}$ consists of functions that are “easy to interpret,” for example, linear functions. We further suppose that $\mathcal{H}$ is judged to be uninterpretable, for example, the output from a random forest procedure. Suppose we observe data $(x_i, y_i)$, $i = 1, \ldots, n$, with $y_i = f(x_i) + \epsilon_i$, where $x_i \in \Omega \subset \mathbb{R}^d$ and the $\epsilon_i$'s are i.i.d. random errors with mean zero and finite variance. The goal of this work is to specify or estimate $g$ and $h$.

Obviously, it is not necessary that $g$ and $h$ are unique, or can be statistically identified. Regardless of the identifiability of $g$ and $h$, we propose the following double penalty model for fitting,

$$(\hat g, \hat h) = \operatorname*{argmin}_{g \in \mathcal{G},\, h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^n \big(y_i - g(x_i) - h(x_i)\big)^2 + \lambda_1 P_1(g) + \lambda_2 P_2(h), \quad (1)$$

where $P_1$ and $P_2$ are convex penalty functions on $\mathcal{G}$ and $\mathcal{H}$, $\lambda_1$ and $\lambda_2$ are tuning parameters, and $\hat g$ and $\hat h$ are estimators of $g$ and $h$, respectively. Under some circumstances, if $g$ and $h$ can be statistically identified, by using appropriate penalty functions $P_1$ and $P_2$, we can obtain consistent estimators $\hat g$ and $\hat h$ of $g$ and $h$, respectively. Even if $g$ and $h$ are nonidentifiable, by using the two penalty functions $\lambda_1 P_1$ and $\lambda_2 P_2$, the relative contributions of the interpretable and non-interpretable portions to a final prediction rule can be controlled.

Directly solving (1) may be difficult, because $g$ and $h$ may be partly confounded. Here we describe an iterative algorithm to solve the optimization problem in (1).

         Input: Data $(x_i, y_i)$, $i = 1, \ldots, n$, function classes $\mathcal{G}$ and $\mathcal{H}$, and penalty functions $\lambda_1 P_1$ and $\lambda_2 P_2$.
         Set $k = 0$. Let $\hat h_0 = 0$.
         While stopping criteria are not satisfied do
               Solve
$$\hat g_{k+1} = \operatorname*{argmin}_{g \in \mathcal{G}} \frac{1}{n}\sum_{i=1}^n \big(y_i - g(x_i) - \hat h_k(x_i)\big)^2 + \lambda_1 P_1(g), \quad (2)$$
$$\hat h_{k+1} = \operatorname*{argmin}_{h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^n \big(y_i - \hat g_{k+1}(x_i) - h(x_i)\big)^2 + \lambda_2 P_2(h). \quad (3)$$
    Set $k = k + 1$.
          return $\hat g_k$ and $\hat h_k$.
Algorithm 1 Iterative algorithm

In Algorithm 1, at each iteration, two separate optimization problems (2) and (3) are solved with respect to $g$ and $h$, respectively. The idea of Algorithm 1 is similar to the coordinate descent method, which minimizes the objective function with respect to one coordinate direction at a time. The minimization in equations (2) and (3) ensures that the objective function in (1) decreases as $k$ increases. One can stop Algorithm 1 after reaching a fixed number of iterations or when no further improvement of the objective value can be made.
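To make the iteration concrete, here is a minimal Python sketch of Algorithm 1 (our illustration, not the authors' code), assuming the user supplies routines fit_g and fit_h that solve the penalized subproblems (2) and (3) for the chosen classes and penalties:

```python
import numpy as np

def double_penalty_fit(x, y, fit_g, fit_h, max_iter=50, tol=1e-8):
    """Algorithm 1: cyclic updates of the interpretable part g and the
    uninterpretable part h.

    fit_g(x, r) and fit_h(x, r) solve the penalized regressions (2) and (3)
    of the residual r on x, each returning a callable predictor.
    """
    h_pred = np.zeros_like(y, dtype=float)   # initialize with h_0 = 0
    g_fun = h_fun = None
    prev = np.inf
    for _ in range(max_iter):
        g_fun = fit_g(x, y - h_pred)         # update (2): regress y - h on G
        h_fun = fit_h(x, y - g_fun(x))       # update (3): regress y - g on H
        h_pred = h_fun(x)
        loss = np.mean((y - g_fun(x) - h_pred) ** 2)  # diagnostic only; the
        if prev - loss < tol:                # penalties live inside fit_g/fit_h
            break
        prev = loss
    return g_fun, h_fun
```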

3 Separable function classes

Suppose $g$ and $h$ can be statistically specified. It can be seen that $\mathcal{G} \cap \mathcal{H} = \{0\}$ is necessary, for otherwise $g + f_0$ and $h - f_0$ would be another decomposition of $f$ for any nonzero $f_0 \in \mathcal{G} \cap \mathcal{H}$. Furthermore, we would expect that $\mathcal{G}$ and $\mathcal{H}$ are non-overlapping, in the sense defined here. We define $\mathcal{G}$ and $\mathcal{H}$ as $\theta$-separable if there exists $\theta \in [0, 1)$ such that for any functions $g \in \mathcal{G}$ and $h \in \mathcal{H}$,

$$|\langle g, h \rangle_{L_2}| \leq \theta \, \|g\|_{L_2} \|h\|_{L_2}, \quad (4)$$

where $\|f\|_{L_2}$ denotes the $L_2$ norm of a function $f$, and $\langle f_1, f_2 \rangle_{L_2}$ denotes the $L_2$ inner product of functions $f_1$ and $f_2$. In fact, if two function classes are $\theta$-separable, then $g$ and $h$ are unique, as stated in the following lemma.

Lemma 1

Suppose (4) is true for any functions $g \in \mathcal{G}$ and $h \in \mathcal{H}$. Then $g$ and $h$ in the decomposition $f = g + h$ are unique, up to a difference on a measure zero set.

Lemma 1 implies that if two function classes are $\theta$-separable, then $g$ and $h$ are identifiable.
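As a concrete illustration (ours, not from the paper), take $\Omega = [0, 1]$, $\mathcal{G} = \{c_1 \cdot 1 : c_1 \in \mathbb{R}\}$, and $\mathcal{H} = \{c_2 x : c_2 \in \mathbb{R}\}$. For any $g = c_1$ and $h = c_2 x$,

$$\frac{|\langle g, h \rangle_{L_2}|}{\|g\|_{L_2}\|h\|_{L_2}} = \frac{|c_1 c_2|/2}{|c_1| \cdot |c_2|/\sqrt{3}} = \frac{\sqrt{3}}{2},$$

so these two classes are $\theta$-separable with $\theta = \sqrt{3}/2 \approx 0.866$, and by Lemma 1 the decomposition $f = g + h$ is unique.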

We start with a convergence analysis of Algorithm 1 applied to separable function classes, and then provide some examples of separable function classes with theoretical properties.

3.1 Convergence of Algorithm 1

The convergence results depend on the separability with respect to the empirical norm. Define the empirical inner product as

$$\langle f_1, f_2 \rangle_n = \frac{1}{n}\sum_{i=1}^n f_1(x_i) f_2(x_i)$$

for two functions $f_1$ and $f_2$, and $\|f\|_n = \langle f, f \rangle_n^{1/2}$ as the empirical norm of a function $f$. Analogous to the earlier definition, we say $\mathcal{G}$ and $\mathcal{H}$ are separable with respect to the empirical norm if there exists $\theta_n \in [0, 1)$ such that for any functions $g \in \mathcal{G}$ and $h \in \mathcal{H}$,

$$|\langle g, h \rangle_n| \leq \theta_n \, \|g\|_n \|h\|_n. \quad (5)$$
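These empirical quantities are straightforward to compute; a minimal Python sketch (our illustration):

```python
import numpy as np

def inner_n(f1, f2, x):
    """Empirical inner product <f1, f2>_n = (1/n) * sum_i f1(x_i) f2(x_i)."""
    return np.mean(f1(x) * f2(x))

def norm_n(f, x):
    """Empirical norm ||f||_n."""
    return np.sqrt(inner_n(f, f, x))

def theta_n(g, h, x):
    """Empirical separability ratio |<g, h>_n| / (||g||_n ||h||_n) from (5)."""
    return abs(inner_n(g, h, x)) / (norm_n(g, x) * norm_n(h, x))

# with x uniform on [0, 1], theta_n of g(t) = 1 and h(t) = t approaches
# sqrt(3)/2, matching the L2 example above
x = np.random.uniform(size=10_000)
print(theta_n(lambda t: np.ones_like(t), lambda t: t, x))
```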

Now we are ready to present the convergence results, under the condition that $\mathcal{G}$ and $\mathcal{H}$ are separable with respect to the empirical norm.

Theorem 1

Suppose (5) is true for any functions $g \in \mathcal{G}$ and $h \in \mathcal{H}$. Let $\hat g$ and $\hat h$ be as in (1), and let $\hat g_k$ and $\hat h_k$ be defined as in (2) and (3), respectively. We have

$$\|\hat g_k - \hat g\|_n = O(\theta_n^{2k}) \quad \text{and} \quad \|\hat h_k - \hat h\|_n = O(\theta_n^{2k}),$$

as $k$ goes to infinity. Here we use the convention $0^0 = 1$.

It can be seen from Theorem 1 that if $\mathcal{G}$ and $\mathcal{H}$ are separable with respect to the empirical norm, Algorithm 1 achieves a linear convergence. The parameter $\theta_n$ determines the convergence speed.

Since the empirical norm is close to the $L_2$ norm as the sample size increases (van de Geer, 2000), it can be expected that if $\mathcal{G}$ and $\mathcal{H}$ are $\theta$-separable, then they are also separable with respect to the empirical norm. Furthermore, it can be shown that $\theta_n$ is close to $\theta$ in (4). These claims are verified in the following lemma.

Lemma 2

Let $\mathcal{G}$ and $\mathcal{H}$ be $\theta$-separable satisfying (4), and suppose the $x_i$'s are uniformly distributed on $\Omega$. Then for any $\epsilon > 0$ and any functions $g \in \mathcal{G}$ and $h \in \mathcal{H}$ with suitably bounded norms, with probability at least $1 - p_n$,

$$|\langle g, h \rangle_n| \leq (\theta + \epsilon) \, \|g\|_n \|h\|_n,$$

where $p_n \to 0$ as $n \to \infty$, at a rate depending only on $\epsilon$, $\mathcal{G}$, $\mathcal{H}$, and $\Omega$.

By Theorem 1 and Lemma 2, we have the following corollary.

Corollary 1

Suppose $\mathcal{G}$ and $\mathcal{H}$ are $\theta$-separable satisfying (4), and the $x_i$'s are uniformly distributed on $\Omega$. Let $\hat g_k$ and $\hat h_k$ be as in Theorem 1. For any $\epsilon > 0$ with $\theta + \epsilon < 1$, with probability tending to one as the sample size increases, we have

$$\|\hat g_k - \hat g\|_n = O\big((\theta + \epsilon)^{2k}\big) \quad \text{and} \quad \|\hat h_k - \hat h\|_n = O\big((\theta + \epsilon)^{2k}\big),$$

as $k$ goes to infinity.

Corollary 1 shows that if $\theta$ in (4) is small and the sample size is relatively large, then Algorithm 1 converges fast. In particular, if $\langle g, h \rangle_{L_2} = 0$ for any functions $g \in \mathcal{G}$ and $h \in \mathcal{H}$, a few iterations of Algorithm 1 are sufficient to get a good numerical solution to (1).

3.2 Finite dimensional function classes

We start with the easiest case. Suppose the two function classes $\mathcal{G}$ and $\mathcal{H}$ have finite dimensions. To be specific, suppose

$$\mathcal{G} = \Big\{\sum_{j=1}^{p} a_j g_j : a_j \in \mathbb{R}\Big\}, \qquad \mathcal{H} = \Big\{\sum_{j=1}^{q} b_j h_j : b_j \in \mathbb{R}\Big\},$$

where the $g_j$'s and $h_j$'s are known functions defined on a compact set $\Omega$, and $p$ and $q$ are known constants. Furthermore, assume $\mathcal{G}$ and $\mathcal{H}$ are $\theta$-separable. Since the dimension of each function class is finite, we can use the least squares method to estimate $g$ and $h$, i.e.,

$$(\hat g, \hat h) = \operatorname*{argmin}_{g \in \mathcal{G},\, h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^n \big(y_i - g(x_i) - h(x_i)\big)^2. \quad (6)$$

By applying standard arguments in the theory of Vapnik-Chervonenkis subgraph classes (van de Geer, 2000), the consistency of $\hat g$ and $\hat h$ holds. We do not present a detailed discussion, for the conciseness of this paper.

Although the exact solution to the optimization problem in (6) is available, we can still use Algorithm 1 to solve it. By comparing the exact solution with the numeric solution obtained by Algorithm 1, we can study the convergence rate of Algorithm 1 via numerical simulations. Detailed numerical studies of the convergence rate are provided in Section 5.1.
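For instance, the following Python sketch contrasts the exact solution of (6) with the Algorithm 1 iterates, for two hypothetical finite dimensional classes (the basis functions below are placeholders, not those used in the paper):

```python
import numpy as np

def G_basis(x):   # columns span the interpretable class G
    return np.column_stack([np.ones_like(x), x])

def H_basis(x):   # columns span the class H
    return np.column_stack([np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

def exact_solution(x, y):
    """Joint least squares (6): regress y on the concatenated basis."""
    D = np.hstack([G_basis(x), H_basis(x)])
    coef = np.linalg.lstsq(D, y, rcond=None)[0]
    p = G_basis(x).shape[1]
    return coef[:p], coef[p:]    # coefficients of g and of h

def algorithm1(x, y, n_iter=100):
    """Algorithm 1 specialized to (6): alternating least squares fits."""
    G, H = G_basis(x), H_basis(x)
    b = np.zeros(H.shape[1])
    for _ in range(n_iter):
        a = np.linalg.lstsq(G, y - H @ b, rcond=None)[0]   # update (2)
        b = np.linalg.lstsq(H, y - G @ a, rcond=None)[0]   # update (3)
    return a, b
```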

3.3 A generalization of partially linear models

In this subsection, we consider a generalization of partially linear models, where the responses can be expressed as

$$y_i = x_i^T \beta + h(z_i) + \epsilon_i. \quad (7)$$

In the partially linear model (7), $\beta$ is a vector of regression coefficients associated with $x_i$, $h$ is an unknown function of $z_i$ with some known smoothness, where $z_i$ is usually a one dimensional scalar, and $\epsilon_i$ is a random noise. The partially linear model (7) can be estimated by the partial spline estimator (Wahba, 1984; Heckman, 1986), the partial residual estimator (Speckman, 1988; Chen et al., 1988), or SCAD-penalized regression (Xie et al., 2009).

In this work, we consider a more general model. Suppose we observe data $(x_i, y_i)$ for $i = 1, \ldots, n$, where

$$y_i = x_i^T \beta + h(x_i) + \epsilon_i, \quad (8)$$

and the $\epsilon_i$'s are i.i.d. random errors with mean zero and finite variance. We assume that the function $h \in W^m(\Omega)$, where $W^m(\Omega)$ is the Sobolev space with known smoothness $m$. This is a standard assumption in nonparametric regression; see Gu (2013) and van de Geer (2000) for examples. It is natural to define the two function classes by

$\mathcal{G} = \{x^T \beta : \|\beta\|_2 \leq B\}$ and $\mathcal{H} = W^m(\Omega)$, where $\|\cdot\|_2$ denotes the Euclidean distance, and $B$ is a known constant. In practice, we can choose a sufficiently large $B$ such that $\|\beta\|_2 \leq B$ is fulfilled. Note that in our generalization of the partially linear model, we cannot separate the variables $x$ and $z$, which is not the case in the partially linear model (7). It can be seen that $g$ and $h$ are non-identifiable because $\mathcal{G} \subset \mathcal{H}$. Furthermore, $g$ is more interpretable compared with $h$ because it is linear.

In order to uniquely identify $g$ and $h$, we need to restrict the function class $\mathcal{H}$ such that $\mathcal{G}$ and $\mathcal{H}$ are separable. This can be done by applying a newly developed approach, employing the projected kernel (Tuo, 2019). Let $u_1, \ldots, u_p$ be an orthonormal basis of $\mathcal{G}$. Then $\mathcal{G}$ can be defined as the linear span of the basis $u_1, \ldots, u_p$, and the projection of a function $f_1$ on $\mathcal{G}$ is given by

$$P_{\mathcal{G}} f_1 = \sum_{j=1}^{p} \langle f_1, u_j \rangle_{L_2} \, u_j. \quad (9)$$

The perpendicular component is

$$P_{\mathcal{G}}^{\perp} f_1 = f_1 - P_{\mathcal{G}} f_1. \quad (10)$$

By (9) and (10), we can split $\mathcal{H}$ into two perpendicular classes as $\mathcal{H}_1 = P_{\mathcal{G}} \mathcal{H}$ and $\mathcal{H}_2 = P_{\mathcal{G}}^{\perp} \mathcal{H}$, where $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$, and we replace $\mathcal{H}$ by $\mathcal{H}_2$. Since $\mathcal{G}$ and $\mathcal{H}_2$ are perpendicular, they are $\theta$-separable with $\theta = 0$. By Lemma 1, $g$ and $h$ are unique. However, in practice it is usually difficult to find a function in $\mathcal{H}_2$ directly. We propose using projected kernel ridge regression, which depends on the reproducing kernel Hilbert space generated by the projected kernel.

Let $\Psi$ be the (isotropic) Matérn family (Stein, 1999), defined by

$$\Psi(s, t) = \frac{1}{\Gamma(\nu) 2^{\nu-1}} \big(2\sqrt{\nu}\, \phi \|s - t\|_2\big)^{\nu} K_{\nu}\big(2\sqrt{\nu}\, \phi \|s - t\|_2\big), \quad (11)$$

where $K_{\nu}$ is the modified Bessel function of the second kind, and $\phi$ is the range parameter. We use $\mathcal{N}_{\Psi}(\Omega)$ to denote the reproducing kernel Hilbert space generated by $\Psi$, and $\|\cdot\|_{\mathcal{N}_{\Psi}(\Omega)}$ to denote the norm of $\mathcal{N}_{\Psi}(\Omega)$. By Corollary 10.48 in Wendland (2004), $\mathcal{N}_{\Psi}(\Omega)$ coincides with the Sobolev space $W^{\nu + d/2}(\Omega)$. The reproducing kernel Hilbert space generated by the projected kernel can be defined in the following way. Define the linear operators $P^{(1)}$ and $P^{(2)}$, which apply the projection $P_{\mathcal{G}}^{\perp}$ of (10) to the first and the second argument of a bivariate function, respectively, e.g., $(P^{(1)} \Psi)(s, t) = \big(P_{\mathcal{G}}^{\perp} \Psi(\cdot, t)\big)(s)$. The projected kernel of $\Psi$ can be defined by

$$\Psi_{\perp} = P^{(1)} P^{(2)} \Psi. \quad (12)$$

The function class $\mathcal{H}_2$ then is equivalent to the reproducing kernel Hilbert space generated by $\Psi_{\perp}$, denoted by $\mathcal{N}_{\Psi_{\perp}}(\Omega)$, and the norm is denoted by $\|\cdot\|_{\mathcal{N}_{\Psi_{\perp}}(\Omega)}$. For detailed discussion and properties of $\Psi_{\perp}$ and $\mathcal{N}_{\Psi_{\perp}}(\Omega)$, we refer to Tuo (2019).
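A Python sketch of (11) and a discretized version of (12) (our illustration; the $L_2$ projection is approximated by quadrature on a dense grid, and the basis and parameter values are placeholders):

```python
import numpy as np
from scipy.special import gamma, kv   # kv: modified Bessel function of the second kind

def matern(s, t, nu=1.5, phi=1.0):
    """Isotropic Matern kernel (11) in one dimension."""
    r = 2.0 * np.sqrt(nu) * phi * np.abs(np.subtract.outer(s, t))
    r = np.maximum(r, 1e-10)              # avoid the 0 * inf limit at s = t
    return r**nu * kv(nu, r) / (gamma(nu) * 2**(nu - 1))

def projected_kernel_grid(grid, basis, nu=1.5, phi=1.0):
    """Projected kernel (12) evaluated on a uniform grid of [0, 1]: apply a
    discrete analogue of the projection P_perp to both kernel arguments.
    `basis` is a list of functions orthonormal in L_2[0, 1]."""
    w = 1.0 / len(grid)                             # quadrature weight
    U = np.column_stack([u(grid) for u in basis])   # basis values on the grid
    P = np.eye(len(grid)) - w * (U @ U.T)           # discrete I - P_G
    K = matern(grid, grid, nu, phi)
    return P @ K @ P.T

# example: project out constants and linear trends, orthonormalized on [0, 1]
grid = np.linspace(0.0, 1.0, 400)
basis = [lambda t: np.ones_like(t), lambda t: np.sqrt(12.0) * (t - 0.5)]
K_perp = projected_kernel_grid(grid, basis)
```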

By using the projected kernel of $\Psi$, the double penalty model is

$$(\hat\beta, \hat h) = \operatorname*{argmin}_{\beta,\, h \in \mathcal{N}_{\Psi_{\perp}}(\Omega)} \frac{1}{n}\sum_{i=1}^n \big(y_i - x_i^T \beta - h(x_i)\big)^2 + \lambda \|h\|_{\mathcal{N}_{\Psi_{\perp}}(\Omega)}^2, \quad (13)$$

where $\hat\beta$ and $\hat h$ are estimators of $\beta$ and $h$. In practice, we can use generalized cross validation (GCV) to choose the tuning parameter $\lambda$ (Tuo, 2019; Wahba, 1990). If the tuning parameter is chosen properly, we can show that $\hat\beta$ and $\hat h$ are consistent, as stated in the following theorem. In the rest of this paper, we use the following notation. For two positive sequences $a_n$ and $b_n$, we write $a_n \asymp b_n$ if, for some constants $C_1, C_2 > 0$, $C_1 b_n \leq a_n \leq C_2 b_n$.

Theorem 2

Suppose the $x_i$'s are uniformly distributed on $\Omega$, and the noise $\epsilon_i$'s are i.i.d. sub-Gaussian, i.e., satisfying $\mathbb{E}[e^{t\epsilon_i}] \leq C_0 e^{C t^2}$ for some constants $C_0$ and $C$, and all $t \in \mathbb{R}$. Furthermore, assume Assumption A.1 in the Appendix holds. If $\lambda \asymp n^{-2m/(2m+d)}$ with $m = \nu + d/2$, we have

$$\|\hat h - h\|_{L_2(\Omega)} = O_{\mathbb{P}}\big(n^{-m/(2m+d)}\big) \quad \text{and} \quad \|\hat\beta - \beta\|_2 = O_{\mathbb{P}}\big(n^{-m/(2m+d)}\big).$$

Theorem 2 shows that the double penalty model (13) can provide consistent estimators of $\beta$ and $h$, and the convergence rate of $\hat h$ is known to be optimal (Stone, 1982). The convergence rate of $\hat\beta$ in Theorem 2 is slower than the convergence rate in the linear model. We conjecture that the convergence rate of $\hat\beta$ in Theorem 2 is optimal since it is influenced by the estimation of $h$, which may introduce extra error because functions in $\mathcal{G}$ and $\mathcal{H}$ have the same input space.

In order to solve the optimization problem in (13), we apply Algorithm 1. It can be checked that $\mathcal{G}$ and $\mathcal{H}_2$ are separable with respect to the empirical norm, as presented in the following corollary.

Corollary 2

$\mathcal{G}$ and $\mathcal{H}_2$ are separable with respect to the empirical norm with probability tending to one, as the sample size goes to infinity.

Corollary 2 is a direct result of Lemma 2, thus the proof is omitted. By Theorem 1 and Corollary 2, the convergence of Algorithm 1 can be guaranteed. In each iteration of Algorithm 1, $\hat\beta_{k+1}$ and $\hat h_{k+1}$ are solved by

$$\hat\beta_{k+1} = \operatorname*{argmin}_{\beta} \frac{1}{n}\sum_{i=1}^n \big(y_i - x_i^T\beta - \hat h_k(x_i)\big)^2, \qquad \hat h_{k+1} = \operatorname*{argmin}_{h \in \mathcal{N}_{\Psi_{\perp}}(\Omega)} \frac{1}{n}\sum_{i=1}^n \big(y_i - x_i^T\hat\beta_{k+1} - h(x_i)\big)^2 + \lambda \|h\|_{\mathcal{N}_{\Psi_{\perp}}(\Omega)}^2,$$

which have explicit forms as

$$\hat\beta_{k+1} = (X^T X)^{-1} X^T (Y - H_k), \qquad \hat h_{k+1}(\cdot) = r(\cdot)^T (R + n\lambda I_n)^{-1}(Y - X\hat\beta_{k+1}),$$

where $X = (x_1, \ldots, x_n)^T$, $Y = (y_1, \ldots, y_n)^T$, $H_k = (\hat h_k(x_1), \ldots, \hat h_k(x_n))^T$, $r(\cdot) = (\Psi_{\perp}(\cdot, x_1), \ldots, \Psi_{\perp}(\cdot, x_n))^T$, and $R = (\Psi_{\perp}(x_i, x_j))_{1 \leq i, j \leq n}$. Because $\mathcal{G}$ and $\mathcal{H}_2$ are orthogonal, a few iterations of Algorithm 1 are sufficient to obtain a good numeric solution.

4 Non-separable function classes

In Section 3, we considered the case where $\mathcal{G}$ and $\mathcal{H}$ are $\theta$-separable, which implies $g$ and $h$ are statistically identifiable. However, in many practical cases, $\mathcal{G}$ and $\mathcal{H}$ are not $\theta$-separable. Such examples include $\mathcal{G}$ as a linear function class and $\mathcal{H}$ as the function space generated by a neural network. If $\mathcal{G}$ and $\mathcal{H}$ are not $\theta$-separable, then $g$ and $h$ are not statistically identifiable. To see this, note that there exist two sequences of functions $g_j \in \mathcal{G}$ and $h_j \in \mathcal{H}$ such that $\langle g_j, h_j \rangle_{L_2} / (\|g_j\|_{L_2}\|h_j\|_{L_2}) \to 1$, so that elements of the two classes can be arbitrarily close in direction. This implies that $g$ and $h$ are not statistically identifiable, and therefore we cannot consistently estimate $g$ and $h$.

Although $\mathcal{G}$ and $\mathcal{H}$ may not be $\theta$-separable, we can still use (1) to specify $\hat g$ and $\hat h$. We propose choosing $\mathcal{G}$ with simple structure, so as to be “easy to interpret,” and choosing $\mathcal{H}$ to be flexible, to improve the prediction accuracy. The tradeoff between interpretation and prediction accuracy can be adjusted by applying different penalty functions $\lambda_1 P_1$ and $\lambda_2 P_2$. If $\lambda_1$ is large, then (1) forces the contribution of $g$ to be small and that of $h$ to be large, which indicates that the model is more flexible, but less interpretable. On the other hand, if $\lambda_2$ is large, then the model is more interpretable, but may lose prediction power.

The theoretical analysis of arbitrary function classes $\mathcal{G}$ and $\mathcal{H}$ varies case by case, and depends on the structure of the function classes. We do not discuss theoretical results for non-$\theta$-separable function classes, which are beyond the scope of this paper.

4.1 Convergence of Algorithm 1

For non-separable function classes, a stronger condition on the penalty functions is needed to show the convergence of Algorithm 1. Let $\|\cdot\|$ be a (semi-)norm of a Hilbert space. A function $P$ is said to be strongly convex with respect to the (semi-)norm $\|\cdot\|$ if there exists a parameter $\mu > 0$ such that for any $f_1, f_2$ in the domain and $\tau \in [0, 1]$,

$$P(\tau f_1 + (1 - \tau) f_2) \leq \tau P(f_1) + (1 - \tau) P(f_2) - \frac{\mu}{2} \tau (1 - \tau) \|f_1 - f_2\|^2.$$

As a simple example, $P(f) = \|f\|^2$ is strongly convex for any Hilbert-space norm $\|\cdot\|$. If $P_1$ or $P_2$ is strongly convex, Algorithm 1 converges, as stated in the following theorem.

Theorem 3

Suppose $P_1$ or $P_2$ is strongly convex with respect to the empirical norm with parameter $\mu$. We have

$$\|\hat g_k - \hat g\|_n = O(\rho^k) \quad \text{and} \quad \|\hat h_k - \hat h\|_n = O(\rho^k),$$

as $k$ goes to infinity, for some $\rho \in (0, 1)$ depending on $\mu$.

From Theorem 3, it can be seen that Algorithm 1 can also achieve a linear convergence if $P_1$ or $P_2$ is strongly convex, even if $g$ and $h$ are not statistically identifiable. We only require one penalty function to be strongly convex. The convergence rate depends on the parameter $\mu$, which measures the convexity of a function. If the penalty function is more convex, i.e., $\mu$ is larger, then the convergence of Algorithm 1 is faster. The strong convexity of the penalty function $P_1$ or $P_2$ can be easily fulfilled, because the square of the norm of any Hilbert space is strongly convex. For example, the penalty functions in ridge regression and in neural networks with a fixed number of neurons are strongly convex with respect to the empirical norm.
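The claim about squared Hilbert-space norms follows from the identity

$$\|\tau f_1 + (1 - \tau) f_2\|^2 = \tau \|f_1\|^2 + (1 - \tau) \|f_2\|^2 - \tau (1 - \tau) \|f_1 - f_2\|^2, \qquad \tau \in [0, 1],$$

which holds in any inner product space; hence $P(f) = \|f\|^2$ satisfies the strong convexity definition with $\mu = 2$, with equality.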

We summarize Theorem 1 and Theorem 3 in the following corollary.

Corollary 3

Suppose $\mathcal{G}$ and $\mathcal{H}$ are separable with respect to the empirical norm, or one of the penalty functions $P_1$ and $P_2$ is strongly convex with respect to the empirical norm. We have

$$\|\hat g_k - \hat g\|_n \to 0 \quad \text{and} \quad \|\hat h_k - \hat h\|_n \to 0,$$

as $k$ goes to infinity.

5 Numerical examples

5.1 Convergence rate of Algorithm 1

In this subsection, we report numerical studies on the convergence rate of Algorithm 1, and verify that the convergence rate in Theorem 1 is sharp. We consider two finite dimensional function classes such that the analytic solution of (6) is available, as stated in Section 3.2. By comparing the numeric solution with the analytic solution, we can verify that the convergence rate is sharp.

We consider two function classes $\mathcal{G}$ and $\mathcal{H}$, each spanned by a single known basis function, where a known parameter $a$ controls the degree of separation of the two function classes, i.e., the parameter $\theta$ in (4). It is easy to verify the value of $\theta$ directly for each choice of $a$; the pairs $(a, \theta)$ used in the simulations are listed in Table 1.

Suppose the underlying function is $f = g + h$ with $g \in \mathcal{G}$ and $h \in \mathcal{H}$. Let $(\hat g, \hat h)$ be the solution to (6), and let $(\hat g_k, \hat h_k)$ be the values obtained at the $k$th iteration of Algorithm 1. By Theorem 1,

$$\|\hat g_k - \hat g\|_n = O(\theta_n^{2k}). \quad (14)$$

By taking logarithms on both sides of (14), we have

$$\log \|\hat g_k - \hat g\|_n \approx 2 k \log \theta + C, \quad (15)$$

where the approximation is because of Corollary 1 ($\theta_n \approx \theta$). If the convergence rate in Theorem 1 is sharp, $\log \|\hat g_k - \hat g\|_n$ is an approximately linear function of $k$, and the slope is close to $2 \log \theta$.
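In code, the sharpness check is a least squares fit of the log errors against the iteration index (our sketch):

```python
import numpy as np

def fitted_slope(errors):
    """Slope of log ||g_k - g_hat||_n versus k, as in (15)."""
    k = np.arange(len(errors))
    slope, intercept = np.polyfit(k, np.log(errors), 1)
    return slope

# sharpness: fitted_slope(errors) should be close to 2 * np.log(theta)
```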

In our simulation studies, we consider several values of $a$, as listed in Table 1. We choose the noise $\epsilon_i \sim N(0, \sigma^2)$, where $N(0, \sigma^2)$ is a normal distribution with mean zero and variance $\sigma^2$. The algorithm stops when the left hand side of (15) falls below a fixed tolerance. We choose 50 uniformly distributed points as training points. We run 100 simulations and take the average of the regression coefficient and the number of iterations needed for each $a$. The results are shown in Table 1.

a     θ       2 log θ   Regression coefficient   Iteration numbers   Absolute difference
2     0.978   -0.045         -0.050                  491.55               0.006
3     0.828   -0.378         -0.419                   59.02               0.040
3.5   0.615   -0.973         -1.121                   22.34               0.148
4     0.304   -2.383         -2.624                   10                  0.241
Table 1: The simulation results when the sample size is fixed. The last column shows the absolute difference between the third column (2 log θ) and the fourth column (the fitted regression coefficient).

Corollary 1 shows that the approximation in (15) is more accurate when the sample size is larger. We conduct numerical studies using sample sizes $n = 20, 50, 100, 150, 200$, with the separation parameter fixed at $a = 3$ (so that $2\log\theta \approx -0.378$). The results are presented in Table 2.

Sample size   Regression coefficient   Iteration numbers   Absolute difference
20                 -0.110                   225.05               0.269
50                 -0.410                    60.26               0.0315
100                -0.404                    61                  0.0260
150                -0.363                    68                  0.0148
200                -0.381                    65                  0.00244
Table 2: The simulation results under different sample sizes. The last column shows the absolute difference between $2\log\theta$ and the regression coefficient.

From Tables 1 and 2, we find that the absolute difference increases as $a$ increases and as the sample size decreases. When $\theta$ decreases, the iteration number decreases, which implies that the convergence of Algorithm 1 becomes faster. These results corroborate our theory. The regression coefficients are close to our theoretical assertion $2\log\theta$, which verifies that the convergence rate in Theorem 1 is sharp.

5.2 Prediction of double penalty model

To study the prediction performance of the double penalty model, we consider two examples, with $\theta$-separable function classes and non-$\theta$-separable function classes, respectively.

Example 1

Consider the test function of Gramacy and Lee (2012).

Let $\mathcal{G}$ be the linear function class, and let $\mathcal{H}$ be the reproducing kernel Hilbert space generated by the projected kernel. The projected kernel is calculated as in (12), where $\Psi$ is as in (11) with fixed smoothness and range parameters. We use 20 uniformly distributed points as training points. For each simulation, we calculate the mean squared prediction error, which is approximated by calculating the mean squared prediction error on 201 evenly spaced points. We run 100 simulations, and the average mean squared prediction error is 0.016. In this example, the number of iterations needed in Algorithm 1 is less than three because the two function classes are orthogonal, which corroborates the results in Corollary 1. Figure 2 in the Appendix illustrates one simulation result.

Example 2

Consider a modified function of Sun et al. (2014).

We take $\mathcal{G}$ to be the linear function class, and $\mathcal{H}$ as the reproducing kernel Hilbert space generated by $\Psi$, where $\Psi$ is as in (11) with fixed smoothness and range parameters. Note that $\mathcal{G}$ and $\mathcal{H}$ are not $\theta$-separable because $\mathcal{G} \subset \mathcal{H}$.

The double penalty model is

$$(\hat g, \hat h) = \operatorname*{argmin}_{g \in \mathcal{G},\, h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^n \big(y_i - g(x_i) - h(x_i)\big)^2 + \lambda \|h\|_{\mathcal{N}_{\Psi}(\Omega)}^2. \quad (16)$$

We consider a range of values of the tuning parameter $\lambda$, scaled with the sample size $n$. The noise is $\epsilon_i \sim N(0, \sigma^2)$, where two values of $\sigma$ are considered. The iteration numbers are fixed in each simulation. We choose a maximin Latin hypercube design (Santner et al., 2003) with sample size 50 as the training set. We run 100 simulations for each case and calculate the mean squared prediction error on the testing set, which consists of the first 1000 points of the Halton sequence (Niederreiter, 1992).
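For reference, both kinds of designs are available in SciPy's qmc module (our illustration; the input dimension is a placeholder, and SciPy's Latin hypercube optimizes a discrepancy criterion rather than the maximin criterion cited in the text):

```python
from scipy.stats import qmc

d = 2  # input dimension (placeholder)
# Latin hypercube training design of size 50
train_x = qmc.LatinHypercube(d=d, optimization="random-cd").random(n=50)
# Test set: the first 1000 points of the Halton sequence
test_x = qmc.Halton(d=d, scramble=False).random(n=1000)
```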

For the conciseness of this paper, the simulation results are reported in the Appendix. Here we present the main findings: (i) The prediction errors in all cases are small, which suggests that the double penalty model can make accurate predictions. (ii) If we increase the number of iterations, the training error decreases. The prediction error decreases when $\lambda$ is relatively large, and becomes large when $\lambda$ is too small. (iii) One iteration of Algorithm 1 is sufficient to obtain a good solution to (16). (iv) The training error in the case with the smaller noise level is smaller. If $\lambda$ is chosen properly, the prediction error in the case with the smaller noise level is small. However, there is not much difference in prediction error between the two noise levels when $\lambda$ is large. (v) For all values of $\lambda$, the norm of the linear function $\hat g$ does not vary much. The norm of $\hat h$, on the other hand, increases as $\lambda$ decreases. This is because a smaller $\lambda$ implies less penalty on $h$. (vi) Comparing the norm of $\hat g$ with the norm of $\hat h$, the norm of $\hat g$ is much larger, which is desired because we tend to maximize the interpretable part, which is the class of linear functions in this example.

6 Application to real datasets

To illustrate, we apply the approach to two datasets. The first dataset is from Goodrich et al. (2014), which includes 50 human fecal microbiome features, consisting of genetic sequence tags corresponding to bacterial taxa, for unrelated individuals, with a response variable of log-transformed body mass index (BMI). To increase the prediction accuracy, we first reduce the number of original features for the final dataset using the HFE cross-validated approach (Oudah and Henschel, 2018), as discussed in Zhou and Gallins (2019). The second dataset is the diabetes dataset from the lars R package, widely used to illustrate penalized regression (Hastie et al., 2005). The response is a log-transformed measure of disease progression one year after baseline, and the predictor features are ten baseline variables: age, sex, BMI, average blood pressure, and six blood serum measurements.

Following Algorithm 1, we fit the interpretable part with the LASSO algorithm (Tibshirani, 1996), using the built-in $\ell_1$ penalty as $P_1$, with parameter $\lambda_1$, as implemented in the R package glmnet. For the “uninterpretable” part, we use the xgboost decision tree approach, with the built-in $\ell_2$ penalty as $P_2$, with parameter $\lambda_2$, as implemented in the R package xgboost (Chen and Guestrin, 2016). For xgboost, we set the $\ell_1$ penalty to zero throughout, with other parameters (tree depth, etc.) set by cross-validation internally, while preserving convexity of $P_2$. We also set the maximum number of boosting iterations at ten. At each iterative step of LASSO and xgboost, ten simulations of five-fold cross-validation were performed, and the predicted values were then averaged.
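A skeleton of one iteration in Python, using scikit-learn's Lasso in place of glmnet and the xgboost package's scikit-learn interface (a sketch only; the paper's implementation is in R and additionally averages ten rounds of five-fold cross-validated predictions at each step):

```python
import numpy as np
from sklearn.linear_model import Lasso
from xgboost import XGBRegressor

def lasso_xgb_step(X, y, h_pred, lam1, lam2):
    """One Algorithm 1 iteration: LASSO for the interpretable part g, and
    gradient-boosted trees with an l2 penalty for the uninterpretable part h."""
    lasso = Lasso(alpha=lam1).fit(X, y - h_pred)               # update (2)
    g_pred = lasso.predict(X)
    booster = XGBRegressor(reg_lambda=lam2, reg_alpha=0.0,     # l1 penalty zero
                           n_estimators=10)                    # <= ten boosting rounds
    booster.fit(X, y - g_pred)                                 # update (3)
    return g_pred, booster.predict(X)
```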

Finally, in order to explore the tradeoffs between the interpretable and uninterpretable parts, we first establish a range-finding exercise for the penalty tuning parameters $\lambda_1$ and $\lambda_2$ on the logarithmic scale, such that the two parameters vary inversely along a line in the $(\log \lambda_1, \log \lambda_2)$ plane. We refer to this tradeoff as the transect between the tuning parameters, with low values of $\lambda_1$, for example, emphasizing and placing weight on the interpretable part by enforcing a low penalty for overfitting. To illustrate performance, we use the Pearson correlation coefficient between the response vector $y$ and the average (cross-validated) values of $\hat g$, $\hat h$, and $\hat g + \hat h$ over the transect. The correlations are of course directly related to the squared-error term in the objective function, but are easier to interpret. Note that $\hat g$ and $\hat h$ are not orthogonal, so the correlations do not partition into the overall correlation of $y$ with $\hat g + \hat h$. Additionally, as a final comparison, we compute these correlation values over the entire grid of $(\lambda_1, \lambda_2)$ values, to ensure that the transect was largely capturing the best choice of tuning parameters.
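The transect diagnostic itself is simple to compute (our sketch):

```python
import numpy as np

def transect_correlations(y, g_cv, h_cv):
    """Pearson correlations of y with the cross-validated predictions of g, h,
    and the combined rule g + h, at one (lambda1, lambda2) setting."""
    cor = lambda a, b: np.corrcoef(a, b)[0, 1]
    return cor(y, g_cv), cor(y, h_cv), cor(y, g_cv + h_cv)
```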

For the Goodrich microbiome data, the top panel of Figure 1 shows the correlations between $y$ and the three cross-validated predictors over the transect. Low values of $\lambda_1$ are favored, although it is clear that the decision tree is favored throughout most of the transect, i.e., $\hat h$ has much higher correlations with $y$ than $\hat g$ does. Using values in the range $(-2, -1)$ on the log tuning-parameter scale maximizes the correlation with the interpretable portion, while still achieving nearly the overall maximum correlation for the combined prediction rule (correlation of nearly 0.5). Our subjective “best balance” region for the interpretable portion is shown on the figure. The bottom panel of Figure 1 shows the analogous results for the diabetes dataset. Here LASSO provides overall good predictions for a small tuning parameter, giving good correlations (in the range 0.55-0.6) of $y$ with $\hat g$, $\hat h$, and $\hat g + \hat h$. As the tuning parameter increases, the correlation between $y$ and $\hat g$ falls off dramatically, and our suggested “best balance” point is also shown. In no instance were the correlation values for the full grid of $(\lambda_1, \lambda_2)$ more than 0.015 greater than the greatest value observed along the transects.

Figure 1: Top panel: cross-validated correlations between $y$ and each of $\hat g$, $\hat h$, and $\hat g + \hat h$ for the microbiome dataset, where the tuning parameters vary along the transect as described in the text. Bottom panel: the analogous correlations for the diabetes dataset. Grey region and black vertical line represent suggested tuning parameter values to maximize interpretability while preserving high prediction accuracy.

7 Discussion

In this work, we propose using a double penalty model as a means of balancing tradeoffs between the interpretable and the non-interpretable portions of a prediction model. Unlike much statistical modelling, which focuses primarily on prediction accuracy, we consider interpretability as an explicit feature. The user can specify the function classes for the two components of the model, and is able to judge the tradeoff between interpretability and overall prediction accuracy. Our model is very flexible and can be used in many practical scenarios. If the two function classes are orthogonal, the convergence of the algorithm provided in this work is very fast. This observation inspires potential future work: given any two function classes, to construct two separable function classes that are orthogonal, and to obtain consistency results, since the two portions are then identifiable.

Although our interest here is theoretical, we have also illustrated how the fitting algorithm can be used in practice to make the relative contribution of the interpretable part $\hat g$ large, while not substantially degrading overall predictive performance. Further practical implications and implementation issues will be described elsewhere.

Appendix A Proofs of Lemmas

A.1 Proof of Lemma 1

The proof is straightforward. Suppose there exist another two functions $g' \in \mathcal{G}$ and $h' \in \mathcal{H}$ such that $f = g' + h'$. Applying (4) to the differences $g - g'$ and $h - h'$, we have

$$0 = \|g - g' + h - h'\|_{L_2}^2 \geq \|g - g'\|_{L_2}^2 + \|h - h'\|_{L_2}^2 - 2\theta \|g - g'\|_{L_2} \|h - h'\|_{L_2} \geq (1 - \theta)\big(\|g - g'\|_{L_2}^2 + \|h - h'\|_{L_2}^2\big) \geq 0,$$

where the equality holds only when $\|g - g'\|_{L_2} = \|h - h'\|_{L_2} = 0$. Thus, we finish the proof.

A.2 Proof of Lemma 2

The proof of Lemma 2 relies on the entropy number. Let $(\mathcal{F}, d)$ be a metric space with metric $d$. The $\epsilon$-covering number of the metric space, denoted as $N(\epsilon, \mathcal{F}, d)$, is the minimum integer $N$ such that there exist $N$ distinct balls in $\mathcal{F}$ with radius $\epsilon$, whose union covers $\mathcal{F}$. Let