Boolean Decision Rules via Column Generation

May 24, 2018 · Sanjeeb Dash et al. · IBM

This paper considers the learning of Boolean rules in either disjunctive normal form (DNF, OR-of-ANDs, equivalent to decision rule sets) or conjunctive normal form (CNF, AND-of-ORs) as an interpretable model for classification. An integer program is formulated to optimally trade classification accuracy for rule simplicity. Column generation (CG) is used to efficiently search over an exponential number of candidate clauses (conjunctions or disjunctions) without the need for heuristic rule mining. This approach also bounds the gap between the selected rule set and the best possible rule set on the training data. To handle large datasets, we propose an approximate CG algorithm using randomization. Compared to three recently proposed alternatives, the CG algorithm dominates the accuracy-simplicity trade-off in 7 out of 15 datasets. When maximized for accuracy, CG is competitive with rule learners designed for this purpose, sometimes finding significantly simpler solutions that are no less accurate.


1 Introduction

Interpretability has become a well-recognized goal for machine learning models. The need for interpretable models is certain to increase as machine learning pushes further into domains such as medicine, criminal justice, and business, where such models complement human decision-makers and decisions can have major consequences on human lives. Transparency is thus required for domain experts to understand, critique, and trust models, and reasoning is required to explain individual decisions.

This paper considers Boolean rules in either disjunctive normal form (DNF, OR-of-ANDs) or conjunctive normal form (CNF, AND-of-ORs) as a class of interpretable models for binary classification. An example of a DNF rule with two clauses is "IF (# accounts exceeds a threshold) OR (# accounts does not exceed it AND debt is above a limit) THEN risk = high". Particularly desirable for interpretability are compact Boolean rules with few clauses and few conditions in each clause. DNF classification rules are also referred to as decision rule sets, where each conjunction is considered an individual rule, rules are unordered, and a positive prediction is made when at least one of the rules is satisfied. Rule sets stand in contrast to decision lists rivest1987; letham2015; wangf2015; angelino2017; lakkaraju2017; yang2017, where rules are ordered in an IF-ELSE sequence, and decision trees breiman1984; quinlan1993; bertsimas2017, where they are organized into a tree structure. While the latter two classes are also considered interpretable, the metrics for measuring their complexity are different and not directly comparable freitas2014. Moreover, a user study lakkaraju2016 has quantified the extra effort involved in understanding decision lists due to the need to account for the negations of all preceding rules.

The learning of Boolean rules and rule sets has an extensive history spanning multiple fields. DNF learning theory (e.g. valiant1984; klivans2004; feldman2012) focuses on the ideal noiseless setting (sometimes allowing arbitrary queries) and is less relevant to the practice of learning compact models from noisy data. Predominant practical approaches include a covering or separate-and-conquer strategy (clark1989; clark1991; cohen1995; frank1998; friedman1999; marchand2003; see also the survey fuernkranz2012) of learning rules one by one and removing "covered" examples, a bottom-up strategy of combining more specific rules into more general ones salzberg1991; domingos1996; muselli2002, and associative classification, in which association rule mining is followed by rule selection using various criteria liu1998; li2001; yin2003; wang2005; chen2006; cheng2007. Broadly speaking, these approaches employ heuristics and/or multiple criteria not directly related to classification accuracy. Moreover, they do not explicitly consider model complexity, a problem that has been noted especially with associative classification. Rule set models have been generalized to rule ensembles cohen1999; friedman2008; dembczynski2010, which use boosting and linear combination rather than logical disjunction; the interpretability of such models is again not comparable to that of rule sets. Models produced by logical analysis of data boros2000; hammer2006 from the operations research community are similarly weighted linear combinations.

In recent years, spurred by the demand for interpretable models, several researchers have revisited Boolean and rule set models and proposed methods that jointly optimize accuracy and simplicity within a single objective function. These works, however, have both restricted the problem and approximated its solution. In lakkaraju2016; wang2017; wang2015, frequent rule miners are first used to produce a set of candidate rules; a greedy forward-backward algorithm lakkaraju2016, simulated annealing wang2017, or integer programming (IP) (in an unpublished manuscript wang2015) is then used to select rules from the candidates. The drawback of rule mining is that it limits the search space while often still producing a large number of rules, which then have to be filtered using criteria such as information gain. wang2015 also presented an IP formulation (but no computational results) that jointly constructs and selects rules without pre-mining. su2016 developed an IP formulation for DNF and CNF learning in which the number of clauses (conjunctions or disjunctions) is fixed; the problem is then solved approximately by decomposing it into subproblems and applying a linear programming (LP) method malioutov2013, which requires rounding of fractional solutions.

In this paper, we also propose an IP formulation for Boolean rule (DNF or CNF) learning but one that avoids the above limitations. Rather than mining rules, we use the large-scale optimization technique of column generation (CG) to intelligently search over the exponential number of all possible clauses, without enumerating even a pre-mined subset (which can be large). Instead, only those clauses that can improve the current solution are generated on the fly. In practice, our approach solves the IP formulation to provable optimality for smaller datasets. For large datasets we employ an approximate version of CG by randomly selecting samples and candidate features that can be used in a clause. To speed up computation, we also generate additional clauses using a greedy algorithm that still optimizes the correct objective.

A numerical evaluation is presented using 15 datasets. In terms of the trade-off achieved between accuracy and rule simplicity, our CG algorithm dominates three other recent proposals on 7 of the 15 datasets, whereas each of the others dominates on at most two. When optimized for accuracy using cross-validation, CG remains competitive with rule learners such as RIPPER cohen1995 that are designed for maximum accuracy. In some instances it provides significantly less complex models with little to no sacrifice in accuracy.

2 Problem formulation

We consider supervised binary classification given a training dataset of $n$ samples $(x^i, y^i)$, $i = 1, \dots, n$, with labels $y^i \in \{0, 1\}$. Let the index set $\{1, \dots, n\}$ be partitioned into $\mathcal{P} \cup \mathcal{Z}$, where $\mathcal{P}$ contains the indices of the samples with label 1 and $\mathcal{Z}$ contains the ones with label 0. For the problem formulation in this section, all features $x^i_j$, $j \in J$, are assumed to be binary-valued as well; binarization of numerical and categorical features is discussed in Section 4.

The presentation focuses on the problem of learning a Boolean classifier in DNF (OR-of-ANDs). Given binary-valued features, a clause corresponds to a conjunction of features, and a sample satisfies a clause if it has all features contained in the clause (i.e. $x^i_j = 1$ for all features $j$ in the clause). Since a DNF classifier is equivalent to a rule set, the terms clause, conjunction, and (single) rule (within a rule set) are used interchangeably. As shown in su2016 using De Morgan's laws, the same formulation applies equally well to CNF learning by negating both the labels $y^i$ and the features $x^i_j$. The method can also be extended to multi-class classification in the usual one-versus-rest manner.
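To make the clause semantics concrete, here is a minimal NumPy sketch of clause satisfaction and DNF prediction; representing a clause as a tuple of feature indices is our own convention for illustration, not something fixed by the paper.

```python
import numpy as np

def satisfies(X, clause):
    """True for each row of the binary matrix X that has all features in
    `clause` equal to 1 (i.e., the row satisfies the conjunction)."""
    if len(clause) == 0:
        return np.ones(X.shape[0], dtype=bool)  # empty conjunction: always true
    return X[:, list(clause)].all(axis=1)

def predict_dnf(X, clauses):
    """Rule-set (DNF) prediction: positive iff at least one clause is satisfied."""
    if not clauses:
        return np.zeros(X.shape[0], dtype=bool)
    return np.any([satisfies(X, c) for c in clauses], axis=0)
```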

2.1 An integer program to minimize Hamming loss

Our objective is to minimize the Hamming loss of the rule set, as is also done in su2016; lakkaraju2016. For each incorrectly classified sample, the Hamming loss counts the number of clauses that would have to be selected or removed to classify it correctly. More precisely, it equals the number of samples with label 1 that are classified incorrectly (false negatives), plus, for each sample with label 0, the number of selected clauses that the sample satisfies. Thus while each false negative contributes one unit to this loss function, representing the single clause that would need to be selected, a false positive contributes more than one unit if it satisfies multiple clauses, all of which must be removed.
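A small self-contained sketch of this loss, under the same tuple-of-feature-indices clause representation as above:

```python
import numpy as np

def hamming_loss(X, y, clauses):
    """Hamming loss of a rule set: one unit per false negative, plus one unit
    per (negative sample, selected clause it satisfies) pair."""
    y = np.asarray(y, dtype=bool)
    if clauses:
        covered = np.stack([X[:, list(c)].all(axis=1) for c in clauses])
    else:
        covered = np.zeros((0, len(y)), dtype=bool)
    false_negatives = int((y & ~covered.any(axis=0)).sum())
    false_positive_units = int(covered[:, ~y].sum())  # clauses to remove
    return false_negatives + false_positive_units
```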

We bound the complexity of the rule set by a given parameter $C$, both to prevent over-fitting and to keep the model interpretable. For concreteness, we define the complexity of a clause to be a fixed cost of one plus the number of conditions in the clause; other linear combinations can be handled equally well. The total complexity of a rule set is the sum of the complexities of its clauses. Alternatively, one could include an additional term in the objective function to penalize complexity, but we find it more natural to explicitly bound the maximum complexity, as this offers better control in applications where interpretable rules are preferred. Clearly it is also possible to use both a constraint and a penalty term.

We express the above notions of Hamming loss and complexity in an integer program (IP) that is not practical for real-life datasets as written but is useful to explain the conceptual framework behind our approach. Let $\mathcal{K}$ denote the collection of all possible (exponentially many) clauses formed from the features $j \in J$, and let $\mathcal{K}_i \subseteq \mathcal{K}$ contain the clauses satisfied by sample $i$, for all $i \in \{1, \dots, n\}$. Note that as the features are binary, $\mathcal{K}$ is indeed finite. Letting decision variable $w_k$ for $k \in \mathcal{K}$ denote whether clause $k$ is used in the rule set, $c_k$ denote the complexity of clause $k$, and $\xi_i$ for $i \in \mathcal{P}$ denote the positive samples classified incorrectly, we have the following IP:

$$z_{\mathrm{MIP}} \;=\; \min \; \sum_{i \in \mathcal{P}} \xi_i + \sum_{i \in \mathcal{Z}} \sum_{k \in \mathcal{K}_i} w_k \tag{1}$$
$$\text{s.t.} \quad \xi_i + \sum_{k \in \mathcal{K}_i} w_k \ge 1, \quad \xi_i \ge 0, \qquad i \in \mathcal{P} \tag{2}$$
$$\sum_{k \in \mathcal{K}} c_k w_k \le C \tag{3}$$
$$w_k \in \{0, 1\}, \qquad k \in \mathcal{K} \tag{4}$$

The objective function (1) is the Hamming loss as described. Constraints (2) identify false negatives, which have $\xi_i = 1$ and are therefore not "covered" by any selected clause. Note that $w_k$ being binary implies $\xi_i \in \{0, 1\}$ in any optimal solution because of the objective function. Constraint (3) bounds the complexity of the rule set. We call this formulation the Master IP (MIP) and call its linear programming (LP) relaxation, obtained by dropping the integrality constraint (4), the Master LP (MLP), denoting its optimal value by $z_{\mathrm{LP}}$. It is also possible to weight the two terms in the objective (1) differently, for example to balance unequal classes, but we do not pursue that variation here.
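As an illustration, the following sketch builds and solves the LP relaxation restricted to an explicit set of candidate clauses. It uses the open-source PuLP modeling library with the CBC solver, an assumption made here for the sake of a runnable example (the experiments in this paper use CPLEX); the names `w`, `xi`, `C`, and `K_i` mirror the formulation above.

```python
import numpy as np
import pulp

def restricted_mlp(X, y, clauses, C):
    """Solve the Restricted MLP over an explicit clause subset (a sketch).
    Returns the model and the duals (mu, lam) needed for pricing."""
    y = np.asarray(y, dtype=bool)
    P, Z = np.flatnonzero(y), np.flatnonzero(~y)
    K_i = {i: [k for k, c in enumerate(clauses) if X[i, list(c)].all()]
           for i in range(len(y))}
    cost = [1 + len(c) for c in clauses]  # complexity: 1 + #conditions

    m = pulp.LpProblem("MasterLP", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{k}", 0, 1) for k in range(len(clauses))]
    xi = {i: pulp.LpVariable(f"xi{i}", 0) for i in P}
    m += (pulp.lpSum(xi.values())
          + pulp.lpSum(w[k] for i in Z for k in K_i[i]))                 # (1)
    for i in P:
        m += xi[i] + pulp.lpSum(w[k] for k in K_i[i]) >= 1, f"cover{i}"  # (2)
    m += pulp.lpSum(cost[k] * w[k] for k in range(len(clauses))) <= C, "budget"  # (3)
    m.solve(pulp.PULP_CBC_CMD(msg=False))

    mu = {i: m.constraints[f"cover{i}"].pi for i in P}  # duals of (2)
    lam = -m.constraints["budget"].pi  # dual of (3), sign flipped so lam >= 0
    return m, mu, lam
```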

2.2 Column generation framework

Clearly it is only practical to solve the Master IP for very small datasets. Moreover, even solving the Master LP explicitly is often intractable due to the fact that it has exponentially many variables. An effective way to solve such large LPs is to use the column generation framework colgen98 ; IPref where only a small subset of all possible variables (clauses) is generated explicitly and the optimality of the LP is guaranteed by iteratively solving a pricing problem.

To apply this framework to the MIP, the first step is to restrict the formulation by replacing the set $\mathcal{K}$ with a very small subset $\mathcal{K}' \subseteq \mathcal{K}$ and to explicitly solve the LP relaxation of the resulting smaller problem. The optimal value of this so-called Restricted MLP provides an upper bound on $z_{\mathrm{LP}}$, and it can potentially be improved by augmenting the Restricted MLP with additional variables corresponding to the missing clauses. The second step is to identify these clauses without explicitly considering all of them. Repeating these steps until there are no improving clauses (i.e. no variables missing from the Restricted MLP that can reduce the cost) solves the MLP to optimality.
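Schematically, and reusing the `restricted_mlp` sketch above together with the `pricing` sketch given later in this section, the loop looks like this:

```python
def column_generation(X, y, C, init_clauses, max_iter=100, tol=1e-8):
    """Column generation loop (a sketch): alternate between the Restricted
    MLP and the pricing search until no improving clause exists.
    init_clauses: a non-empty starting pool, e.g. all single-feature clauses."""
    clauses = list(init_clauses)
    model = None
    for _ in range(max_iter):
        model, mu, lam = restricted_mlp(X, y, clauses, C)
        clause, rc = pricing(X, y, mu, lam)
        if rc >= -tol:  # no negative reduced cost: MLP solved to optimality
            break
        clauses.append(clause)
    return clauses, model
```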

To find the missing clauses that can potentially improve the value of the Restricted MLP, one needs to check whether there are variables missing from the Restricted MLP that have negative reduced cost LPref. This can be done using the optimal dual solution to the Restricted MLP. Toward this end, let $\mu_i \ge 0$ for $i \in \mathcal{P}$ denote the dual variables associated with constraints (2) and let $\lambda \ge 0$ be the dual variable associated with (3). Let $\delta_i \in \{0, 1\}$ denote whether the $i$th sample satisfies the missing clause in question. If we let $c$ denote the complexity of the clause, then its reduced cost is equal to

$$\hat{\rho} \;=\; \sum_{i \in \mathcal{Z}} \delta_i \;-\; \sum_{i \in \mathcal{P}} \mu_i \delta_i \;+\; \lambda c. \tag{5}$$

The first term in (5) is the cost of the missing clause in the objective function (1), expressed in terms of $\delta$. The second term is the sum of the dual variables associated with the constraints (2) in which the clause appears. The last term is the dual variable associated with constraint (3) multiplied by the complexity of the clause.
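Given the duals returned by the Restricted MLP sketch above, (5) is straightforward to evaluate for any explicit candidate clause:

```python
import numpy as np

def reduced_cost(X, y, mu, lam, clause):
    """Reduced cost (5) of a candidate clause; mu is keyed by positive-sample
    index and lam is the (nonnegative) dual of the complexity budget."""
    y = np.asarray(y, dtype=bool)
    delta = X[:, list(clause)].all(axis=1)  # delta_i for every sample
    neg_cover = int(delta[~y].sum())        # sum over i in Z
    pos_duals = sum(mu[i] for i in np.flatnonzero(y) if delta[i])
    return neg_cover - pos_duals + lam * (1 + len(clause))
```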

We now formulate an IP to express clauses as conjunctions of the original features $j \in J$. Let the decision variable $z_j \in \{0, 1\}$ denote whether feature $j$ is selected in the clause. Let $S_i = \{j \in J : x^i_j = 0\}$ correspond to the zero-valued features in sample $i$. Then the Pricing Problem below identifies the clause missing from the Restricted MLP that has the lowest reduced cost:

$$z_{\mathrm{CG}} \;=\; \min \; \lambda \Big( 1 + \sum_{j \in J} z_j \Big) \;-\; \sum_{i \in \mathcal{P}} \mu_i \delta_i \;+\; \sum_{i \in \mathcal{Z}} \delta_i \tag{6}$$
$$\text{s.t.} \quad \delta_i + z_j \le 1, \quad j \in S_i, \quad 0 \le \delta_i \le 1, \qquad i \in \mathcal{P} \tag{7}$$
$$\delta_i \ge 1 - \sum_{j \in S_i} z_j, \quad \delta_i \ge 0, \qquad i \in \mathcal{Z} \tag{8}$$
$$\sum_{j \in J} z_j \le D \tag{9}$$
$$z_j \in \{0, 1\}, \qquad j \in J \tag{10}$$

The first term in (6) expresses the complexity $\lambda c$ in terms of the number of selected features. Constraints (7), (8) ensure that the clause acts as a conjunction, i.e. a sample satisfies it ($\delta_i = 1$) only if none of its zero-valued features are selected ($z_j = 0$ for all $j \in S_i$). Similar to $\xi$ in the MIP, the $\delta_i$ variables do not have to be explicitly defined as binary due to the objective function. Constraint (9) bounds the number of features allowed in any clause in the rule set. The parameter $D$ can be set to $|J|$ to relax this constraint, or to a smaller number if desired to limit clause complexity.
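A sketch of this Pricing Problem in the same PuLP/CBC setup as before (again an assumption made for illustration; the paper's implementation uses CPLEX):

```python
import numpy as np
import pulp

def pricing(X, y, mu, lam, D=None):
    """Pricing problem (6)-(10): find the minimum-reduced-cost clause."""
    y = np.asarray(y, dtype=bool)
    n, J = X.shape
    D = J if D is None else D
    m = pulp.LpProblem("Pricing", pulp.LpMinimize)
    z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(J)]
    d = [pulp.LpVariable(f"d{i}", 0, 1) for i in range(n)]
    m += (lam * (1 + pulp.lpSum(z))
          - pulp.lpSum(mu[i] * d[i] for i in np.flatnonzero(y))
          + pulp.lpSum(d[i] for i in np.flatnonzero(~y)))      # (6)
    for i in range(n):
        S_i = np.flatnonzero(X[i] == 0)  # zero-valued features of sample i
        if y[i]:
            for j in S_i:
                m += d[i] + z[j] <= 1                          # (7)
        else:
            m += d[i] >= 1 - pulp.lpSum(z[j] for j in S_i)     # (8)
    m += pulp.lpSum(z) <= D                                    # (9)
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    clause = tuple(j for j in range(J) if z[j].value() > 0.5)
    return clause, pulp.value(m.objective)
```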

The optimal solution to the Pricing Problem above gives the clause with the minimum reduced cost among those missing from the Restricted MLP. The reduced cost of this clause equals $z_{\mathrm{CG}}$, and if $z_{\mathrm{CG}} < 0$, the corresponding variable is added to the Restricted MLP. More generally, any feasible solution to the Pricing Problem with a negative objective value gives a clause with negative reduced cost and can therefore be added to the Restricted MLP to improve its value.

2.3 Optimality guarantees and bounds

When the column generation framework described above is repeated until $z_{\mathrm{CG}} \ge 0$, none of the variables missing from the Restricted MLP have negative reduced cost, and the optimal values of the MLP and the Restricted MLP coincide. In addition, if the optimal solution of the Restricted MLP turns out to be integral, then it is also an optimal solution to the MIP, and the MIP is therefore solved to optimality. If the optimal solution of the Restricted MLP is fractional, then one may have to use column generation within an enumeration framework to solve the MIP to optimality. This approach, called branch-and-price colgen98, is quite computationally intensive.

However, even when the optimal solution to the MLP is fractional, $\lceil z_{\mathrm{LP}} \rceil$ provides a lower bound on $z_{\mathrm{MIP}}$, as the objective function (1) has integer coefficients. This lower bound can be compared to the cost of any feasible solution to the MIP; if the latter equals $\lceil z_{\mathrm{LP}} \rceil$, then, once again, the MIP is solved to optimality. As one example, a feasible solution to the MIP can be obtained by solving the Restricted MIP, formed by imposing the integrality constraint (4) on the variables present in the Restricted MLP. More generally, any heuristic method can generate feasible solutions to the MIP.

Finally, we note that even when the MLP is not solved to optimality and the column generation procedure is terminated prematurely, a valid lower bound on $z_{\mathrm{LP}}$ is given by $\hat{z} + (C/2)\, z_{\mathrm{CG}}$, where $\hat{z}$ is the objective value of the last Restricted MLP solved to optimality. This bound follows from the fact that $c_k \ge 2$ for any clause $k$, so at most $C/2$ missing variables, each with reduced cost no less than $z_{\mathrm{CG}}$, can be added to the Restricted MLP VanderbeckWolsey96.
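As a worked example with hypothetical values (not taken from the experiments): if the last Restricted MLP solved to optimality has value $\hat{z} = 40$, the best known reduced cost is $z_{\mathrm{CG}} = -3$, and the complexity budget is $C = 10$, then

$$z_{\mathrm{LP}} \;\ge\; \hat{z} + \frac{C}{2}\, z_{\mathrm{CG}} \;=\; 40 + 5 \cdot (-3) \;=\; 25.$$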

3 Computational Approach

The previous section provides a sound theoretical framework for finding an optimal rule set for the training data. For small datasets, defined loosely as having fewer than a couple of thousand samples and fewer than a few hundred binary (binarized) features (this includes the mushroom and tic-tac-toe UCI datasets appearing in Section 4), it is computationally feasible to employ this optimization framework as described in Section 2. However, to handle larger datasets within a time limit of 10 or 20 minutes, one has to sacrifice the optimality guarantees of the framework. We next describe our computational approach to larger datasets, which can be seen as an optimization-based heuristic. We call a dataset medium if it has more than a couple of thousand samples but fewer than a few hundred binary features, and large if it has many thousands of samples and more than several hundred binary features. This separation into small, medium, and large is based on empirical experiments and is meant to improve the likelihood that the Pricing Problem can produce negative reduced cost solutions.

For medium and large datasets, the number of non-zeros in the Pricing Problem (defined as the sum of the numbers of variables appearing in its constraints) is at least 100,000, and solving this integer program in a reasonable amount of time is not always feasible. Consequently, solving the MLP to proven optimality is unlikely. To deal with this practical issue, we terminate the Pricing Problem when a fixed time limit is exceeded. We use a standard mixed-integer programming solver (CPLEX 12.7.1) to which a time limit can be provided.

While the solver is finding negative reduced cost clauses from the Pricing Problem, the presence of the time limit matters little. If the Pricing Problem is solved to optimality within the time limit, then we obtain a minimum reduced cost clause. Moreover, the solver might discover several negative reduced cost clauses within the time limit and it is possible to recover all these solutions at termination (due to optimality or time limit). To speed up the overall solution process, we add all the negative reduced cost clauses returned by the solver to the Restricted MLP. As long as one variable with a negative reduced cost is obtained, the column generation process continues.

Eventually, the solver will fail to find a negative reduced cost solution within the time limit. If the solver proves that there is no such solution to the Pricing Problem, then the MLP is solved to optimality. However, if non-existence cannot be proved within the time limit, then column generation using the Pricing Problem has to terminate without an optimality guarantee or a valid lower bound on the MIP. In this case, we employ a fast heuristic algorithm to continue to search for negative reduced cost solutions and extend the process.

Our heuristic algorithm explores only clauses with at most $T$ features (a small fixed value in our experiments) and works as follows. We first create all one-term clauses that could potentially be extended to negative reduced cost clauses, and assign each a score equal to the objective of the Pricing Problem applied to that clause. Then, for each clause size $t$ from 1 to $T$, we process all generated clauses with $t$ features in increasing order of score, and for each such clause we create new clauses by appending additional features. Whenever we find a clause with negative reduced cost, we add it to a list of potential solutions; when the enumeration terminates (we impose an upper bound on the number of generated clauses), we return the best clauses found before proceeding to the next value of $t$.
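A sketch of this heuristic, reusing the `reduced_cost` function from Section 2.2; the beam width and clause-size cap below are illustrative values of ours, not the paper's settings:

```python
def greedy_pricing(X, y, mu, lam, T=5, beam=50):
    """Greedy clause search: grow clauses one feature at a time, keeping the
    `beam` best-scoring clauses of each size, up to T features."""
    n_features = X.shape[1]
    frontier = [((j,), reduced_cost(X, y, mu, lam, (j,)))
                for j in range(n_features)]
    found = []
    for _ in range(1, T):
        frontier.sort(key=lambda item: item[1])  # most negative score first
        frontier = frontier[:beam]               # bound the enumeration
        found += [item for item in frontier if item[1] < 0]
        grown = []
        for clause, _ in frontier:
            for j in range(max(clause) + 1, n_features):  # append a feature
                c = clause + (j,)
                grown.append((c, reduced_cost(X, y, mu, lam, c)))
        frontier = grown
    found += [item for item in frontier if item[1] < 0]
    return sorted(found, key=lambda item: item[1])  # best clauses first
```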

In addition to the time limit on the Pricing Problem, we also have a time limit on the overall column generation process. Thus column generation terminates in two cases: 1) when an improving clause cannot be found, either because one is proven not to exist or because one cannot be found within the Pricing Problem time budget and the heuristic also fails to find one, or 2) when the overall time limit is met. At this point, we solve the Restricted MIP (the integral version of the Restricted MLP) using CPLEX, and use the solution as our classifier.

For large datasets, the Pricing Problem can have more than a million non-zeros, and even solving its LP relaxation becomes challenging. In this case the solver can rarely produce any negative reduced cost solutions within the time limit. To deal with this, we formulate an approximate Pricing Problem by randomly selecting a limited number of features and samples. We pick samples uniformly, with a probability that on average yields a formulation with a couple of thousand samples. If the resulting Pricing Problem still has more than a hundred thousand non-zeros, we also limit the candidate features that can form a clause, selecting them uniformly with a probability that yields a formulation with roughly one hundred thousand non-zeros. We also note that for large datasets the Restricted MLP can easily have more than one million non-zeros after several hundred columns have been generated, and it is then faster to solve it with the interior point algorithm in CPLEX instead of simplex.
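A sketch of this subsampling step; the target sizes below mirror the ballpark figures in the text, and the uniform-rate scheme is a simplification of ours:

```python
import numpy as np

def subsample_pricing_data(X, y, rng, target_rows=2000, max_nonzeros=100_000):
    """Randomly thin samples (and, if still too large, features) before
    building an approximate Pricing Problem."""
    n, J = X.shape
    rows = rng.random(n) < min(1.0, target_rows / n)
    Xs, ys = X[rows], np.asarray(y)[rows]
    nonzeros = Xs.shape[0] * J  # crude proxy for formulation non-zeros
    if nonzeros > max_nonzeros:
        cols = np.flatnonzero(rng.random(J) < max_nonzeros / nonzeros)
        return Xs[:, cols], ys, cols  # cols maps clause features back to X
    return Xs, ys, np.arange(J)
```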

4 Numerical Evaluation

Evaluations were conducted on 15 classification datasets from the UCI repository dua2017 that have been used in recent works on rule set/Boolean classifiers malioutov2013; dash2014; su2016; wang2017. Details of how labels were binarized and missing values were treated can be found in the supplementary material (SM). Test performance on all datasets is estimated using stratified k-fold cross-validation (CV).

For comparison with our column generation (CG) algorithm, we considered three recently proposed alternatives that also aim to control rule complexity: Bayesian Rule Sets (BRS) wang2017 and the alternating minimization (AM) and block coordinate descent (BCD) algorithms from su2016. Additional comparisons include the WEKA weka JRip implementation of RIPPER cohen1995, a rule set learner that is still state-of-the-art in accuracy, and the scikit-learn scikit-learn implementations of the decision tree learner CART breiman1984 and Random Forests (RF) breiman2001. The last is an uninterpretable model intended as a benchmark for accuracy. The SM includes further comparisons to logistic regression (LR) and support vector machines (SVM). The parameters of BRS and of FPGrowth borgelt2005, the frequent rule miner that BRS relies on, were set as recommended in wang2017 and the associated code (see SM for details). For AM and BCD, the number of clauses was fixed, with the option to disable unused clauses; initialization and BCD updating are done as in su2016. While both su2016 and our method are equally capable of learning CNF rules, for these experiments we restricted both to learning DNF rules only.

We also experimented with code made available by the authors of lakkaraju2016. Unfortunately, we were unable to execute this code with practical running time once the number of mined candidate rules grew large. Furthermore, the code was primarily designed to handle the interval representation of numerical features rather than threshold comparisons (see the next paragraph). These limitations prevented us from making a full comparison. The SM includes partial results from lakkaraju2016, which are inferior to those from the other methods.

We used standard "dummy"/"one-hot" coding to binarize each categorical variable into multiple indicators, one for each category ($X = x$), as well as their negations ($X \ne x$). For numerical features, there are two common approaches. The first is to discretize by binning into intervals, which are then encoded as above for categorical features. The second is to compare values with a sequence of thresholds, again including negations (e.g. $X \le \theta$ and $X > \theta$ for each threshold $\theta$). For these experiments, we used the second, comparison-based method, as also recommended in wang2017; su2016, with sample deciles as thresholds. Furthermore, features were binarized in the same way for all classifiers in this comparison, all of which rely on discretization (but not for LR and SVM in the SM). Thus the evaluation controls for binarization method in addition to using the same training-test splits for all classifiers.
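For the numerical features, the decile-threshold binarization can be sketched as follows (the function and parameter names are our own):

```python
import numpy as np

def binarize_numeric(col, n_bins=10):
    """Compare a numeric column with its decile thresholds, returning
    indicator columns for X <= t and the negation X > t for each t."""
    qs = np.linspace(1 / n_bins, 1 - 1 / n_bins, n_bins - 1)
    thresholds = np.unique(np.quantile(col, qs))
    le = col[:, None] <= thresholds[None, :]
    return np.hstack([le, ~le]).astype(np.int8)
```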

We first evaluated the accuracy-simplicity trade-offs achieved by our CG algorithm as well as BRS, AM, and BCD, the methods that explicitly perform this trade-off. We used an overall time limit of 300 seconds for training and a time limit of 45 seconds for solving the Pricing Problem in each iteration. As in Section 2, complexity is measured as the number of rules in the rule set plus the total number of conditions in the rules. For each algorithm, the parameter controlling model complexity (the bound $C$ in (3), the regularization parameter in su2016, the multiplier on the prior hyperparameter from wang2017) is varied, resulting in a set of complexity-test accuracy pairs. A sample of these plots is shown in Figure 1, with the full set in the SM. Line segments connect points that are Pareto efficient, i.e., not dominated by solutions that are more accurate and at least as simple, or vice versa. CG dominates the other algorithms on 7 of the 15 datasets in the sense that its Pareto front is consistently higher; it nearly does so on an 8th dataset (tic-tac-toe), and on a 9th (banknote) all algorithms are very similar. BRS, AM, and BCD each achieve (co-)dominance only once or twice, e.g. in Figure 1(c) for AM. Among the cases where CG does not dominate are the highest-dimensional datasets (musk and gas, although for the latter CG does attain the highest accuracy given sufficient complexity) and ones where AM and/or BCD are more accurate at the lowest complexities. BRS solutions tend to cluster in a narrow range despite the prior multiplier being varied over a wide range.

Figure 1: Rule complexity-test accuracy trade-offs on three datasets: (a) heart disease, (b) MAGIC gamma telescope, (c) musk molecules. Pareto efficient points are connected by line segments; horizontal and vertical bars represent standard errors in the means. Overall, the proposed CG algorithm dominates the others on 7 of the 15 datasets (see the SM for the full set).

In a second experiment, nested CV was used to select the complexity bound $C$ for CG and the corresponding regularization parameter for AM and BCD, so as to maximize accuracy on each training set. The selected model was then applied to the test set. In these experiments we used an overall time limit of 120 seconds for each candidate value of $C$, and the time limit for the Pricing Problem was set to 30 seconds. To offset the decrease in the time limit, we performed a second pass for each dataset, solving the Restricted MIP with all the clauses generated over all candidate choices of $C$. Mean test accuracy (over the CV partitions) and rule set complexity are reported in Tables 1 and 2. For BRS, we fixed the prior multiplier, as optimizing it did not improve accuracy on the whole (as can be expected from Figure 1). Tables 1 and 2 also include results from RIPPER, CART, and RF. We tuned the minimum number of samples per leaf for CART and RF, fixed the number of trees for RF, and otherwise kept the default settings. The complexity values for CART result from a straightforward conversion of leaves to rules (for the simpler of the two classes) and are meant only for rough comparison.

Table 1: Mean test accuracy (%, standard error in parentheses) for CG, BRS, AM, BCD, RIPPER, CART, and RF on the 15 datasets (banknote, heart, ILPD, ionosphere, liver, pima, tic-tac-toe, transfusion, WDBC, adult, bank-mkt, gas, magic, mushroom, musk).
Table 2: Mean complexity (# clauses + total # conditions, standard error in parentheses) for CG, BRS, AM, BCD, RIPPER, and CART on the same 15 datasets.

The superiority of CG compared to BRS, AM, and BCD carries over into Table 1, especially for the larger datasets (bottom partition of the table). Compared to RIPPER, which is designed to maximize accuracy, CG is very competitive. The head-to-head "win-loss" record is nearly even, and on no dataset is CG less accurate by a large margin, whereas RIPPER is clearly worse on ionosphere, liver, and tic-tac-toe. Moreover, on larger datasets CG tends to learn significantly simpler rule sets that are nearly as accurate as, or even more accurate than, RIPPER, e.g. on bank-marketing and magic. CART, on the other hand, is less competitive in this experiment.

5 Conclusion

We have developed a column generation algorithm for learning interpretable DNF or CNF classification rules that efficiently searches the space of rules without pre-mining or other restrictions. Experiments have borne out the superiority of the accuracy-rule simplicity trade-offs achieved.

While the results in Table 1 are competitive with RIPPER, in some instances they fall short of the potential suggested by the first accuracy-complexity trade-off experiment. For example, on the heart disease dataset, Figure 1(a) shows a maximum accuracy noticeably higher than the value resulting from CV in Table 1. For small datasets, the challenge is variability in estimating test accuracy. For large datasets, although we have proposed measures such as time limits and sampling to reduce the computational burden, these measures are applied more aggressively during cross-validation, when many more instances need to be solved, thus affecting solution quality. We leave as future work improved procedures for optimizing the complexity parameter $C$ for accuracy.


References

  • (1) Rakesh Agrawal and Ramakrishnan Srikant. Fast algorithms for mining association rules. In Proc. Int. Conf. Very Large Data Bases (VLDB), pages 487–499, 1994.
  • (2) Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, and Cynthia Rudin. Learning certifiably optimal rule lists. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 35–44, 2017.
  • (3) C. Barnhart, E. L. Johnson, G. L. Nemhauser, M. W. F. Savelsbergh, and P. H. Vance. Branch-and-price: Column generation for solving huge integer programs. Operations Research, 46:316–329, 1998.
  • (4) Mokhtar S. Bazaraa, John Jarvis, and Hanif D. Sherali. Linear programming and network flows. Wiley, 2010.
  • (5) Dimitris Bertsimas and Jack Dunn. Optimal classification trees. Mach. Learn., 106(7):1039–1082, July 2017.
  • (6) Christian Borgelt. An implementation of the FP-growth algorithm. In Proc. Workshop on Open Source Data Mining Software (OSDM), pages 1–5, 2005.
  • (7) Endre Boros, Peter L. Hammer, Toshihide Ibaraki, Alexander Kogan, Eddy Mayoraz, and Ilya Muchnik. An implementation of logical analysis of data. IEEE Transactions on Knowledge and Data Engineering, 12(2):292–306, Mar/Apr 2000.
  • (8) Leo Breiman. Random forests. Machine Learning, 45(1):5–32, October 2001.
  • (9) Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. Classification and Regression Trees. Chapman & Hall/CRC, 1984.
  • (10) Guoqing Chen, Hongyan Liu, Lan Yu, Qiang Wei, and Xing Zhang. A new approach to classification based on association rule mining. Decis. Support Syst., 42(2):674–689, November 2006.
  • (11) Hong Cheng, Xifeng Yan, Jiawei Han, and Chih-Wei Hsu. Discriminative frequent pattern analysis for effective classification. In Proc. IEEE Int. Conf. Data Eng. (ICDE), pages 716–725, 2007.
  • (12) Peter Clark and Robin Boswell. Rule induction with CN2: Some recent improvements. In Proceedings of the European Working Session on Machine Learning (EWSL), pages 151–163, 1991.
  • (13) Peter Clark and Tim Niblett. The CN2 induction algorithm. Machine Learning, 3(4):261–283, Mar 1989.
  • (14) William W. Cohen. Fast effective rule induction. In Proc. Int. Conf. Mach. Learn. (ICML), pages 115–123, 1995.
  • (15) William W. Cohen and Yoram Singer. A simple, fast, and effective rule learner. In Proc. Conf. Artif. Intell. (AAAI), pages 335–342, 1999.
  • (16) Michele Conforti, Gerard Cornuejols, and Giacomo Zambelli. Integer programming. Springer, 2014.
  • (17) Sanjeeb Dash, Dmitry M. Malioutov, and Kush R. Varshney. Screening for learning classification rules via Boolean compressed sensing. In Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), pages 3360–3364, 2014.
  • (18) Krzysztof Dembczyński, Wojciech Kotłowski, and Roman Słowiński. ENDER: a statistical framework for boosting decision rules. Data Mining and Knowledge Discovery, 21(1):52–90, Jul 2010.
  • (19) Pedro Domingos. Unifying instance-based and rule-based induction. Mach. Learn., 24(2):141–168, 1996.
  • (20) Dheeru Dua and Efi Karra Taniskidou. UCI machine learning repository, 2017.
  • (21) Vitaly Feldman. Learning DNF expressions from Fourier spectrum. In Proc. Conf. Learn. Theory (COLT), pages 17.1–17.19, 2012.
  • (22) Eibe Frank, Mark A. Hall, and Ian H. Witten. The WEKA workbench. In Online Appendix for "Data Mining: Practical Machine Learning Tools and Techniques". Morgan Kaufmann, 4th edition, 2016.
  • (23) Eibe Frank and Ian H. Witten. Generating accurate rule sets without global optimization. In Proc. Int. Conf. Mach. Learn. (ICML), pages 144–151, 1998.
  • (24) Alex A. Freitas. Comprehensible classification models – a position paper. ACM SIGKDD Explor., 15(1):1–10, 2014.
  • (25) Jerome H. Friedman and Nicholas I. Fisher. Bump hunting in high-dimensional data. Statistics and Computing, 9(2):123–143, April 1999.
  • (26) Jerome H. Friedman and Bogdan E. Popescu. Predictive learning via rule ensembles. Annals of Applied Statistics, 2(3):916–954, Jul 2008.
  • (27) Johannes Fürnkranz, Dragan Gamberger, and Nada Lavrač. Foundations of Rule Learning. Springer-Verlag, Berlin, 2012.
  • (28) Peter L. Hammer and Tibérius O. Bonates. Logical analysis of data—an overview: From combinatorial optimization to medical applications. Annals of Operations Research, 148(1):203–225, Nov 2006.
  • (29) Adam R. Klivans and Rocco A. Servedio. Learning DNF in time $2^{\tilde{O}(n^{1/3})}$. J. Comput. Syst. Sci., 68(2):303–318, March 2004.
  • (30) Himabindu Lakkaraju, Stephen H. Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proc. ACM SIGKDD Int. Conf. Knowl. Disc. Data Mining (KDD), pages 1675–1684, 2016.
  • (31) Himabindu Lakkaraju and Cynthia Rudin. Learning cost-effective and interpretable treatment regimes. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 54, pages 166–175, Fort Lauderdale, FL, USA, 20–22 Apr 2017.
  • (32) Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, and David Madigan. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. Ann. Appl. Stat., 9(3):1350–1371, September 2015.
  • (33) Wenmin Li, Jiawei Han, and Jian Pei. CMAR: accurate and efficient classification based on multiple class-association rules. In Proc. IEEE Int. Conf. Data Min. (ICDM), pages 369–376, 2001.
  • (34) Bing Liu, Wynne Hsu, and Yiming Ma. Integrating classification and association rule mining. In Proc. ACM SIGKDD Int. Conf. Knowl. Disc. Data Min. (KDD), pages 80–86, 1998.
  • (35) Dmitry M. Malioutov and Kush R. Varshney. Exact rule learning via Boolean compressed sensing. In Proc. Int. Conf. Mach. Learn. (ICML), pages 765–773, 2013.
  • (36) Mario Marchand and John Shawe-Taylor. The set covering machine. J. Mach. Learn. Res., 3:723–746, 2002.
  • (37) Marco Muselli and Diego Liberati. Binary rule generation via Hamming clustering. IEEE Transactions on Knowledge and Data Engineering, 14(6):1258–1268, 2002.
  • (38) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • (39) J. Ross Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1993.
  • (40) Ronald L. Rivest. Learning decision lists. Machine Learning, 2(3):229–246, 1987.
  • (41) Steven Salzberg. A nearest hyperrectangle learning method. Mach. Learn., 6(3):251–276, 1991.
  • (42) Guolong Su, Dennis Wei, Kush R. Varshney, and Dmitry M. Malioutov. Learning sparse two-level Boolean rules. In Proc. IEEE Int. Workshop Mach. Learn. Signal Process. (MLSP), pages 1–6, September 2016.
  • (43) Leslie G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134–1142, November 1984.
  • (44) François Vanderbeck and Laurence A. Wolsey. An exact algorithm for IP column generation. Oper. Res. Lett., 19(4):151–159, 1996.
  • (45) Fulton Wang and Cynthia Rudin. Falling rule lists. In Proc. Int. Conf. Artif. Intell. Stat. (AISTATS), pages 1013–1022, 2015.
  • (46) Jianyong Wang and George Karypis. HARMONY: Efficiently mining the best rules for classification. In Proc. SIAM Int. Conf. Data Min. (SDM), pages 205–216, 2005.
  • (47) Tong Wang and Cynthia Rudin. Learning Optimized Or’s of And’s, November 2015. arXiv:1511.02210.
  • (48) Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, and Perry MacNeille. A Bayesian framework for learning rule sets for interpretable classification. Journal of Machine Learning Research, 18(70):1–37, 2017.
  • (49) Hongyu Yang, Cynthia Rudin, and Margo Seltzer. Scalable Bayesian rule lists. In Proc. Int. Conf. Mach. Learn. (ICML), pages 1013–1022, 2017.
  • (50) Xiaoxin Yin and Jiawei Han. CPAR: Classification based on predictive association rules. In Proc. SIAM Int. Conf. Data Min. (SDM), pages 331–335, 2003.

Appendix

Datasets and data processing

The UCI repository datasets were used largely as-is. We note the following deviations and label binarizations:

  • Liver disorders: We used the number of drinks as the output variable, as recommended by the data donors, rather than the selector variable. The number of drinks was binarized by thresholding to form the classification label.

  • Gas sensor array drift: The multi-class gas label was binarized as in dash2014.

  • Heart disease: We used only the Cleveland data and removed samples with 'ca' = ?. The label was binarized as absence (value 0) versus presence (values greater than 0) of disease, as in other works.

BRS parameters

We followed wang2017 and its associated code in setting the parameters of BRS and FPGrowth, the frequent rule miner that BRS relies on: a minimum support and maximum rule length for FPGrowth; a reduction of the mined rules to a fixed number of candidates using information gain (this reduction was triggered in all cases); and, for BRS itself, the prior hyperparameters and the number and length of the simulated annealing chains.

Accuracy-simplicity trade-offs for all datasets

Below, Figures 2 and 3 give the full set of accuracy-simplicity trade-off plots for all 15 datasets, including the three from the main text.

Figure 2: Rule complexity-test accuracy trade-offs for (a) banknote, (b) heart, (c) ILPD, (d) ionosphere, (e) liver, (f) pima, (g) tic-tac-toe, and (h) transfusion. Pareto efficient points are connected by line segments.
Figure 3: Rule complexity-test accuracy trade-offs for (a) WDBC, (b) adult, (c) bank-marketing, (d) gas, (e) magic, (f) mushroom, and (g) musk. Pareto efficient points are connected by line segments.

Results for additional classifiers

As discussed in the main text, we were unable to execute code from the authors of Interpretable Decision Sets (IDS) lakkaraju2016 with practical running time once the number of candidate rules mined by Apriori agrawal1994 grew large. While it is possible to limit this number by increasing the minimum support and decreasing the maximum length parameters of Apriori, we did not go beyond the support and length values used with FPGrowth for BRS, as doing so would severely constrain the resulting candidate rules. Thus we opted to run IDS only on those datasets for which Apriori generated a manageable number of candidates at that minimum support, with the maximum length either matching the BRS setting or left unbounded.

In terms of the settings for IDS itself, we ran a deterministic version of the local search algorithm recommended by the authors. The misclassification costs were set equal for false positives and false negatives, consistent with the other algorithms. For simplicity, the two overlap parameters were set equal to each other and tuned jointly for accuracy. The parameter penalizing the number of classes per rule was disabled, as it is not necessary for binary classification. Lastly, the weights on the number of rules and on their total length were set equal to each other to reflect the choice of complexity metric as the number of rules plus the sum of their lengths. We then varied this common weight over a range to trade accuracy against complexity.

Our partial results for IDS are shown in Tables 3 and 5. Despite "cheating" in the sense of choosing the tuned parameters to maximize accuracy after all the test results were known, the performance is not competitive with the other rule set algorithms on most datasets. In addition to the constraints placed on Apriori, we suspect that another reason is that the IDS implementation available to us is designed primarily for the interval representation of numerical features (see Section 4) and is not easily adapted to handle the alternative representation.

Table 3: Mean test accuracy for rule set classifiers (%, standard error in parentheses): CG, BRS, AM, BCD, IDS, and RIPPER; IDS results are reported only for the datasets on which it could be run (banknote, ILPD, liver, pima, transfusion, WDBC, and magic).

In Table 4, accuracy results of logistic regression (LR) and support vector machine (SVM) classifiers are included along with those of non-rule set classifiers from the main text (CART and RF). Although LR is a generalized linear model, it may not be regarded as interpretable in many application domains.

Table 4: Mean test accuracy for other classifiers (%, standard error in parentheses): CART, RF, LR, and SVM on the 15 datasets.
Table 5: Mean complexity (# clauses + total # conditions, standard error in parentheses) for CG, BRS, AM, BCD, IDS, RIPPER, and CART; IDS results are again limited to the datasets on which it could be run.