Interpretable Classification Models for Recidivism Prediction

by Jiaming Zeng, et al.

We investigate a long-debated question: how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable to use for decision-making. This question is complicated because these models are used to support different decisions, from sentencing, to determining release on probation, to allocating preventative social services. Each use case may have an objective other than classification accuracy, such as a desired true positive rate (TPR) or false positive rate (FPR). Each (TPR, FPR) pair is a point on the receiver operating characteristic (ROC) curve. We use popular machine learning methods to create models along the full ROC curve on a wide range of recidivism prediction problems. We show that many methods (SVM, Ridge Regression) produce equally accurate models along the full ROC curve. However, methods designed for interpretability (CART, C5.0) cannot be tuned to produce models that are both accurate and interpretable. To handle this shortcoming, we use a new method known as SLIM (Supersparse Linear Integer Models) to produce accurate, transparent, and interpretable models along the full ROC curve. These models can be used for decision-making in many different use cases, since they are just as accurate as the most powerful black-box machine learning models, yet completely transparent and highly interpretable.
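To make the abstract's key ideas concrete, here is a minimal sketch of a SLIM-style model: a sparse linear score with small integer coefficients, where varying the decision threshold moves the model to a different (TPR, FPR) point on the ROC curve. The feature names and weights below are invented for illustration and are not taken from the paper.

```python
# Hypothetical SLIM-style scoring table: a few binary features,
# each with a small integer weight (weights are illustrative only).
WEIGHTS = {"prior_arrests>=2": 2, "age<=25": 1, "employed": -1}

def score(features):
    """Integer score: sum of weights for the features that are present."""
    return sum(w for name, w in WEIGHTS.items() if features.get(name))

def tpr_fpr(examples, labels, threshold):
    """Operating point (TPR, FPR) when predicting 1 iff score >= threshold."""
    tp = fp = pos = neg = 0
    for x, y in zip(examples, labels):
        pred = score(x) >= threshold
        if y == 1:
            pos += 1
            tp += pred
        else:
            neg += 1
            fp += pred
    return tp / pos, fp / neg
```

Sweeping `threshold` over the possible integer scores traces out the model's ROC curve, which is how one model family can be tuned toward a use case's desired TPR or FPR.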




