Interpretable Classification Models for Recidivism Prediction

03/26/2015
by Jiaming Zeng, et al.

We investigate a long-debated question: how to create predictive models of recidivism that are sufficiently accurate, transparent, and interpretable to use for decision-making. This question is complicated because these models are used to support different decisions, from sentencing, to determining release on probation, to allocating preventative social services. Each use case may have an objective other than classification accuracy, such as a desired true positive rate (TPR) or false positive rate (FPR). Each (TPR, FPR) pair is a point on the receiver operating characteristic (ROC) curve. We use popular machine learning methods to create models along the full ROC curve on a wide range of recidivism prediction problems. We show that many methods (SVM, Ridge Regression) produce equally accurate models along the full ROC curve. However, methods designed for interpretability (CART, C5.0) cannot be tuned to produce models that are both accurate and interpretable. To handle this shortcoming, we use a new method known as SLIM (Supersparse Linear Integer Models) to produce accurate, transparent, and interpretable models along the full ROC curve. These models can be used for decision-making across many different use cases, since they are just as accurate as the most powerful black-box machine learning models, yet completely transparent and highly interpretable.
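To make the abstract's terms concrete, here is a minimal sketch of a SLIM-style scoring model and the (TPR, FPR) computation that defines a point on the ROC curve. The feature names and integer weights are invented for illustration; they are not coefficients from the paper.

```python
# Hypothetical SLIM-style scoring system: a sparse linear model with small
# integer weights, so a user can score an individual by simple addition.
WEIGHTS = {
    "prior_arrests>=2": 2,   # illustrative features and weights,
    "age<=25": 1,            # NOT the models learned in the paper
    "any_juvenile_record": 1,
    "employed": -1,
}

def slim_score(features):
    """Sum integer weights over the binary indicators that fire."""
    return sum(w * features.get(name, 0) for name, w in WEIGHTS.items())

def predict(features, threshold=2):
    """Predict recidivism if the score meets the threshold.
    Sweeping the threshold traces out different (TPR, FPR) operating points."""
    return int(slim_score(features) >= threshold)

def tpr_fpr(y_true, y_pred):
    """Compute one (TPR, FPR) point on the ROC curve from labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)
```

In SLIM, the integer weights themselves are chosen by solving an integer program that trades off accuracy against sparsity; the sketch above only shows how such a finished scoring system is applied.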

