A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations

06/04/2021
by Chaofan Chen, et al.

Lending decisions are usually made with proprietary models that provide minimally acceptable explanations to users. In a future world without such secrecy, what decision support tools would one want to use for justified lending decisions? This question is timely, since the economy has dramatically shifted due to a pandemic, and a massive number of new loans will be necessary in the short term. We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision. The machine learning model is a two-layer additive risk model, which resembles a two-layer neural network, but is decomposable into subscales. In this model, each node in the first (hidden) layer represents a meaningful subscale model, and all of the nonlinearities are transparent. Our online visualization tool allows exploration of this model, showing precisely how it came to its conclusion. We provide three types of explanations that are simpler than, but consistent with, the global model: case-based reasoning explanations that use neighboring past cases, a set of features that were the most important for the model's prediction, and summary-explanations that provide a customized sparse explanation for any particular lending decision made by the model. Our framework earned the FICO recognition award for the Explainable Machine Learning Challenge, which was the first public challenge in the domain of explainable machine learning.
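The abstract describes the two-layer additive risk model only at a high level. As a rough illustration of the architecture it sketches, the Python/NumPy snippet below shows one way such a model can be organized: each hidden node is a sparse linear "subscale" model over its own group of features, squashed to a risk score, and the second layer additively combines the subscale scores. The class name, feature groups, and weights here are illustrative assumptions, not the paper's actual subscales or fitted parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TwoLayerAdditiveRiskModel:
    """Minimal sketch of a two-layer additive risk model (illustrative only).

    Layer 1: each hidden node is a sparse linear subscale model over its own
    group of input features, squashed to a (0, 1) risk score.
    Layer 2: a linear combination of the subscale scores, squashed again to
    give the overall predicted risk. Every step is additive, so a prediction
    decomposes into per-subscale contributions.
    """

    def __init__(self, feature_groups, subscale_weights, subscale_biases,
                 top_weights, top_bias):
        self.feature_groups = feature_groups      # list of index arrays
        self.subscale_weights = subscale_weights  # list of 1-D weight arrays
        self.subscale_biases = subscale_biases    # list of scalars
        self.top_weights = top_weights            # 1-D array, one per subscale
        self.top_bias = top_bias

    def subscale_scores(self, x):
        """Risk score of every subscale for one applicant x (1-D array)."""
        return np.array([
            sigmoid(w @ x[idx] + b)
            for idx, w, b in zip(self.feature_groups,
                                 self.subscale_weights,
                                 self.subscale_biases)
        ])

    def predict_risk(self, x):
        """Overall risk, plus the per-subscale contributions whose sum
        (together with the bias) is the second-layer pre-activation."""
        scores = self.subscale_scores(x)
        contributions = self.top_weights * scores
        risk = sigmoid(contributions.sum() + self.top_bias)
        return risk, scores, contributions


# Toy usage with made-up weights (not the paper's fit):
model = TwoLayerAdditiveRiskModel(
    feature_groups=[np.array([0, 1]), np.array([2, 3, 4])],
    subscale_weights=[np.array([0.8, -0.5]), np.array([0.3, 0.4, -0.2])],
    subscale_biases=[0.1, -0.2],
    top_weights=np.array([1.5, 2.0]),
    top_bias=-1.0,
)
risk, scores, contribs = model.predict_risk(np.array([1.0, 0.0, 2.0, 1.0, 3.0]))
print(risk, scores, contribs)
```

Because the second layer is additive, sorting a given applicant's per-subscale contributions yields a simple "most important subscales" summary, in the spirit of the feature-importance explanations mentioned in the abstract.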

Related research

11/30/2018 · An Interpretable Model with Globally Consistent Explanations for Credit Risk
We propose a possible solution to a public challenge posed by the Fair I...

05/17/2023 · Unveiling the Potential of Counterfactual Explanations in Employability
In eXplainable Artificial Intelligence (XAI), counterfactual explanation...

07/21/2020 · Melody: Generating and Visualizing Machine Learning Model Summary to Understand Data and Classifiers Together
With the increasing sophistication of machine learning models, there are...

02/09/2023 · Explaining with Greater Support: Weighted Column Sampling Optimization for q-Consistent Summary-Explanations
Machine learning systems have been extensively used as auxiliary tools i...

06/05/2019 · Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning
Using machine learning in high-stakes applications often requires predic...

09/15/2020 · Interpretable and Interactive Summaries of Actionable Recourses
As predictive models are increasingly being deployed in high-stakes deci...

05/23/2017 · Towards Interrogating Discriminative Machine Learning Models
It is oftentimes impossible to understand how machine learning models re...
