Generalized Gloves of Neural Additive Models: Pursuing transparent and accurate machine learning models in finance

09/21/2022
by Dangxing Chen, et al.

For many years, machine learning methods have been used in a wide range of fields, including computer vision and natural language processing. While machine learning methods have significantly improved model performance over traditional methods, their black-box structure makes it difficult for researchers to interpret results. For highly regulated financial industries, transparency, explainability, and fairness are as important as, if not more important than, accuracy. Without meeting regulatory requirements, even highly accurate machine learning methods are unlikely to be accepted. We address this issue by introducing a novel class of transparent and interpretable machine learning algorithms known as generalized gloves of neural additive models. The generalized gloves of neural additive models separate features into three categories: linear features, individual nonlinear features, and interacted nonlinear features. Additionally, interactions in the last category are only local. The linear and nonlinear components are distinguished by a stepwise selection algorithm, and interacted groups are carefully verified by applying additive separation criteria. Empirical results demonstrate that generalized gloves of neural additive models provide optimal accuracy with the simplest architecture, allowing for a highly accurate, transparent, and explainable approach to machine learning.
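
The abstract describes the architecture only at a high level. As a minimal illustrative sketch (not the authors' implementation), a three-part additive network of this kind could be assembled in PyTorch roughly as follows; the class names, layer sizes, and the example feature partition are hypothetical, and the stepwise linear/nonlinear selection and the verification of interaction groups are assumed to have already been performed:

```python
import torch
import torch.nn as nn


class ShapeNet(nn.Module):
    """Small MLP modeling the nonlinear shape function of one feature
    or of one small group of locally interacting features."""

    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)


class GeneralizedGloveNAM(nn.Module):
    """Additive model with three kinds of components: a linear layer over
    the linear features, one subnetwork per individual nonlinear feature,
    and one subnetwork per group of interacting nonlinear features.
    The prediction is the sum of all component outputs plus a global bias."""

    def __init__(self, linear_idx, nonlinear_idx, interaction_groups):
        super().__init__()
        self.linear_idx = list(linear_idx)
        self.nonlinear_idx = list(nonlinear_idx)
        self.interaction_groups = [list(g) for g in interaction_groups]

        self.linear = nn.Linear(len(self.linear_idx), 1, bias=False)
        self.shape_nets = nn.ModuleList(
            [ShapeNet(1) for _ in self.nonlinear_idx])
        self.group_nets = nn.ModuleList(
            [ShapeNet(len(g)) for g in self.interaction_groups])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Linear part plus global bias.
        out = self.linear(x[:, self.linear_idx]) + self.bias
        # One shape function per individual nonlinear feature.
        for net, j in zip(self.shape_nets, self.nonlinear_idx):
            out = out + net(x[:, [j]])
        # One joint subnetwork per verified group of interacting features.
        for net, g in zip(self.group_nets, self.interaction_groups):
            out = out + net(x[:, g])
        return out.squeeze(-1)


# Hypothetical partition: features 0-1 linear, 2-3 individually nonlinear,
# and (4, 5) a verified interacting group.
model = GeneralizedGloveNAM(
    linear_idx=[0, 1], nonlinear_idx=[2, 3], interaction_groups=[[4, 5]])
x = torch.randn(8, 6)
print(model(x).shape)  # torch.Size([8])
```

Because the output is simply the sum of the linear term, the per-feature shape functions, and the per-group interaction functions, each component can be plotted and inspected on its own, which is what makes this kind of additive architecture transparent.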

Related research

Monotonic Neural Additive Models: Pursuing Regulated Machine Learning Models for Credit Scoring (09/21/2022)
The forecasting of credit default risk has been an active research field...

How to address monotonicity for model risk management? (04/28/2023)
In this paper, we study the problem of establishing the accountability a...

Interpretable Classification Models for Recidivism Prediction (03/26/2015)
We investigate a long-debated question, which is how to create predictiv...

NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning (06/03/2021)
Deployment of machine learning models in real high-risk settings (e.g. h...

Making learning more transparent using conformalized performance prediction (07/09/2020)
In this work, we study some novel applications of conformal inference te...

Regionally Additive Models: Explainable-by-design models minimizing feature interactions (09/21/2023)
Generalized Additive Models (GAMs) are widely used explainable-by-design...

Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning (11/11/2022)
We introduce a family of interpretable machine learning models, with two...
