How to address monotonicity for model risk management?

04/28/2023
by   Dangxing Chen, et al.

In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although individual monotonicity has been studied extensively, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. To achieve monotonicity while maintaining transparency, we propose monotonic groves of neural additive models. Through empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair.
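The abstract distinguishes individual monotonicity (a score that never decreases as one feature grows) from pairwise monotonicity (an equal increase in one feature moves the score at least as much as in another). As a rough illustration of these notions only — not the paper's actual method or verification procedure — the sketch below runs a finite-difference probe on a generic scoring model; the model, features, and step size are all hypothetical.

```python
import numpy as np

def check_individual_monotonicity(model, X, i, eps=1e-3, increasing=True):
    """Finite-difference probe: does nudging feature i upward ever lower the score?

    `model` is any callable mapping an (n, d) array to (n,) scores.
    This is an illustrative empirical check, not a formal certificate.
    """
    X_up = X.copy()
    X_up[:, i] += eps
    diffs = model(X_up) - model(X)
    return bool(np.all(diffs >= 0)) if increasing else bool(np.all(diffs <= 0))

def check_strong_pairwise_monotonicity(model, X, i, j, eps=1e-3):
    """Strong pairwise probe: an equal increase in feature i should raise the
    score at least as much as the same increase in feature j, at every point."""
    Xi, Xj = X.copy(), X.copy()
    Xi[:, i] += eps
    Xj[:, j] += eps
    return bool(np.all(model(Xi) >= model(Xj)))

# Toy additive scorer (hypothetical): 2*x0 + 0.25*x1**2.
# It is monotone in x0 everywhere, but not in x1 around zero.
model = lambda X: 2 * X[:, 0] + 0.25 * X[:, 1] ** 2
X = np.random.default_rng(0).uniform(-1.0, 1.0, size=(200, 2))

print(check_individual_monotonicity(model, X, i=0))        # True
print(check_individual_monotonicity(model, X, i=1))        # False
print(check_strong_pairwise_monotonicity(model, X, 0, 1))  # True
```

Such pointwise probes only sample violations on the given data; the paper's contribution is instead to build models whose additive structure makes monotonicity hold by construction.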


