
Explainable Machine Learning for Fraud Detection

05/13/2021
by Ismini Psychoula, et al.

The application of machine learning to the processing of large datasets holds promise in many industries, including financial services. However, practical barriers to full adoption remain, chiefly the need to understand and explain the decisions and predictions made by complex models. In this paper, we explore explainability methods in the domain of real-time fraud detection by investigating the selection of appropriate background datasets and the runtime trade-offs involved, on both supervised and unsupervised models.
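
The paper itself includes no code, but a minimal sketch of the kind of comparison it describes might look like the following. It assumes a SHAP KernelExplainer setup (the paper's exact explainability methods may differ): a fraud-style classifier is explained against two candidate background datasets of different sizes, and the wall-clock time of each explanation run is recorded. The model, features, and data below are synthetic placeholders, not the authors' experiments.

# Minimal sketch (not the authors' code): comparing background-dataset
# choices and explanation runtime for SHAP on a fraud-style classifier.
import time

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an imbalanced transaction dataset.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.97, 0.03], random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

X_explain = X[:20]  # transactions to explain

# Candidate background datasets: a larger random sample is usually more
# representative, while a small k-means summary keeps explanations fast.
backgrounds = {
    "random_sample_100": shap.sample(X, 100, random_state=0),
    "kmeans_summary_10": shap.kmeans(X, 10),
}

for name, background in backgrounds.items():
    explainer = shap.KernelExplainer(model.predict_proba, background)
    start = time.perf_counter()
    shap_values = explainer.shap_values(X_explain, nsamples=100)
    elapsed = time.perf_counter() - start
    print(f"{name}: explained {len(X_explain)} transactions in {elapsed:.1f}s")

In a setup like this, the trade-off the abstract alludes to shows up as a choice between a more faithful but slower background sample and a compact summary that keeps per-transaction explanation latency low enough for real-time use.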

Related Research
How Much Can We See? A Note on Quantifying Explainability of Machine Learning Models (10/29/2019)
One of the most popular approaches to understanding feature effects of m...

Quality-Efficiency Trade-offs in Machine Learning for Text Processing (11/07/2017)
Data mining, machine learning, and natural language processing are power...

Towards Modular Machine Learning Solution Development: Benefits and Trade-offs (01/23/2023)
Machine learning technologies have demonstrated immense capabilities in ...

Towards Explainability of Machine Learning Models in Insurance Pricing (03/24/2020)
Machine learning methods have garnered increasing interest among actuari...

Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning (05/05/2021)
Explainable machine learning has become increasingly prevalent, especial...

Entropic Variable Boosting for Explainability and Interpretability in Machine Learning (10/18/2018)
In this paper, we present a new explainability formalism to make clear t...

The Trade-offs of Domain Adaptation for Neural Language Models (09/21/2021)
In this paper, we connect language model adaptation with concepts of mac...