Self-Reinforcement Attention Mechanism For Tabular Learning

05/19/2023
by   Kodjo Mawuena Amekoe, et al.

Beyond high accuracy, what interests many researchers working on real-life problems (e.g., fraud detection, credit scoring) is finding hidden patterns in the data, particularly given its challenging imbalanced characteristics. Interpretability is also a key requirement of the machine learning model used. In this regard, intrinsically interpretable models are often preferred to complex ones, which are in most cases black boxes; likewise, linear models are still used on tabular data in some high-risk fields, even at the cost of performance. In this paper, we introduce Self-Reinforcement Attention (SRA), a novel attention mechanism that produces a feature-relevance weight vector used to learn an intelligible representation. These weights reinforce or attenuate components of the raw input through element-wise vector multiplication. Our results on synthetic and real-world imbalanced data show that the proposed SRA block is effective in end-to-end combination with baseline models.
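The core operation the abstract describes can be sketched in a few lines: compute a relevance weight per feature, then multiply it element-wise into the raw input. This is a minimal illustrative sketch only; the paper's exact SRA architecture is not reproduced here, and the linear-plus-sigmoid scoring, the names `sra_block`, `W`, and `b`, and the toy shapes are all assumptions.

```python
import numpy as np

def sra_block(x, W, b):
    """Hedged sketch of an SRA-style block (not the paper's exact design).

    x : (n_samples, n_features) raw tabular input
    W, b : parameters of an assumed linear scoring of the input

    Returns the reinforced representation and the relevance weights.
    """
    # Relevance weights in (0, 1), one per feature and sample;
    # sigmoid scoring is an assumption for illustration.
    a = 1.0 / (1.0 + np.exp(-(x @ W + b)))
    # Element-wise reinforcement/attenuation of the raw input.
    return a * x, a

# Toy usage: 2 samples, 3 features.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3))
W = rng.normal(size=(3, 3))
b = np.zeros(3)
out, weights = sra_block(x, W, b)
```

Because the weights live in (0, 1) and multiply the input directly, they can be read as per-feature relevance scores, which is what makes the learned representation intelligible.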


Related research:

- 09/28/2020: Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring
- 06/27/2020: Causality Learning: A New Perspective for Interpretable Machine Learning
- 06/16/2016: Model-Agnostic Interpretability of Machine Learning
- 09/03/2021: Building Interpretable Models for Business Process Prediction using Shared and Specialised Attention Mechanisms
- 06/18/2021: It's FLAN time! Summing feature-wise latent representations for interpretability
- 04/04/2022: Using Explainable Boosting Machine to Compare Idiographic and Nomothetic Approaches for Ecological Momentary Assessment Data
- 09/24/2021: AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses
