Discriminatory Expressions to Produce Interpretable Models in Microblogging Context

11/27/2020
by Manuel Francisco, et al.

Social Networking Sites (SNS) are one of the most important means of communication. In particular, microblogging sites are widely used as analysis avenues due to their peculiarities (promptness, short texts, etc.). Countless studies use SNS in novel ways, but machine learning (ML) work has focused mainly on classification performance rather than interpretability or other goodness metrics. As a result, state-of-the-art models are black boxes that should not be used to solve problems that may have a social impact. When the problem requires transparency, it is necessary to build interpretable pipelines. Arguably, the most decisive component in the pipeline is the classifier, but it is not the only one to consider: even when the classifier itself is interpretable, the resulting models are often too complex to be considered comprehensible, making it impossible for humans to understand the actual decisions. The purpose of this paper is to present a feature selection mechanism (the first step in the pipeline) that improves comprehensibility by using fewer but more meaningful features, while achieving good performance in microblogging contexts where interpretability is mandatory. Moreover, we present a ranking method to evaluate features in terms of statistical relevance and bias. We conducted exhaustive tests with five different datasets in order to evaluate classification performance, generalisation capacity and the actual interpretability of the model. Our results show that our proposal performs better and is, by far, the most stable in terms of accuracy, generalisation and comprehensibility.
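The abstract mentions ranking features by statistical relevance before classification. As a generic illustration only (not the paper's actual mechanism), the sketch below ranks terms in a toy microblog corpus with a chi-squared score, a common statistical-relevance criterion for text feature selection; the corpus and function names are invented for the example.

```python
from collections import Counter

# Toy microblog corpus: (text, label) pairs. Purely illustrative data.
docs = [
    ("cheap pills buy now", 1),
    ("buy cheap watches now", 1),
    ("meeting moved to friday", 0),
    ("lunch plans for friday", 0),
]

def chi2_scores(docs):
    """Chi-squared score of each term against the binary label."""
    n = len(docs)
    n_pos = sum(lab for _, lab in docs)
    n_neg = n - n_pos
    # Document frequency of each term per class.
    df_pos, df_neg = Counter(), Counter()
    for text, lab in docs:
        for term in set(text.split()):
            (df_pos if lab else df_neg)[term] += 1
    scores = {}
    for term in set(df_pos) | set(df_neg):
        a = df_pos[term]   # positive docs containing the term
        b = df_neg[term]   # negative docs containing the term
        c = n_pos - a      # positive docs without the term
        d = n_neg - b      # negative docs without the term
        num = n * (a * d - b * c) ** 2
        den = (a + b) * (c + d) * (a + c) * (b + d)
        scores[term] = num / den if den else 0.0
    return scores

# Keep only the highest-scoring (most class-discriminative) terms.
ranked = sorted(chi2_scores(docs).items(), key=lambda kv: -kv[1])
```

Selecting only the top-ranked terms yields a smaller, more meaningful vocabulary, which is the kind of reduction that makes the downstream model easier for humans to inspect.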

