Prediction Uncertainty Estimation for Hate Speech Classification

09/16/2019
by   Kristian Miok, et al.

With the rise of social networks, the hate speech phenomenon has grown significantly in recent years. Because of its harmful effects on minority groups and on large communities alike, there is a pressing need for hate speech detection and filtering. However, automatic approaches must not jeopardize free speech, so they should accompany their decisions with explanations and an assessment of uncertainty. There is thus a need for predictive machine learning models that not only detect hate speech but also help users understand when texts cross the line and become unacceptable. The reliability of predictions is rarely addressed in text classification. We fill this gap by adapting deep neural networks so that they can efficiently estimate prediction uncertainty. To detect hate speech reliably, we use Monte Carlo dropout regularization, which mimics Bayesian inference within neural networks. We evaluate our approach with several text embedding methods and visualize the reliability of the results with a novel technique that aids in understanding classification reliability and errors.
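The core idea of Monte Carlo dropout is to keep dropout active at prediction time and run many stochastic forward passes: the mean over passes gives the class probabilities, and the spread gives a per-example uncertainty estimate. A minimal NumPy sketch of this procedure (toy weights, layer sizes, and dropout rate are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier with fixed weights, standing in for a trained
# network over text embeddings (sizes chosen for illustration only).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 2))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays ON at prediction time."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # fresh random dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout rescaling
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax probabilities

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; the spread across passes is the
    predictive uncertainty (an approximation to Bayesian inference)."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 8))                  # one embedded "document"
mean, std = mc_dropout_predict(x)
print("class probabilities:", mean.round(3))
print("predictive std dev :", std.round(3))
```

A confidently classified text yields a low standard deviation across passes, while borderline texts produce high variance, which is the signal the paper proposes surfacing to users.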

