Explaining Hate Speech Classification with Model Agnostic Methods

05/30/2023
by Durgesh Nandini, et al.

There have been remarkable breakthroughs in Machine Learning and Artificial Intelligence, notably in Natural Language Processing and Deep Learning. With the increased use of social media, hate speech detection in dialogues has also gained popularity among Natural Language Processing researchers. At the same time, recent trends have underlined the need for explainability and interpretability in AI models. Taking note of these factors, the research goal of this paper is to bridge the gap between hate speech prediction and the explanations a system generates to support its decision. We achieve this by first classifying a text and then applying a post-hoc, model-agnostic, surrogate interpretability approach to explain the prediction and guard against model bias. The bidirectional transformer model BERT is used for prediction because of its state-of-the-art performance over other Machine Learning models. The model-agnostic algorithm LIME generates explanations for the output of the trained classifier and identifies the features that influence the model's decision. The predictions generated by the model were evaluated manually, and after thorough evaluation we observed that the model performs efficiently both in predicting and in explaining its predictions. Lastly, we suggest further directions for extending the presented work.
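As a rough illustration of the pipeline described above, the sketch below pairs a BERT sequence classifier with LIME's text explainer. The checkpoint name, the binary label order, and the sampling parameters are assumptions made for illustration; the paper's own fine-tuned model and training data are not reproduced here.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from lime.lime_text import LimeTextExplainer

# Hypothetical checkpoint; in the paper, BERT is fine-tuned on a hate speech corpus.
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_proba(texts):
    """LIME calls this with a list of perturbed strings and expects
    an (n_samples, n_classes) array of class probabilities."""
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return F.softmax(logits, dim=-1).numpy()

# Assumed label order: index 0 = non-hate, index 1 = hate.
explainer = LimeTextExplainer(class_names=["non-hate", "hate"])
explanation = explainer.explain_instance(
    "an input utterance to be classified",  # placeholder text
    predict_proba,
    num_features=6,    # top contributing tokens to report
    num_samples=500,   # perturbed samples LIME draws around the input
)
# Each pair is (token, weight); positive weights push toward the "hate" class.
print(explanation.as_list())
```

One practical note on the design: `num_samples` trades explanation fidelity against runtime, since every perturbed sample requires a forward pass through BERT.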
