Explainable Misinformation Detection Across Multiple Social Media Platforms

03/20/2022
by Rahee Walambe, et al.

In this work, the integration of two machine learning approaches, namely domain adaptation and explainable AI, is proposed to address the twin issues of generalized detection and explainability. First, a Domain Adversarial Neural Network (DANN) is used to develop a generalized misinformation detector across multiple social media platforms; the DANN produces classification results for target domains containing relevant but unseen data. Because the DANN-based model is a traditional black-box model, it cannot justify its outcomes, i.e., the labels it assigns to the target domain. Hence, the Local Interpretable Model-Agnostic Explanations (LIME) explainable AI method is applied to explain the outcomes of the DANN model. To demonstrate these two approaches and their integration for effective, explainable, generalized detection, COVID-19 misinformation is considered as a case study. We experimented with two datasets, namely CoAID and MiSoVac, and compared results with and without the DANN implementation. DANN significantly improves the F1 classification score and increases accuracy and AUC performance. The results show that the proposed framework performs well under domain shift and can learn domain-invariant features, while the LIME implementation explains the target labels, enabling trustworthy information processing and extraction to combat misinformation effectively.
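The abstract does not include code, but the DANN idea it describes can be sketched. The hypothetical PyTorch example below shows a shared feature extractor feeding a label predictor (real vs. misinformation) and, through a gradient-reversal layer, a domain classifier, so the learned features stay useful for the label task while becoming uninformative about the source platform. All class names, dimensions, and the `GradReverse`/`DANNTextClassifier` modules are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a DANN-style misinformation detector (not the authors' code).
import torch
from torch import nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; reverses and scales gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the feature extractor to confuse the domain classifier.
        return grad_output.neg() * ctx.lambd, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DANNTextClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64, n_domains=2):
        super().__init__()
        # Shared feature extractor: bag-of-embeddings followed by a small MLP.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.feature = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # Label predictor: real vs. misinformation.
        self.label_head = nn.Linear(hidden_dim, 2)
        # Domain classifier: which platform/dataset the post came from.
        self.domain_head = nn.Linear(hidden_dim, n_domains)

    def forward(self, token_ids, offsets, lambd=1.0):
        feats = self.feature(self.embedding(token_ids, offsets))
        label_logits = self.label_head(feats)
        # Gradient reversal drives the shared features toward domain invariance.
        domain_logits = self.domain_head(grad_reverse(feats, lambd))
        return label_logits, domain_logits


if __name__ == "__main__":
    model = DANNTextClassifier()
    # Two toy "posts" packed as a flat tensor of token ids plus offsets (EmbeddingBag format).
    token_ids = torch.randint(0, 20000, (12,))
    offsets = torch.tensor([0, 7])
    label_logits, domain_logits = model(token_ids, offsets, lambd=0.5)
    loss = nn.functional.cross_entropy(label_logits, torch.tensor([0, 1])) \
        + nn.functional.cross_entropy(domain_logits, torch.tensor([0, 1]))
    loss.backward()
```

Once such a detector is trained, an explanation for an individual post could be produced with LIME's text explainer (`lime.lime_text.LimeTextExplainer`) wrapped around a predict-probability function for the model; the specific wrapper is again an assumption rather than the authors' pipeline.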

Related research:

06/26/2022  Explainable and High-Performance Hate and Offensive Speech Detection
The spread of information through social media platforms can create envi...

05/20/2022  Explainable Supervised Domain Adaptation
Domain adaptation techniques have contributed to the success of deep lea...

03/22/2023  Interpretable Bangla Sarcasm Detection using BERT and Explainable AI
A positive phrase or a sentence with an underlying negative motive is us...

02/15/2023  Streamlining models with explanations in the learning loop
Several explainable AI methods allow a Machine Learning user to get insi...

03/07/2023  SemEval-2023 Task 10: Explainable Detection of Online Sexism
Online sexism is a widespread and harmful phenomenon. Automated tools ca...

03/13/2020  Explainable Deep Classification Models for Domain Generalization
Conventionally, AI models are thought to trade off explainability for lo...

07/21/2020  Explainable Rumor Detection using Inter and Intra-feature Attention Networks
With social media becoming ubiquitous, information consumption from this...