Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments

12/12/2022
by Zhivar Sourati, et al.

The spread of misinformation, propaganda, and flawed argumentation has been amplified in the Internet era. Given the volume of data and the subtlety of identifying violations of argumentation norms, it is essential to support information analytics tasks, like content moderation, with trustworthy methods that can identify logical fallacies. In this paper, we formalize prior theoretical work on logical fallacies into a comprehensive three-stage evaluation framework of detection, coarse-grained classification, and fine-grained classification. We adapt existing datasets for each stage of the evaluation. We devise three families of robust and explainable methods based on prototype reasoning, instance-based reasoning, and knowledge injection. The methods are designed to combine language models with background knowledge and explainable mechanisms. Moreover, we address data sparsity with strategies for data augmentation and curriculum learning. Our three-stage framework natively consolidates prior datasets and methods from existing tasks, like propaganda detection, serving as an overarching evaluation testbed. We extensively evaluate these methods on our datasets, focusing on their robustness and explainability. Our results provide insight into the strengths and weaknesses of the methods on different components and fallacy classes, indicating that fallacy identification is a challenging task that may require specialized forms of reasoning to capture the various classes. We share our open-source code and data on GitHub to support further work on logical fallacy identification.
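The three-stage framework and the instance-based reasoning family described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bag-of-words embedding stands in for a language-model encoder, and the class labels and example sentences are invented for demonstration. The key idea shown is that an instance-based classifier returns the retrieved training example alongside its prediction, which serves as a human-readable explanation.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding (a stand-in for a language-model encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class InstanceBasedClassifier:
    """1-nearest-neighbour classifier whose retrieved training instance
    doubles as an explanation for the prediction."""

    def __init__(self, examples):
        # examples: list of (text, label) pairs
        self.examples = [(embed(t), label, t) for t, label in examples]

    def predict(self, text):
        q = embed(text)
        _, label, neighbour = max(
            (cosine(q, e), lab, t) for e, lab, t in self.examples
        )
        return label, neighbour  # prediction plus supporting instance

def classify(text, detector, coarse, fine):
    """Three-stage cascade: detection, then coarse-grained,
    then fine-grained fallacy classification."""
    is_fallacy, evidence = detector.predict(text)
    if is_fallacy == "no-fallacy":
        return ("no-fallacy", None, None, evidence)
    coarse_label, _ = coarse.predict(text)
    fine_label, evidence = fine.predict(text)
    return (is_fallacy, coarse_label, fine_label, evidence)
```

For instance, a detector trained on `("everyone believes it so it must be true", "fallacy")` would classify "everyone says it is true" as a fallacy and return the matched training sentence as its justification.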

