Case-Based Reasoning with Language Models for Classification of Logical Fallacies

01/27/2023
by Zhivar Sourati, et al.

The ease and speed with which misinformation and propaganda spread on the Web motivate the need for trustworthy technology that detects fallacies in natural-language arguments. However, state-of-the-art language-modeling methods lack robustness on tasks that require complex reasoning, such as logical fallacy classification. In this paper, we propose a Case-Based Reasoning method that classifies new cases of logical fallacy through language-model-driven retrieval and adaptation of historical cases. We design four complementary strategies to enrich the model's input representation with external information about goals, explanations, counterarguments, and argument structure. Our experiments in in-domain and out-of-domain settings indicate that Case-Based Reasoning improves the accuracy and generalizability of language models. Our ablation studies confirm that the representations of similar cases strongly affect model performance, that models perform well with fewer retrieved cases, and that the size of the case database has a negligible effect on performance. Finally, we examine the relationship between the properties of the retrieved cases and model performance.
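The retrieval step of Case-Based Reasoning described above can be sketched as follows. This is a minimal illustration only: it uses a toy bag-of-words embedding and a hypothetical three-case database, not the paper's actual language-model encoder, case database, or adaptation strategies.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding; the paper instead uses a
    # language-model encoder to represent arguments.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, case_db, k=2):
    # Rank historical cases by similarity to the query argument
    # and return the top k for use as additional context.
    q = embed(query)
    scored = sorted(case_db, key=lambda c: cosine(q, embed(c["text"])), reverse=True)
    return scored[:k]

# Hypothetical case database of labeled fallacy examples.
case_db = [
    {"text": "everyone believes it so it must be true", "label": "ad populum"},
    {"text": "you are wrong because you are a bad person", "label": "ad hominem"},
    {"text": "if we allow this, disaster will surely follow", "label": "slippery slope"},
]

retrieved = retrieve("most people agree, therefore it is true", case_db, k=1)
# In the full method, the retrieved cases would be combined with the
# query as enriched input to the classifier (the adaptation step).
```

The design choice here mirrors the abstract's finding that the representation of similar cases matters most: swapping `embed` for a stronger encoder changes which cases are retrieved, while the rest of the pipeline stays fixed.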


Related research

02/28/2022  Logical Fallacy Detection
Reasoning is central to human intelligence. However, fallacious argument...

07/14/2022  Language models show human-like content effects on reasoning
Abstract reasoning is a key ability for an intelligent system. Large lan...

01/27/2022  Reasoning Like Program Executors
Reasoning over natural language is a long-standing goal for the research...

03/28/2023  Explicit Planning Helps Language Models in Logical Reasoning
Language models have been shown to perform remarkably well on a wide ran...

08/18/2023  How susceptible are LLMs to Logical Fallacies?
This paper investigates the rational thinking capability of Large Langua...

06/16/2023  Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views
Large Language Models (LLMs) have achieved great success in various natu...

06/01/2023  Exposing Attention Glitches with Flip-Flop Language Modeling
Why do large language models sometimes output factual inaccuracies and e...
