How susceptible are LLMs to Logical Fallacies?

08/18/2023
by   Amirreza Payandeh, et al.
1

This paper investigates the rational thinking capability of Large Language Models (LLMs) in multi-round argumentative debates by exploring the impact of fallacious arguments on their logical reasoning performance. More specifically, we present Logic Competence Measurement Benchmark (LOGICOM), a diagnostic benchmark to assess the robustness of LLMs against logical fallacies. LOGICOM involves two agents: a persuader and a debater engaging in a multi-round debate on a controversial topic, where the persuader tries to convince the debater of the correctness of its claim. First, LOGICOM assesses the potential of LLMs to change their opinions through reasoning. Then, it evaluates the debater's performance in logical reasoning by contrasting the scenario where the persuader employs logical fallacies against one where logical reasoning is used. We use this benchmark to evaluate the performance of GPT-3.5 and GPT-4 using a dataset containing controversial topics, claims, and reasons supporting them. Our findings indicate that both GPT-3.5 and GPT-4 can adjust their opinion through reasoning. However, when presented with logical fallacies, GPT-3.5 and GPT-4 are erroneously convinced 41 respectively, compared to when logical reasoning is used. Finally, we introduce a new dataset containing over 5k pairs of logical vs. fallacious arguments. The source code and dataset of this work are made publicly available.

READ FULL TEXT

page 2

page 3

page 4

page 10

page 11

page 13

research
02/28/2022

Logical Fallacy Detection

Reasoning is central to human intelligence. However, fallacious argument...
research
05/25/2022

RobustLR: Evaluating Robustness to Logical Perturbation in Deductive Reasoning

Transformers have been shown to be able to perform deductive reasoning o...
research
05/22/2023

Teaching Probabilistic Logical Reasoning to Transformers

Recent research on transformer-based language models investigates their ...
research
01/27/2023

Case-Based Reasoning with Language Models for Classification of Logical Fallacies

The ease and the speed of spreading misinformation and propaganda on the...
research
10/22/2022

MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure

In this paper, we propose a comprehensive benchmark to investigate model...
research
03/28/2022

LogicInference: A New Dataset for Teaching Logical Inference to seq2seq Models

Machine learning models such as Transformers or LSTMs struggle with task...
research
03/06/2013

Dialectic Reasoning with Inconsistent Information

From an inconsistent database non-trivial arguments may be constructed b...

Please sign up or login with your details

Forgot password? Click here to reset