The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments

03/28/2022
by Milad Alshomary, et al.

An audience's prior beliefs and morals are strong indicators of how likely they are to be influenced by a given argument. Utilizing such knowledge can help focus on shared values to bring disagreeing parties closer to agreement. In argumentation technology, however, this has barely been exploited so far. This paper studies the feasibility of automatically generating morally framed arguments, as well as their effect on different audiences. Following moral foundations theory, we propose a system that effectively generates arguments focusing on different morals. In an in-depth user study, we ask liberals and conservatives to evaluate the impact of these arguments. Our results suggest that, particularly when prior beliefs are challenged, an audience becomes more affected by morally framed arguments.


