ENTRUST: Argument Reframing with Language Models and Entailment

03/11/2021
by Tuhin Chakrabarty, et al.

"Framing" involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Entman 1983). Differences in lexical framing, the focus of our work, can have large effects on peoples' opinions and beliefs. To make progress towards reframing arguments for positive effects, we create a dataset and method for this task. We use a lexical resource for "connotations" to create a parallel corpus and propose a method for argument reframing that combines controllable text generation (positive connotation) with a post-decoding entailment component (same denotation). Our results show that our method is effective compared to strong baselines along the dimensions of fluency, meaning, and trustworthiness/reduction of fear.

Related research

10/24/2020 · NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints
10/12/2020 · Evaluating Factuality in Generation with Dependency-level Entailment
10/03/2021 · Towards Understanding Persuasion in Computational Argumentation
12/02/2014 · Tiered Clustering to Improve Lexical Entailment
05/25/2021 · Argument Undermining: Counter-Argument Generation by Attacking Weak Premises
09/15/2020 · Critical Thinking for Language Models