Arguments to Key Points Mapping with Prompt-based Learning

11/28/2022
by Ahnaf Mozib Samin, et al.

Handling and digesting large amounts of information efficiently is a long-standing demand in modern society. Recent work has proposed mapping key points (short textual summaries that capture essential information and filter out redundancies) to large collections of arguments/opinions (Bar-Haim et al., 2020). To complete the picture of the argument-to-keypoint mapping task, we propose two main approaches in this paper. The first incorporates prompt engineering into the fine-tuning of pre-trained language models (PLMs). The second uses prompt-based learning with PLMs to generate intermediary texts, which are then combined with the original argument-keypoint pairs and fed to a classifier that performs the mapping. We further extend the experiments to both in-domain and cross-domain settings for a more in-depth analysis. Our evaluation shows that i) using prompt engineering in a more direct way (Approach 1) yields promising results and improves performance; ii) Approach 2 performs considerably worse than Approach 1, owing to the PLM's difficulty with negation.
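
To make the prompt-based matching idea behind Approach 1 concrete, here is a minimal sketch of cloze-style argument-to-keypoint matching with a masked PLM. The template, verbalizer words, and model name below are illustrative assumptions, not the exact setup used in the paper.

```python
# Minimal sketch: cloze-style argument-to-keypoint matching with a masked PLM.
# Template, verbalizer, and model choice are illustrative assumptions only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "roberta-base"  # assumed PLM; the paper may use a different model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical template: the PLM fills the mask with a verbalizer word
# ("Yes"/"No"), which we map to the match / no-match labels.
TEMPLATE = "Argument: {arg} Key point: {kp} Do they express the same idea? {mask}"
VERBALIZER = {"match": " Yes", "no_match": " No"}

def match_scores(argument: str, key_point: str) -> dict:
    """Return the PLM's probability for each verbalizer word at the mask position."""
    text = TEMPLATE.format(arg=argument, kp=key_point, mask=tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # logits over the vocabulary
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    return {
        label: probs[tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]].item()
        for label, word in VERBALIZER.items()
    }

if __name__ == "__main__":
    arg = "School uniforms reduce bullying because visible income differences disappear."
    kp = "Uniforms limit social pressure among students."
    print(match_scores(arg, kp))  # e.g. {'match': ..., 'no_match': ...}
```

In a full prompt-based fine-tuning setup, these mask-position scores would be trained against the gold match labels rather than used zero-shot; this sketch only shows how the cloze template turns the mapping task into a language-modeling prediction.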


