GRASP: Guiding model with RelAtional Semantics using Prompt for Dialogue Relation Extraction

08/26/2022
by Junyoung Son, et al.

The dialogue-based relation extraction (DialogRE) task aims to predict the relations between argument pairs that appear in a dialogue. Most previous studies fine-tune pre-trained language models (PLMs) together with extensive additional features to compensate for the low information density of dialogues involving multiple speakers. To effectively exploit the inherent knowledge of PLMs without extra layers, and to account for the semantic cues about the relation between the arguments that are scattered across the dialogue, we propose GRASP, a method for Guiding a model with RelAtional Semantics using Prompts. We adopt a prompt-based fine-tuning approach and capture the relational semantic clues of a given dialogue with 1) an argument-aware prompt marker strategy and 2) a relational clue detection task. In our experiments, GRASP achieves state-of-the-art performance in terms of both F1 and F1c scores on the DialogRE dataset, even though it leverages only PLMs without any extra layers.
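The abstract does not spell out the exact prompt template or marker tokens, so the following is only a minimal sketch of how a prompt-based DialogRE input with argument-aware markers might be constructed. The function names (mark_arguments, build_prompt), the marker tokens ([ARG1]/[ARG2]), the cloze template with a [MASK] slot, and the example dialogue are all illustrative assumptions, not the paper's actual implementation, and the relational clue detection task is not covered here.

# Minimal sketch of prompt-based input construction for dialogue relation
# extraction, loosely inspired by the GRASP abstract. Marker tokens,
# template wording, and helper names are assumptions for illustration.
from typing import List


def mark_arguments(dialogue: List[str], arg1: str, arg2: str) -> List[str]:
    """Wrap mentions of the two arguments with argument-aware marker tokens."""
    marked = []
    for turn in dialogue:
        turn = turn.replace(arg1, f"[ARG1] {arg1} [/ARG1]")
        turn = turn.replace(arg2, f"[ARG2] {arg2} [/ARG2]")
        marked.append(turn)
    return marked


def build_prompt(dialogue: List[str], arg1: str, arg2: str) -> str:
    """Concatenate the marked dialogue with a cloze-style relation prompt.

    A PLM with its masked-language-model head would then be fine-tuned to
    fill the [MASK] slot with a verbalizer token for the target relation.
    """
    context = " ".join(mark_arguments(dialogue, arg1, arg2))
    prompt = f"The relation between {arg1} and {arg2} is [MASK]."
    return f"{context} {prompt}"


if __name__ == "__main__":
    # Hypothetical DialogRE-style example: argument pair (Speaker 1, Amy).
    dialogue = [
        "Speaker 1: Hey Pheebs, have you met my sister Amy?",
        "Speaker 2: Nice to meet you, Amy.",
    ]
    print(build_prompt(dialogue, "Speaker 1", "Amy"))

The point of such a formulation is that relation prediction is cast as filling a masked token, so the PLM's own masked-language-model head can be reused instead of adding task-specific classification layers, which is consistent with the abstract's claim of using PLMs without extra layers.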


Related research

research · 03/30/2023
TLAG: An Informative Trigger and Label-Aware Knowledge Guided Model for Dialogue-based Relation Extraction
Dialogue-based Relation Extraction (DRE) aims to predict the relation ty...

research · 06/19/2019
Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction
Distantly supervised relation extraction is widely used to extract relat...

research · 09/09/2021
Graph Based Network with Contextualized Representations of Turns in Dialogue
Dialogue-based relation extraction (RE) aims to extract relation(s) betw...

research · 08/31/2021
TREND: Trigger-Enhanced Relation-Extraction Network for Dialogues
The goal of dialogue relation extraction (DRE) is to identify the relati...

research · 08/08/2023
DialogRE^C+: An Extension of DialogRE to Investigate How Much Coreference Helps Relation Extraction in Dialogs
Dialogue relation extraction (DRE) that identifies the relations between...

research · 05/08/2023
GersteinLab at MEDIQA-Chat 2023: Clinical Note Summarization from Doctor-Patient Conversations through Fine-tuning and In-context Learning
This paper presents our contribution to the MEDIQA-2023 Dialogue2Note sh...

research · 05/11/2022
Pre-trained Language Models as Re-Annotators
Annotation noise is widespread in datasets, but manually revising a flaw...
