TaDSE: Template-aware Dialogue Sentence Embeddings

05/23/2023
by Minsik Oh et al.

Learning high-quality sentence embeddings from dialogues has drawn increasing attention, as it is essential for solving a variety of dialogue-oriented tasks at low annotation cost. However, directly annotating and gathering utterance relationships in conversations is difficult, while token-level annotations, e.g., entities, slots, and templates, are much easier to obtain. General sentence embedding methods are usually sentence-level self-supervised frameworks and cannot utilize such token-level extra knowledge. In this paper, we introduce Template-aware Dialogue Sentence Embedding (TaDSE), a novel augmentation method that utilizes template information to effectively learn utterance representations via a self-supervised contrastive learning framework. TaDSE augments each sentence with its corresponding template and then conducts pairwise contrastive learning over both sentences and templates. We further boost the effect with a synthetically augmented dataset that strengthens the utterance-template relation, in which entity detection (slot filling) is a preliminary step. We evaluate TaDSE on five downstream benchmark datasets. The experimental results show that TaDSE achieves significant improvements over previous SOTA methods, along with a consistent improvement margin on the Intent Classification task. We further introduce a novel analytic instrument, the Semantic Compression method, for which we discover a correlation with uniformity and alignment. Our code will be released soon.
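To make the utterance-template pairing concrete, the sketch below illustrates the general idea under stated assumptions: it is not the paper's released implementation. The delexicalize helper, the slot maps, the mean-pooling choice, and the SimCSE-style InfoNCE loss are all illustrative stand-ins; TaDSE's actual architecture and objectives may differ.

```python
# Minimal sketch of template-aware contrastive pairing (assumed details,
# not the official TaDSE code). Requires: torch, transformers.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def delexicalize(utterance: str, slots: dict) -> str:
    """Replace annotated slot values with slot-name placeholders,
    e.g. 'book a table at Nobu' -> 'book a table at [restaurant]'.
    (Hypothetical helper; slot annotations come from entity detection.)"""
    template = utterance
    for slot_name, value in slots.items():
        template = template.replace(value, f"[{slot_name}]")
    return template

def embed(sentences):
    """Encode a batch of sentences and mean-pool the token states
    (one common pooling choice; the paper may use another)."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    out = encoder(**batch).last_hidden_state          # (B, T, D)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    return (out * mask).sum(1) / mask.sum(1)          # (B, D)

def info_nce(anchors, positives, temperature=0.05):
    """SimCSE-style InfoNCE: each utterance's own template is its
    positive; the other templates in the batch act as negatives."""
    sim = F.cosine_similarity(anchors.unsqueeze(1),
                              positives.unsqueeze(0), dim=-1)  # (B, B)
    labels = torch.arange(sim.size(0))
    return F.cross_entropy(sim / temperature, labels)

# Toy batch: utterances with token-level slot annotations.
utterances = ["book a table at Nobu for 7pm", "play jazz in the kitchen"]
slot_maps = [{"restaurant": "Nobu", "time": "7pm"},
             {"genre": "jazz", "room": "kitchen"}]
templates = [delexicalize(u, s) for u, s in zip(utterances, slot_maps)]
loss = info_nce(embed(utterances), embed(templates))
```

The intuition behind pairing an utterance with its delexicalized template is that utterances sharing a template structure are pulled toward the same region of the embedding space, which is what makes token-level annotations useful as sentence-level supervision.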


Related research

03/22/2022 · Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue
Context modeling plays a significant role in building multi-turn dialogu...

08/09/2023 · Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning
Recent advanced methods in Natural Language Understanding for Task-orien...

10/16/2022 · Sentence Representation Learning with Generative Objective rather than Contrastive Objective
Though offering amazing contextualized token-level representations, curr...

06/30/2019 · Self-Supervised Dialogue Learning
The sequential order of utterances is often meaningful in coherent dialo...

09/02/2021 · Imposing Relation Structure in Language-Model Embeddings Using Contrastive Learning
Though language model text embeddings have revolutionized NLP research, ...

02/28/2022 · A Mutually Reinforced Framework for Pretrained Sentence Embeddings
The lack of labeled data is a major obstacle to learning high-quality se...

11/29/2020 · Generative Pre-training for Paraphrase Generation by Representing and Predicting Spans in Exemplars
Paraphrase generation is a long-standing problem and serves an essential...
