BERT-Assisted Semantic Annotation Correction for Emotion-Related Questions

04/02/2022
by   Abe Kazemzadeh, et al.

Annotated data have traditionally provided the input for training a supervised machine learning (ML) model. However, current pre-trained ML models for natural language processing (NLP) contain embedded linguistic information that can be used to inform the annotation process itself. We use the BERT neural language model to feed information back into an annotation task that involves semantic labelling of dialog behavior in a question-asking game called Emotion Twenty Questions (EMO20Q). First, we describe the background of BERT, the EMO20Q data, and assisted annotation tasks. We then describe our method for fine-tuning BERT to check the annotated labels: using the paraphrase task, we verify that all utterances sharing an annotation label are classified as paraphrases of one another. We show that this method is an effective way to assess and revise annotations of textual user data with complex, utterance-level semantic labels.
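The label-checking idea in the abstract — pair up utterances that share an annotation label and ask a paraphrase classifier whether they really are paraphrases, flagging rejected pairs for annotator review — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `toy_score` is a stand-in word-overlap heuristic, whereas in the paper the score would come from a BERT model fine-tuned on the paraphrase task, and the label names below are invented.

```python
from itertools import combinations

def flag_annotation_errors(examples, paraphrase_score, threshold=0.5):
    """Pair up utterances that share a semantic label and flag pairs the
    paraphrase model rejects; flagged labels are candidates for revision."""
    by_label = {}
    for text, label in examples:
        by_label.setdefault(label, []).append(text)
    flagged = []
    for label, texts in by_label.items():
        for a, b in combinations(texts, 2):
            if paraphrase_score(a, b) < threshold:
                flagged.append((label, a, b))
    return flagged

# Stand-in scorer for illustration only: fraction of shared word types.
# The paper would replace this with a fine-tuned BERT paraphrase classifier.
def toy_score(a, b):
    shared = set(a.lower().split()) & set(b.lower().split())
    return len(shared) / max(len(set(a.split())), len(set(b.split())))

# Hypothetical EMO20Q-style annotated utterances (labels are made up).
examples = [
    ("is it a positive emotion?", "valence-positive"),
    ("does it feel good?", "valence-positive"),
    ("is it stronger than anger?", "intensity-comparison"),
]

for label, a, b in flag_annotation_errors(examples, toy_score):
    print(f"review label '{label}': {a!r} vs {b!r}")
```

Pairs the classifier scores below the threshold are surfaced to the annotator, who can then decide whether the label or the model is at fault.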


