Active Learning for New Domains in Natural Language Understanding

10/03/2018
by Stanislav Peshterliev, et al.

We explore active learning (AL) utterance selection for improving the accuracy of new, underrepresented domains in a natural language understanding (NLU) system. Moreover, we propose an AL algorithm called Majority-CRF that uses an ensemble of classification and sequence labeling models to guide utterance selection for annotation. Experiments with three domains show that Majority-CRF achieves 6.6%-9% relative improvements over random sampling with the same annotation budget, as well as statistically significant improvements compared to other AL approaches. Additionally, case studies with human-in-the-loop AL on six new domains show 4.6%-9% improvements to an existing NLU system.
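
The abstract only describes Majority-CRF at a high level. Below is a minimal, illustrative sketch of ensemble-guided utterance selection in that spirit: a committee of simple in-domain vs. out-of-domain classifiers is trained on seed data, and the unlabeled utterances on which the committee disagrees most are routed to annotators. All names, models, and the scoring rule here are assumptions made for illustration, not the authors' implementation; in particular, the actual Majority-CRF also incorporates a sequence labeling (CRF) model to prioritize utterances, which this sketch omits.

```python
# Minimal, illustrative sketch of ensemble-guided utterance selection
# (query-by-committee style). Models, names, and the scoring rule are
# assumptions for illustration, not the paper's exact Majority-CRF.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC


def select_for_annotation(seed_texts, seed_labels, unlabeled_texts, budget=100):
    """Return the `budget` unlabeled utterances on which an ensemble of
    in-domain/out-of-domain classifiers disagrees the most."""
    vec = TfidfVectorizer(ngram_range=(1, 2))
    X_seed = vec.fit_transform(seed_texts)
    X_pool = vec.transform(unlabeled_texts)

    # Committee of cheap classifiers trained on the same seed data.
    committee = [LogisticRegression(max_iter=1000), LinearSVC(), MultinomialNB()]
    votes = []
    for clf in committee:
        clf.fit(X_seed, seed_labels)
        votes.append(clf.predict(X_pool))
    votes = np.asarray(votes)  # shape: (n_models, n_pool), entries in {0, 1}

    # Disagreement is highest when the committee is split 50/50 on a
    # candidate utterance; those utterances are sent for annotation.
    frac_in_domain = votes.mean(axis=0)
    disagreement = 1.0 - 2.0 * np.abs(frac_in_domain - 0.5)

    top = np.argsort(-disagreement)[:budget]
    return [unlabeled_texts[i] for i in top]


if __name__ == "__main__":
    seed_texts = ["play some jazz", "skip this track", "order a pizza", "book a table"]
    seed_labels = [0, 0, 1, 1]  # 1 = new target domain, 0 = existing domains
    pool = ["reserve dinner for two", "turn up the volume", "get me a large pepperoni"]
    print(select_for_annotation(seed_texts, seed_labels, pool, budget=2))
```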

