Multi-domain Neural Network Language Generation for Spoken Dialogue Systems

03/03/2016
by   Tsung-Hsien Wen, et al.

Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. It is therefore important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure for training multi-domain, recurrent neural network (RNN)-based language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine-tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure achieves competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of in-domain data is available.
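To make the two-stage adaptation procedure concrete, the sketch below outlines it in PyTorch: a conditioned RNN generator is first trained on counterfeited data synthesised from an out-of-domain corpus, then fine-tuned on a small in-domain set. This is a minimal illustration under stated assumptions, not the authors' implementation: the dataset loaders, model sizes, and learning rates are hypothetical, and the paper's SC-LSTM generator and discriminative fine-tuning objective are simplified here to a plain LSTM trained with cross-entropy.

```python
# Illustrative sketch of the two-stage adaptation procedure (assumptions:
# hypothetical data loaders, plain LSTM + cross-entropy in place of the
# paper's SC-LSTM and discriminative objective).
import torch
import torch.nn as nn

class RNNGenerator(nn.Module):
    """Word-level RNN language generator conditioned on a dialogue-act vector."""
    def __init__(self, vocab_size, da_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.da_proj = nn.Linear(da_size, hidden)            # inject the semantic input
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, da_vec):
        x = self.embed(tokens) + self.da_proj(da_vec).unsqueeze(1)
        h, _ = self.rnn(x)
        return self.out(h)                                    # logits over the vocabulary

def train(model, loader, epochs, lr):
    """Teacher-forced maximum-likelihood training loop."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens, da_vec in loader:                         # tokens: (B, T), da_vec: (B, da_size)
            logits = model(tokens[:, :-1], da_vec)            # predict the next token at each step
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: train on counterfeited data synthesised from the out-of-domain corpus
# (counterfeit_loader is a hypothetical loader over utterances whose slots/values
# have been swapped for in-domain ones).
# model = RNNGenerator(vocab_size=5000, da_size=50)
# train(model, counterfeit_loader, epochs=20, lr=1e-3)

# Stage 2: fine-tune on the small set of real in-domain utterances at a lower
# learning rate (the paper applies a discriminative objective at this step).
# train(model, in_domain_loader, epochs=5, lr=1e-4)
```

The key design point the sketch preserves is that both stages reuse the same generator parameters: the counterfeit-data stage supplies broad coverage of realisation patterns, so the in-domain stage only needs a small number of utterances to correct domain-specific wording.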


Related research

08/08/2018 · Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems
Domain Adaptation arises when we aim at learning from source domain a mo...

10/02/2019 · Tree-Structured Semantic Encoder with Knowledge Sharing for Domain Adaptation in Natural Language Generation
Domain adaptation in natural language generation (NLG) remains challengi...

06/01/2017 · Semantic Refinement GRU-based Neural Language Generation for Spoken Dialogue Systems
Natural language generation (NLG) plays a critical role in spoken dialog...

03/03/2020 · Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation
Domain adaptation has recently become a key problem in dialogue systems ...

04/01/2016 · Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding
The goal of this paper is to use multi-task learning to efficiently scal...

12/12/2018 · Recurrent Neural Networks for Fuzz Testing Web Browsers
Generation-based fuzzing is a software testing approach which is able to...

11/01/2018 · Progressive Memory Banks for Incremental Domain Adaptation
This paper addresses the problem of incremental domain adaptation (IDA)....
