Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings

06/17/2016 · by Ondřej Dušek, et al.

We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts. We use it to directly compare two-step generation, with separate sentence planning and surface realization stages, against a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing the state of the art in n-gram-based scores while providing more relevant outputs.
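To make the comparison concrete, the sketch below contrasts the two setups described in the abstract: a two-step pipeline (sentence planning into a deep syntax tree, then surface realization into a string) versus a joint, one-step mapping from dialogue acts straight to a string. This is a toy, rule-based illustration only; the paper's actual models are learned sequence-to-sequence networks, and all function names, the t-lemma/formeme node encoding, and the generation rules here are hypothetical stand-ins.

```python
# A dialogue act: act type plus slot-value triples, e.g. inform(food=Chinese).
da = [("inform", "food", "Chinese"), ("inform", "area", "riverside")]

def sentence_planner(dialogue_act):
    """Step 1 (two-step setup): map the dialogue act to a deep-syntax plan,
    here mimicked as a list of (t-lemma, formeme) nodes."""
    return [("be", "v:fin"), ("restaurant", "n:subj")] + [
        (value.lower(), "adj:attr") for _, _, value in dialogue_act
    ]

def surface_realizer(tree):
    """Step 2 (two-step setup): linearize the deep syntax tree into a string."""
    attrs = [lemma for lemma, formeme in tree if formeme == "adj:attr"]
    return "There is a " + " ".join(attrs) + " restaurant."

def joint_generator(dialogue_act):
    """Joint setup: go directly from the dialogue act to the output string,
    skipping the explicit deep-syntax intermediate representation."""
    values = [value.lower() for _, _, value in dialogue_act]
    return "There is a " + " ".join(values) + " restaurant."

two_step = surface_realizer(sentence_planner(da))
joint = joint_generator(da)
```

In the paper both stages of the two-step setup (and the joint generator) are realized by the same seq2seq architecture, trained on tree or string targets respectively; the toy rules above only show where the intermediate tree sits in the pipeline.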


Related research:

- 06/01/2017, Semantic Refinement GRU-based Neural Language Generation for Spoken Dialogue Systems: Natural language generation (NLG) plays a critical role in spoken dialog...
- 08/25/2016, A Context-aware Natural Language Generator for Dialogue Systems: We present a novel natural language generation system for spoken dialogu...
- 05/06/2020, Shape of synth to come: Why we should use synthetic data for English surface realization: The Surface Realization Shared Tasks of 2018 and 2019 were Natural Langu...
- 05/16/2018, A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation: Natural language generation lies at the core of generative dialogue syst...
- 06/30/2016, A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems: User simulation is essential for generating enough data to train a stati...
- 10/02/2018, Findings of the E2E NLG Challenge: This paper summarises the experimental setup and results of the first sh...
- 05/01/2017, Efficient Natural Language Response Suggestion for Smart Reply: This paper presents a computationally efficient machine-learned method f...
