
Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings

by Ondřej Dušek et al.

We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce both natural language strings and deep syntax dependency trees from input dialogue acts. We use it to directly compare two-step generation, with separate sentence planning and surface realization stages, against a joint, one-step approach. Both setups train successfully on very little data. The joint setup performs better, surpassing the state of the art in n-gram-based scores while also producing more relevant outputs.
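The generator consumes input dialogue acts as flat token sequences and is typically trained on delexicalized outputs, where slot values are replaced by placeholders so the model learns value-independent patterns. A minimal sketch of such a preprocessing step (the function names, the act-slot-value triple format, and the `X-slot` placeholder convention here are illustrative assumptions, not the paper's exact code):

```python
def da_to_tokens(da):
    """Flatten a dialogue act such as 'inform(name=Bar Crudo, food=seafood)'
    into a sequence of act-slot-value triple tokens for the encoder."""
    act, _, rest = da.partition("(")
    rest = rest.rstrip(")")
    tokens = []
    for pair in filter(None, (p.strip() for p in rest.split(","))):
        slot, _, value = pair.partition("=")
        tokens.extend([act, slot.strip(), value.strip() or "<none>"])
    return tokens

def delexicalize(sentence, da_tokens):
    """Replace concrete slot values in the target string with placeholders,
    so the generator sees 'X-name serves X-food.' instead of raw values."""
    for i in range(0, len(da_tokens), 3):
        _act, slot, value = da_tokens[i:i + 3]
        if value != "<none>":
            sentence = sentence.replace(value, f"X-{slot}")
    return sentence

tokens = da_to_tokens("inform(name=Bar Crudo, food=seafood)")
print(tokens)       # → ['inform', 'name', 'Bar Crudo', 'inform', 'food', 'seafood']
print(delexicalize("Bar Crudo serves seafood.", tokens))  # → X-name serves X-food.
```

At generation time the placeholders are filled back in from the dialogue act, which is what makes training feasible on very small datasets.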




Semantic Refinement GRU-based Neural Language Generation for Spoken Dialogue Systems

Natural language generation (NLG) plays a critical role in spoken dialog...

Shape of synth to come: Why we should use synthetic data for English surface realization

The Surface Realization Shared Tasks of 2018 and 2019 were Natural Langu...

A Context-aware Natural Language Generator for Dialogue Systems

We present a novel natural language generation system for spoken dialogu...

A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation

Natural language generation lies at the core of generative dialogue syst...

A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems

User simulation is essential for generating enough data to train a stati...

Findings of the E2E NLG Challenge

This paper summarises the experimental setup and results of the first sh...

Efficient Natural Language Response Suggestion for Smart Reply

This paper presents a computationally efficient machine-learned method f...