Non-Autoregressive Semantic Parsing for Compositional Task-Oriented Dialog

by   Arun Babu, et al.

Semantic parsing using sequence-to-sequence models allows parsing of deeper representations compared to traditional word-tagging-based models. Despite these advantages, widespread adoption of these models for real-time conversational use cases has been stymied by higher compute requirements and thus higher latency. In this work, we propose a non-autoregressive approach to predicting semantic parse trees with an efficient seq2seq model architecture. By combining non-autoregressive prediction with convolutional neural networks, we achieve significant latency gains and parameter size reduction compared to traditional RNN models. Our novel architecture achieves up to an 81% reduction in latency on the TOP dataset and retains competitive performance to non-pretrained models on three different semantic parsing datasets. Our code is available at
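To illustrate the core idea, here is a minimal NumPy sketch of non-autoregressive decoding: the model first predicts the output length, then fills every output position in a single parallel step instead of generating tokens left to right. All names, shapes, and the randomly initialized "parameters" below are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN, MAX_LEN = 12, 16, 8   # toy sizes, chosen for illustration

W_len = rng.normal(size=(HIDDEN, MAX_LEN))   # hypothetical length-prediction head
W_tok = rng.normal(size=(HIDDEN, VOCAB))     # shared per-position token classifier
POS = rng.normal(size=(MAX_LEN, HIDDEN))     # positional embeddings

def conv_encode(x, width=3):
    """Sliding-window mean: a minimal stand-in for a convolutional encoder."""
    pad = width // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + width].mean(axis=0) for i in range(len(x))])

def decode_parallel(src_states):
    """Predict the whole output sequence in one parallel step.

    Unlike autoregressive decoding, no token is conditioned on previously
    emitted tokens, so all positions are filled simultaneously.
    """
    h = conv_encode(src_states)                   # (src_len, HIDDEN)
    pooled = h.mean(axis=0)                       # utterance summary
    length = int(np.argmax(pooled @ W_len)) + 1   # step 1: predict output length
    dec_in = pooled + POS[:length]                # step 2: per-position inputs
    logits = dec_in @ W_tok                       # (length, VOCAB), one shot
    return np.argmax(logits, axis=1).tolist()

tokens = decode_parallel(rng.normal(size=(5, HIDDEN)))
```

Because every position is scored independently given the encoder states, the token classifier runs once over the whole sequence rather than once per token, which is where the latency savings over an RNN decoder come from.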

