Compositional Task-Oriented Parsing as Abstractive Question Answering

05/04/2022
by Wenting Zhao, et al.

Task-oriented parsing (TOP) aims to convert natural language into machine-readable representations of specific tasks, such as setting an alarm. A popular approach to TOP applies seq2seq models to generate linearized parse trees. A more recent line of work argues that pretrained seq2seq models are better at generating outputs that are themselves natural language, and therefore replaces linearized parse trees with canonical natural-language paraphrases that can easily be translated into parse trees, yielding so-called naturalized parsers. In this work we continue to explore naturalized semantic parsing by presenting a general reduction of TOP to abstractive question answering that overcomes some limitations of canonical paraphrasing. Experimental results show that our QA-based technique outperforms state-of-the-art methods in full-data settings while achieving dramatic improvements in few-shot settings.
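To make the contrast concrete, the following is a minimal Python sketch, not taken from the paper, of how a single utterance might be represented as a linearized parse-tree target versus as abstractive question-answer pairs for a pretrained seq2seq model. The intent and slot labels, question templates, and helper function are hypothetical illustrations, not the paper's actual reduction.

    # Illustrative sketch: contrast a linearized TOP-style parse-tree target
    # with a QA-style reformulation of the same utterance.
    # Labels and question templates below are hypothetical examples.

    utterance = "set an alarm for 8 am tomorrow"

    # (a) Conventional seq2seq target: a linearized parse tree.
    linearized_tree = "[IN:CREATE_ALARM [SL:DATE_TIME for 8 am tomorrow ] ]"

    # (b) Abstractive-QA reformulation: the parse is recovered from
    # natural-language answers to questions about the utterance, which a
    # pretrained seq2seq model (e.g., T5 or BART) generates more naturally.
    qa_pairs = [
        {
            "question": f"What does the user want to do? {utterance}",
            "answer": "create an alarm",       # maps to IN:CREATE_ALARM
        },
        {
            "question": f"When should the alarm go off? {utterance}",
            "answer": "8 am tomorrow",         # maps to SL:DATE_TIME
        },
    ]

    def to_seq2seq_examples(pairs):
        """Turn QA pairs into (input, target) strings for seq2seq fine-tuning."""
        return [(p["question"], p["answer"]) for p in pairs]

    for src, tgt in to_seq2seq_examples(qa_pairs):
        print(f"input:  {src}")
        print(f"target: {tgt}")

Under this framing, each answer is ordinary natural language, and the full parse tree is assembled afterwards from the collected answers rather than decoded directly as a bracketed string.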


Related research

06/20/2018  The Natural Language Decathlon: Multitask Learning as Question Answering
04/16/2020  A Methodology for Creating Question Answering Corpora Using Inverse Data Annotation
12/21/2022  ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing using Large Language Models
04/29/2022  Training Naturalized Semantic Parsers with Very Little Data
01/30/2022  Compositionality as Lexical Symmetry
09/10/2017  Abductive Matching in Question Answering
10/12/2021  AutoNLU: Detecting, root-causing, and fixing NLU model errors
