Simpler Context-Dependent Logical Forms via Model Projections

06/16/2016
by Reginald Long, et al.

We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations available at training time, we must search over a combinatorially large space of logical forms, which grows even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collect three new context-dependent semantic parsing datasets and develop a new left-to-right parser.
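The core projection idea can be illustrated with a toy sketch (the world, the logical forms, and all function names below are illustrative assumptions, not the paper's actual model): candidate logical forms that produce the same denotation are collapsed into a single equivalence class, so search and scoring operate over far fewer objects than the raw space of forms.

```python
from collections import defaultdict

# Hypothetical toy world: object id -> color attribute.
WORLD = {"a": "red", "b": "red", "c": "blue"}

def execute(logical_form):
    """Evaluate a toy logical form (its last element is a color
    predicate) against WORLD, returning its denotation as a frozenset."""
    color = logical_form[-1]
    return frozenset(obj for obj, c in WORLD.items() if c == color)

def project_to_classes(logical_forms):
    """Group logical forms into equivalence classes keyed by denotation.

    All forms that denote the same set collapse into one class, so a
    downstream learner can score classes instead of individual forms,
    shrinking the combinatorial search space."""
    classes = defaultdict(list)
    for lf in logical_forms:
        classes[execute(lf)].append(lf)
    return dict(classes)

candidates = [
    ("select", "red"),
    ("filter", "color", "red"),  # same denotation as the form above
    ("select", "blue"),
]
classes = project_to_classes(candidates)
# Three candidate forms project down to two equivalence classes.
```

This is only a sketch of the equivalence-class projection; the paper's models also account for context from prior utterances, which the toy example ignores.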


Related research

- Weakly-supervised Neural Semantic Parsing with a Generative Ranker (08/23/2018)
- Learning an Executable Neural Semantic Parser (11/14/2017)
- Unanimous Prediction for 100 Semantic Mappings (06/20/2016)
- Look-up and Adapt: A One-shot Semantic Parser (10/27/2019)
- Macro Grammars and Holistic Triggering for Efficient Semantic Parsing (07/25/2017)
- Don't paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing (08/26/2019)
- Learning to Map Context-Dependent Sentences to Executable Formal Queries (04/18/2018)
