Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings

06/20/2016
by Fereshte Khani, et al.

Can we train a system that, on any new input, either says "don't know" or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative provided our model family is well-specified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even with a modest amount of training data from a possibly adversarial distribution. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset.
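To make the unanimity principle concrete, here is a minimal toy sketch (not the paper's method, which avoids enumeration by checking only two models): enumerate a small hypothesis family, keep every hypothesis consistent with the training data, and predict on a new input only when all survivors agree, otherwise abstain. The task, word list, predicate list, and helper names (WORDS, PREDICATES, unanimous_predict) are all hypothetical illustration choices, not from the paper.

```python
# Toy illustration of the unanimity principle: predict only when every
# hypothesis consistent with the training data gives the same output;
# otherwise answer "don't know".

from itertools import product

# Hypothetical toy task: map a single word to a logical predicate.
WORDS = ["capital", "city", "river"]
PREDICATES = ["capital(x)", "city(x)", "river(x)"]

# The hypothesis family: every possible word -> predicate lexicon.
HYPOTHESES = [dict(zip(WORDS, assignment))
              for assignment in product(PREDICATES, repeat=len(WORDS))]

def consistent(hypothesis, training_data):
    """A hypothesis is consistent if it reproduces every training example."""
    return all(hypothesis[x] == y for x, y in training_data)

def unanimous_predict(training_data, x):
    """Return the unanimous prediction for x, or "don't know"."""
    survivors = [h for h in HYPOTHESES if consistent(h, training_data)]
    outputs = {h[x] for h in survivors}
    return outputs.pop() if len(outputs) == 1 else "don't know"

if __name__ == "__main__":
    train = [("capital", "capital(x)"), ("river", "river(x)")]
    print(unanimous_predict(train, "capital"))  # "capital(x)" -- all survivors agree
    print(unanimous_predict(train, "city"))     # "don't know" -- survivors disagree
```

Abstaining whenever the surviving hypotheses disagree is what guarantees that every emitted prediction is correct under a well-specified model family; the paper's contribution is achieving this without enumerating the (infinite) set of consistent models.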

