Learning Structured Natural Language Representations for Semantic Parsing

04/27/2017 · by Jianpeng Cheng, et al.

We introduce a neural semantic parser that converts natural language utterances to intermediate representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We obtain competitive results on various datasets. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.

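As a rough illustration of the transition system described in the abstract, the sketch below executes a shift-reduce-style action sequence to build a nested predicate-argument structure. The action names (NT, TER, RED), the GeoQuery-style example, and the data layout are assumptions for illustration only; in the paper a neural network scores which action to take at each step, which is not modeled here.

```python
# Minimal sketch, assuming a shift-reduce-style transition inventory:
#   NT(p)  - open a predicate p
#   TER(a) - attach a terminal argument a to the open predicate
#   RED    - close (reduce) the topmost open predicate
# These action names and the example are illustrative, not the paper's
# exact implementation.

def execute(actions):
    """Run a sequence of (action, argument) pairs and return the structure."""
    stack = []
    for act, arg in actions:
        if act == "NT":          # open a predicate, e.g. NT(answer)
            stack.append((arg, []))
        elif act == "TER":       # attach a terminal argument, e.g. TER(texas)
            stack[-1][1].append(arg)
        elif act == "RED":       # close the topmost predicate
            pred, args = stack.pop()
            completed = (pred, args)
            if stack:
                stack[-1][1].append(completed)
            else:
                return completed
    raise ValueError("action sequence did not reduce to a single structure")

# Hypothetical derivation for "which states border texas"
# -> answer(state(next_to(texas)))
actions = [
    ("NT", "answer"), ("NT", "state"), ("NT", "next_to"),
    ("TER", "texas"), ("RED", None), ("RED", None), ("RED", None),
]
print(execute(actions))  # ('answer', [('state', [('next_to', ['texas'])])])
```

In the full model, the choice of each action is predicted from the utterance and the parser state, and the resulting predicate-argument structure is then mapped to the target domain's logical form.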

Related research

10/22/2019 · Transductive Parsing for Universal Decompositional Semantics
We introduce a transductive model for parsing into Universal Decompositi...

07/10/2019 · Semantic Parsing with Dual Learning
Semantic parsing converts natural language queries into structured logic...

10/05/2018 · TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
We present TRANX, a transition-based neural semantic parser that maps na...

10/04/2022 · Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing
The recent prevalence of pretrained language models (PLMs) has dramatica...

01/28/2023 · Underwater Robotics Semantic Parser Assistant
Semantic parsing is a means of taking natural language and putting it in...

01/10/2019 · Sentence Rewriting for Semantic Parsing
A major challenge of semantic parsing is the vocabulary mismatch problem...

05/12/2018 · Coarse-to-Fine Decoding for Neural Semantic Parsing
Semantic parsing aims at mapping natural language utterances into struct...