Neural Machine Translation for Query Construction and Composition

by Tommaso Soru et al.

Research on question answering over knowledge bases has recently seen an increasing use of deep architectures. In this extended abstract, we study the application of the neural machine translation paradigm to question parsing. We employ a sequence-to-sequence model to learn graph patterns in the SPARQL graph query language and their compositions. Instead of inducing the programs through question-answer pairs, we adopt a semi-supervised approach in which alignments between questions and queries are built through templates. We argue that the coverage of language utterances can be expanded using recent notable work in natural language generation.
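The template-based alignment mentioned above can be illustrated with a minimal sketch. The templates, entity names, and predicates below are hypothetical placeholders, not the authors' dataset: the point is only that a shared slot lets each question pattern and SPARQL graph pattern be instantiated in lockstep, yielding aligned (question, query) pairs for sequence-to-sequence training without question-answer supervision.

```python
# Hypothetical templates and entities for illustration only.
# Each template pairs a question pattern with a SPARQL graph pattern
# via a shared <A> placeholder.
TEMPLATES = [
    ("who wrote <A> ?",
     "SELECT ?x WHERE { <A> dbo:author ?x }"),
    ("where was <A> born ?",
     "SELECT ?x WHERE { <A> dbo:birthPlace ?x }"),
]

ENTITIES = ["dbr:Moby-Dick", "dbr:Nikola_Tesla"]

def build_pairs(templates, entities):
    """Instantiate every template with every entity, yielding aligned
    (question, query) pairs usable as seq2seq training data."""
    pairs = []
    for q_tmpl, s_tmpl in templates:
        for ent in entities:
            # Derive a surface label from the entity identifier.
            label = ent.split(":", 1)[1].replace("_", " ")
            pairs.append((q_tmpl.replace("<A>", label),
                          s_tmpl.replace("<A>", ent)))
    return pairs

pairs = build_pairs(TEMPLATES, ENTITIES)
for question, query in pairs[:2]:
    print(question, "=>", query)
```

A real pipeline would instantiate many more templates over a knowledge base's entities and relations; the alignment principle stays the same.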




Related papers:

Exploring Sequence-to-Sequence Models for SPARQL Pattern Composition

Question Answering over Knowledge Graphs with Neural Machine Translation and Entity Linking

Reducing the impact of out of vocabulary words in the translation of natural language questions into SPARQL queries

A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL

Towards Natural Language Question Answering over Earth Observation Linked Data using Attention-based Neural Machine Translation

Neural Machine Translating from Natural Language to SPARQL

Crake: Causal-Enhanced Table-Filler for Question Answering over Large Scale Knowledge Base
