SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs

06/18/2021
by Hieu Tran, et al.

In this paper, we propose SPBERT, a transformer-based language model pre-trained on massive SPARQL query logs. By incorporating the masked language modeling objective and the word structural objective, SPBERT learns general-purpose representations of both natural language and the SPARQL query language. We investigate how SPBERT and the encoder-decoder architecture can be adapted to knowledge-based QA corpora, and we conduct exhaustive experiments on two tasks: SPARQL Query Construction and Answer Verbalization Generation. The experimental results show that SPBERT obtains promising results, achieving state-of-the-art BLEU scores on several of these tasks.
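
For concreteness, below is a minimal sketch of how such an encoder-decoder pairing could be assembled with the Hugging Face transformers library: a standard BERT encoder reads the natural-language question and an SPBERT checkpoint serves as the decoder that generates the SPARQL query. This is not the authors' released code; the checkpoint identifier "razent/spbert-mlm-base" is an assumption used for illustration, and the combined model would still need to be fine-tuned on parallel question/SPARQL pairs before it produces meaningful queries.

```python
# Minimal sketch (not the official SPBERT code): pair a natural-language BERT
# encoder with an SPBERT decoder for SPARQL query construction.
from transformers import AutoTokenizer, EncoderDecoderModel

nl_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
sparql_tokenizer = AutoTokenizer.from_pretrained("razent/spbert-mlm-base")  # assumed checkpoint id

# Encoder reads the question; decoder generates the SPARQL query.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased",          # natural-language encoder
    "razent/spbert-mlm-base",   # SPARQL decoder (assumed id)
)
model.config.decoder_start_token_id = sparql_tokenizer.cls_token_id
model.config.pad_token_id = sparql_tokenizer.pad_token_id

# After fine-tuning on (question, SPARQL) pairs, generation would look like:
question = "Who is the author of the book The Hobbit?"
inputs = nl_tokenizer(question, return_tensors="pt")
generated = model.generate(**inputs, max_length=64)
print(sparql_tokenizer.decode(generated[0], skip_special_tokens=True))
```

The design choice mirrors the paper's setup at a high level: pre-training the decoder side on SPARQL text is intended to give it syntax-aware representations of queries, while the encoder keeps general natural-language understanding.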

Related research

- Unified Language Model Pre-training for Natural Language Understanding and Generation (05/08/2019)
- StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding (08/13/2019)
- LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking (04/18/2022)
- Calculating Question Similarity is Enough: A New Method for KBQA Tasks (11/15/2021)
- Incorporating Behavioral Hypotheses for Query Generation (10/06/2020)
- Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning (10/19/2022)
- LEMON: Language-Based Environment Manipulation via Execution-Guided Pre-training (01/20/2022)