SDCUP: Schema Dependency-Enhanced Curriculum Pre-Training for Table Semantic Parsing

11/18/2021
by Bowen Qin, et al.

Recently, pre-trained models have significantly improved the performance of various NLP tasks by leveraging large-scale text corpora to strengthen the contextual representation ability of neural networks. Large pre-trained language models have also been applied to table semantic parsing. However, existing pre-training approaches have not carefully explored the explicit interaction relationships between a question and the corresponding database schema, which are a key ingredient for uncovering their semantic and structural correspondence. Furthermore, question-aware representation learning in the schema grounding context has received little attention in pre-training objectives. To alleviate these issues, this paper designs two novel pre-training objectives that impose the desired inductive bias on the learned representations for table pre-training. We further propose a schema-aware curriculum learning approach that mitigates the impact of noise and learns effectively from the pre-training data in an easy-to-hard manner. We evaluate our pre-trained framework by fine-tuning it on two benchmarks, Spider and SQUALL. The results demonstrate the effectiveness of our pre-training objectives and curriculum compared to a variety of baselines.
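As a rough illustration of the easy-to-hard idea described above, a curriculum schedule can sort pre-training examples by a difficulty score and train in progressively harder stages. The sketch below is a minimal, generic version; `difficulty_fn` is a hypothetical placeholder, since the abstract does not specify SDCUP's actual schema-aware difficulty criterion:

```python
import random

def curriculum_order(examples, difficulty_fn, num_stages=3):
    """Arrange pre-training examples from easy to hard.

    `difficulty_fn` is a hypothetical scoring function, e.g. one that
    rates how noisy a question-schema pair's alignment is; the paper's
    actual schema-aware criterion is not given in this abstract.
    """
    ranked = sorted(examples, key=difficulty_fn)
    # Split the ranked examples into contiguous stages of roughly equal size.
    stage_size = (len(ranked) + num_stages - 1) // num_stages
    stages = [ranked[i:i + stage_size]
              for i in range(0, len(ranked), stage_size)]
    # Shuffle within each stage so batches stay varied, while the stages
    # themselves still progress from easy to hard.
    for stage in stages:
        random.shuffle(stage)
    return [example for stage in stages for example in stage]
```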


Related research

12/18/2020 | Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training
Most recently, there has been significant interest in learning contextua...

09/29/2020 | GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing
We present GraPPa, an effective pre-training approach for table semantic...

12/15/2022 | Efficient Pre-training of Masked Language Model via Concept-based Curriculum Masking
Masked language modeling (MLM) has been widely used for pre-training eff...

06/20/2018 | Injecting Relational Structural Representation in Neural Networks for Question Similarity
Effectively using full syntactic parsing information in Neural Networks ...

10/21/2022 | STAR: SQL Guided Pre-Training for Context-dependent Text-to-SQL Parsing
In this paper, we propose a novel SQL guided pre-training framework STAR...

06/26/2020 | TURL: Table Understanding through Representation Learning
Relational tables on the Web store a vast amount of knowledge. Owing to ...

05/04/2023 | 2x Faster Language Model Pre-training via Masked Structural Growth
Acceleration of large language model pre-training is a critical issue in...
