Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing

09/22/2021
by Qian Liu, et al.

In recent years, pretrained language models (PLMs) have achieved success on several downstream tasks, demonstrating their power in modeling language. To better understand and leverage what PLMs have learned, several techniques have emerged to probe the syntactic structures entailed by PLMs. However, few efforts have been made to explore the grounding capabilities of PLMs, which are equally essential. In this paper, we highlight the ability of PLMs to discover which token should be grounded to which concept when combined with our proposed erasing-then-awakening approach. Empirical studies on four datasets demonstrate that our approach can awaken latent grounding that is understandable to human experts, even though it is not exposed to such labels during training. More importantly, our approach shows great potential to benefit downstream semantic parsing models. Taking text-to-SQL as a case study, we successfully couple our approach with two off-the-shelf parsers, obtaining an absolute improvement of up to 9.8%.
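The abstract only hints at how latent grounding might be surfaced, so the sketch below is purely illustrative. It probes token-to-concept grounding with an erasure-based heuristic: erase one schema concept from the input, and treat the resulting shift in a question token's contextual representation as evidence that the token is grounded to that concept. Everything here (the grounding_scores helper, the use of bert-base-uncased via HuggingFace Transformers, and the representation-shift score) is an assumption for illustration, not the paper's actual erasing-then-awakening procedure.

```python
# Hypothetical erasure-based grounding probe (illustration only; not the
# paper's training objective).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()


def question_token_embeddings(question, schema_text):
    """Contextual embeddings of the question tokens, encoded jointly with the schema."""
    enc = tokenizer(question, schema_text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    # Keep only positions belonging to the question (segment 0, non-special tokens).
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(
            enc["input_ids"][0].tolist(), already_has_special_tokens=True
        ),
        dtype=torch.bool,
    )
    question_positions = (enc["token_type_ids"][0] == 0) & ~special
    return hidden[question_positions]


def grounding_scores(question, concepts):
    """Score each concept by how much erasing it from the schema string
    perturbs the question tokens' representations (larger shift ~ stronger grounding)."""
    base = question_token_embeddings(question, " | ".join(concepts))
    scores = {}
    for concept in concepts:
        erased = " | ".join(c for c in concepts if c != concept)
        perturbed = question_token_embeddings(question, erased)
        scores[concept] = (base - perturbed).norm(dim=-1)  # per-question-token shift
    return scores


if __name__ == "__main__":
    result = grounding_scores(
        "how many singers are older than 40",
        ["singer", "singer.age", "singer.name", "concert"],
    )
    for concept, shift in result.items():
        print(f"{concept:12s} max token shift = {shift.max().item():.3f}")
```

In the paper itself, the recovered grounding is used to benefit downstream text-to-SQL parsers; this sketch only illustrates the general intuition that tokens grounded to a concept should react when that concept is erased.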


Related research

09/07/2021
How much pretraining data do language models need to learn syntax?
Transformers-based pretrained language models achieve outstanding result...

03/05/2022
Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models
Semantic parsing is a key NLP task that maps natural language to structu...

04/06/2020
"You are grounded!": Latent Name Artifacts in Pre-trained Language Models
Pre-trained language models (LMs) may perpetuate biases originating in t...

04/12/2021
On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
We study how masking and predicting tokens in an unsupervised fashion ca...

07/17/2023
GEAR: Augmenting Language Models with Generalizable and Efficient Tool Resolution
Augmenting large language models (LLM) to use external tools enhances th...

05/22/2023
"According to ..." Prompting Language Models Improves Quoting from Pre-Training Data
Large Language Models (LLMs) may hallucinate and generate fake informati...

11/16/2022
Towards Computationally Verifiable Semantic Grounding for Language Models
The paper presents an approach to semantic grounding of language models ...
