The Power of Prompt Tuning for Low-Resource Semantic Parsing

10/16/2021
by Nathan Schucher, et al.

Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language tasks. In this paper, we investigate prompt tuning for semantic parsing, the task of mapping natural language utterances onto formal meaning representations. For large T5 models we find (i) that prompt tuning significantly outperforms fine-tuning in the low-data regime and (ii) that canonicalization, i.e. naturalizing the meaning representations, barely improves performance. This last result is surprising, as it suggests that large T5 models can be modulated to generate sequences that are far from the pre-training distribution.
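
To make the setup concrete, here is a minimal sketch of prompt tuning with a frozen T5 model in PyTorch and Hugging Face Transformers. It illustrates the general technique rather than the authors' implementation: the checkpoint name, prompt length, learning rate, and the GeoQuery-style utterance and meaning representation are assumptions made for the example.

import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model_name = "t5-large"  # assumption: any T5 checkpoint; the paper studies large T5 models
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Freeze every pre-trained parameter; only the soft prompt below is trained.
for p in model.parameters():
    p.requires_grad = False

n_prompt_tokens = 100  # assumption: the prompt length is a hyperparameter
embed = model.get_input_embeddings()
soft_prompt = torch.nn.Parameter(
    embed.weight[:n_prompt_tokens].detach().clone()  # initialize from vocabulary embeddings
)

def forward_with_prompt(input_ids, attention_mask, labels):
    # Prepend the trainable soft prompt to the embedded input tokens.
    inputs_embeds = embed(input_ids)
    batch_size = inputs_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
    inputs_embeds = torch.cat([prompt, inputs_embeds], dim=1)
    prompt_mask = torch.ones(batch_size, n_prompt_tokens, dtype=attention_mask.dtype)
    attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, labels=labels)

optimizer = torch.optim.AdamW([soft_prompt], lr=0.3)  # assumption: only the prompt is optimized

# Hypothetical GeoQuery-style utterance / meaning-representation pair for illustration.
inputs = tokenizer(["what rivers flow through texas ?"], return_tensors="pt")
labels = tokenizer(["answer(river(traverse_2(stateid('texas'))))"], return_tensors="pt").input_ids

loss = forward_with_prompt(inputs.input_ids, inputs.attention_mask, labels).loss
loss.backward()
optimizer.step()

Because only the soft prompt matrix receives gradient updates, the number of trained parameters is a tiny fraction of the full model, which is what makes prompt tuning attractive in the low-data regime studied in the paper.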

Related research

12/16/2021 · Few-Shot Semantic Parsing with Language Models Trained On Code
Large language models, prompted with in-context examples, can perform se...

05/12/2018 · Coarse-to-Fine Decoding for Neural Semantic Parsing
Semantic parsing aims at mapping natural language utterances into struct...

04/15/2021 · Low-Resource Task-Oriented Semantic Parsing via Intrinsic Modeling
Task-oriented semantic parsing models typically have high resource requi...

10/15/2020 · Continual Learning for Neural Semantic Parsing
A semantic parsing model is crucial to natural language processing appli...

07/15/2022 · Probing Semantic Grounding in Language Models of Code with Representational Similarity Analysis
Representational Similarity Analysis is a method from cognitive neurosci...

05/31/2023 · Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation
Pre-trained language models (PLMs) have achieved great success in NLP an...

04/16/2021 · Is Your Language Model Ready for Dense Representation Fine-tuning?
Pre-trained language models (LM) have become go-to text representation e...
