
Constrained Language Models Yield Few-Shot Semantic Parsers

04/18/2021
by Richard Shin, et al.

We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. With a small amount of data and very little code to convert into English-like representations, we provide a blueprint for rapidly bootstrapping semantic parsers and demonstrate good performance on multiple tasks.
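To make the pipeline concrete, below is a minimal, self-contained Python sketch of the idea: decoding is constrained to a small controlled sublanguage (encoded here as a prefix trie of canonical utterances), and the resulting English-like utterance is mapped deterministically to a meaning representation. The toy grammar, the lm_score stub, and the mapping rules are illustrative assumptions, not the authors' implementation; a real system would score candidate tokens with a pretrained language model and use a task-specific grammar.

```python
# Illustrative sketch (not the paper's code): constrained decoding into a
# controlled sublanguage, then a rule-based mapping to a meaning representation.
from typing import Dict, List

# Hypothetical "canonical utterance" grammar, encoded as a prefix trie of tokens.
CANONICAL_UTTERANCES = [
    "create event called staff meeting on monday",
    "create event called staff meeting on friday",
    "delete event called staff meeting",
]

def build_trie(sentences: List[str]) -> Dict:
    trie: Dict = {}
    for sentence in sentences:
        node = trie
        for token in sentence.split():
            node = node.setdefault(token, {})
        node["<eos>"] = {}  # mark a complete canonical utterance
    return trie

def lm_score(prefix: List[str], candidate: str) -> float:
    """Stand-in for a pretrained LM's next-token score (hypothetical stub)."""
    # A real system would query the language model here; this stub simply
    # prefers tokens that also appear in the user's request.
    request = "please set up a staff meeting for monday"
    return 1.0 if candidate in request.split() else 0.1

def constrained_decode(trie: Dict) -> str:
    """Greedily pick the highest-scoring token among those the trie allows."""
    prefix: List[str] = []
    node = trie
    while True:
        candidates = [tok for tok in node if tok != "<eos>"]
        if not candidates:  # only <eos> remains: the utterance is complete
            break
        best = max(candidates, key=lambda tok: lm_score(prefix, tok))
        prefix.append(best)
        node = node[best]
    return " ".join(prefix)

def to_meaning_representation(canonical: str) -> str:
    """Deterministically map the controlled sublanguage to a target MR."""
    action, _, rest = canonical.partition(" event called ")
    name, _, day = rest.partition(" on ")
    if action == "create":
        return f'CreateEvent(name="{name}", day="{day}")'
    return f'DeleteEvent(name="{name}")'

if __name__ == "__main__":
    canonical = constrained_decode(build_trie(CANONICAL_UTTERANCES))
    print(canonical)                             # English-like canonical utterance
    print(to_meaning_representation(canonical))  # structured meaning representation
```

Running the sketch prints the selected canonical utterance ("create event called staff meeting on monday") followed by the structured form it maps to; the point is that the language model only ever has to produce English-like text, while the grammar guarantees it stays within the mappable sublanguage.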


Related Research

Few-Shot Semantic Parsing with Language Models Trained On Code (12/16/2021)
Large language models, prompted with in-context examples, can perform se...

Training Naturalized Semantic Parsers with Very Little Data (04/29/2022)
Semantic parsing is an important NLP problem, particularly for voice ass...

TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data (05/17/2020)
Recent years have witnessed the burgeoning of pretrained language models...

Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm (02/15/2021)
Prevailing methods for mapping large generative language models to super...

Texts in, meaning out: neural language models in semantic similarity task for Russian (04/30/2015)
Distributed vector representations for natural language vocabulary get a...

Transparency Helps Reveal When Language Models Learn Meaning (10/14/2022)
Many current NLP systems are built from language models trained to optim...

Do Prompt-Based Models Really Understand the Meaning of their Prompts? (09/02/2021)
Recently, a boom of papers has shown extraordinary progress in few-shot...