ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing using Large Language Models

12/21/2022
by Dheeraj Mekala, et al.

We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots, and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions and, as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.
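The QA-based decomposition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `ask_llm` is a hypothetical stand-in for a zero-shot QA call to an LLM (stubbed here with canned answers), and the intent/slot questions are invented examples rather than ZEROTOP's actual prompts.

```python
# Sketch of QA-based decomposition for zero-shot task-oriented parsing.
# ask_llm is a hypothetical stand-in for a zero-shot LLM QA call; here it is
# stubbed with canned answers so the example is self-contained. A real system
# would use a model fine-tuned to abstain on unanswerable questions.
def ask_llm(question, context):
    canned = {
        "What does the user want to do?": "CREATE_REMINDER",
        "What should the reminder be about?": "the meeting",
        "When should the reminder fire?": "at 5 pm",
        "Who should be reminded?": None,  # unanswerable -> missing slot
    }
    return canned.get(question)

def parse(utterance, intent_question, slot_questions):
    """Build a TOP-style meaning representation from per-intent/per-slot QA."""
    intent = ask_llm(intent_question, utterance)
    slots = []
    for slot_name, question in slot_questions.items():
        answer = ask_llm(question, utterance)
        if answer is not None:  # unanswerable question -> slot is absent
            slots.append(f"[SL:{slot_name} {answer}]")
    return f"[IN:{intent} {' '.join(slots)}]"

utterance = "Remind me about the meeting at 5 pm"
slot_questions = {
    "TODO": "What should the reminder be about?",
    "DATE_TIME": "When should the reminder fire?",
    "PERSON_REMINDED": "Who should be reminded?",
}
print(parse(utterance, "What does the user want to do?", slot_questions))
# -> [IN:CREATE_REMINDER [SL:TODO the meeting] [SL:DATE_TIME at 5 pm]]
```

Note how the unanswerable question ("Who should be reminded?") is dropped rather than hallucinated into a slot, which is exactly the failure mode the paper's negative-sample fine-tuning targets.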

