Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)

01/25/2023
by Daking Rai, et al.

While large language models (LLMs) have demonstrated strong capability in structured prediction tasks such as semantic parsing, little research has explored the underlying mechanisms of their success. Our work studies different methods for explaining an LLM-based semantic parser and qualitatively discusses the explained model behaviors, hoping to inspire future research toward a better understanding of these models.
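
As a concrete illustration of one such explanation method, the sketch below computes gradient-times-input saliency for a T5-style seq2seq parser. This is a minimal sketch, not the paper's exact setup: the checkpoint name, the text-to-SQL example, and the choice of saliency variant are all illustrative assumptions (a real study would use a parser fine-tuned on the target dataset).

```python
# Minimal sketch of one common explanation method, gradient-times-input
# saliency, applied to a seq2seq semantic parser. The checkpoint, the
# text-to-SQL example, and the saliency variant are illustrative
# assumptions, not the setup used in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # assumption: stand-in for a parser fine-tuned on text-to-SQL
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

question = "How many heads of the departments are older than 56?"
target_sql = "SELECT count(*) FROM head WHERE age > 56"

inputs = tokenizer(question, return_tensors="pt")
labels = tokenizer(target_sql, return_tensors="pt").input_ids

# Embed the question ourselves so gradients can flow back to the embeddings.
embeds = model.get_input_embeddings()(inputs.input_ids).detach()
embeds.requires_grad_(True)

# Loss is the negative log-likelihood of the gold parse given the question.
loss = model(inputs_embeds=embeds,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()

# Gradient-times-input, summed over the embedding dimension, gives a
# per-token importance score for the input question.
saliency = (embeds.grad * embeds).sum(dim=-1).abs().squeeze(0)
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs.input_ids[0]),
                        saliency.tolist()):
    print(f"{token:>12}  {score:.4f}")
```

Higher scores flag the question tokens that most influenced the likelihood of the gold parse; attention-based or perturbation-based explanation methods can be slotted into the same loop in place of the gradient computation.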

Related Research

10/05/2018
TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
We present TRANX, a transition-based neural semantic parser that maps na...

01/30/2023
On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex
Semantic parsing is a technique aimed at constructing a structured repre...

04/29/2022
Training Naturalized Semantic Parsers with Very Little Data
Semantic parsing is an important NLP problem, particularly for voice ass...

06/21/2022
BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing
We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Mo...

05/05/2019
Explaining Cybersecurity with Films and the Arts (Extended Abstract)
Explaining Cybersecurity with Films and the Arts...

11/16/2022
Towards Computationally Verifiable Semantic Grounding for Language Models
The paper presents an approach to semantic grounding of language models ...
07/06/2023
Agentivity and Telicity in GilBERTo: Cognitive Implications
The goal of this study is to investigate whether a Transformer-based neu...
