Few-Shot Table-to-Text Generation with Prompt-based Adapter

02/24/2023
by Zhixin Guo, et al.

Pre-trained language models (PLMs) have made remarkable progress in table-to-text generation tasks. However, the topological gap between tabular data and text, together with the lack of domain-specific knowledge, makes it difficult for PLMs to produce faithful text, especially in real-world applications with limited resources. In this paper, we mitigate these challenges by introducing a novel augmentation method, the Prompt-based Adapter (PA), which targets table-to-text generation under few-shot conditions. The core idea of the PA is to inject prompt templates that augment domain-specific knowledge and table-related representations into the model, bridging the structural gap between tabular data and descriptions through adapters. This prompt-based knowledge augmentation brings at least two benefits: (1) it lets us fully exploit large amounts of unlabelled domain-specific knowledge, alleviating the PLMs' inherent lack of domain knowledge; (2) it allows us to design different types of tasks that support the generation challenge. Extensive experiments and analyses are conducted on three open-domain few-shot NLG datasets: Humans, Books, and Songs. Compared to previous state-of-the-art approaches, our model achieves superior fluency and accuracy as judged by both human and automatic evaluations.
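As an illustration of the prompt-template idea the abstract describes, the minimal sketch below linearizes a table of attribute-value pairs into a prompt string that a PLM could consume, with a domain tag standing in for domain-specific knowledge injection. The function name, template wording, and domain tag are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of prompt-style table linearization for few-shot
# table-to-text generation. All names and template wording here are
# illustrative assumptions, not the Prompt-based Adapter's real code.

def linearize_table(rows, domain="Humans"):
    """Turn attribute-value pairs into a prompt a PLM can consume."""
    facts = "; ".join(f"{attr} is {value}" for attr, value in rows)
    # A domain tag acts as a lightweight prompt for domain knowledge.
    return f"[{domain}] Describe the entity where {facts}."

prompt = linearize_table([("name", "Ada Lovelace"),
                          ("occupation", "mathematician")])
print(prompt)
# [Humans] Describe the entity where name is Ada Lovelace; occupation is mathematician.
```

In a full pipeline, a string like this would be fed to a pre-trained encoder-decoder model, while adapter layers fine-tuned on the few-shot data handle the mapping from the linearized structure to fluent descriptions.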

Related research:
- 02/09/2023: Few-Shot Table-to-Text Generation with Prompt Planning and Knowledge Memorization
- 08/23/2022: Few-Shot Table-to-Text Generation with Prefix-Controlled Generator
- 08/27/2021: Few-Shot Table-to-Text Generation with Prototype Memory
- 09/12/2023: Text Encoders Lack Knowledge: Leveraging Generative LLMs for Domain-Specific Semantic Textual Similarity
- 03/16/2022: TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge
- 12/01/2022: CliMedBERT: A Pre-trained Language Model for Climate and Health-related Text
- 09/06/2023: Knowledge Solver: Teaching LLMs to Search for Domain Knowledge from Knowledge Graphs
