GenSF: Simultaneous Adaptation of Generative Pre-trained Models and Slot Filling

06/13/2021
by Shikib Mehri, et al.

In transfer learning, it is imperative to achieve strong alignment between a pre-trained model and a downstream task. Prior work has done this by proposing task-specific pre-training objectives, which sacrifices the inherent scalability of the transfer learning paradigm. We instead achieve strong alignment by simultaneously modifying both the pre-trained model and the formulation of the downstream task, which is more efficient and preserves the scalability of transfer learning. We present GenSF (Generative Slot Filling), which leverages a generative pre-trained open-domain dialog model for slot filling. GenSF (1) adapts the pre-trained model by incorporating inductive biases about the task and (2) adapts the downstream task by reformulating slot filling to better leverage the pre-trained model's capabilities. GenSF achieves state-of-the-art results on two slot filling datasets with strong gains in few-shot and zero-shot settings. We achieve a 9 F1 score improvement in zero-shot slot filling. This highlights the value of strong alignment between the pre-trained model and the downstream task.
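The abstract's core idea, reformulating slot filling so a generative dialog model can solve it by completing text, can be sketched in a few lines. The prompt template, helper names, and the stub `toy_generate` model below are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Sketch: recast a slot-filling query as a dialog continuation, so a
# generative pre-trained dialog model fills the slot by generating text.

def slot_to_prompt(utterance: str, slot: str) -> str:
    """Phrase the slot query as a natural system response to complete."""
    return f"User: {utterance}\nSystem: The {slot} is"

def fill_slot(generate, utterance: str, slot: str) -> str:
    """Run the generative model on the prompt and read the completion
    as the slot value (trailing punctuation stripped, a simplification)."""
    completion = generate(slot_to_prompt(utterance, slot))
    return completion.strip().rstrip(".").strip()

# Stand-in for a pre-trained dialog model's generation call.
def toy_generate(prompt: str) -> str:
    if "time" in prompt:
        return " 7 pm."
    return " unknown."

print(fill_slot(toy_generate, "Book a table for 7 pm", "time"))  # -> 7 pm
```

Because the query is expressed in the same conversational form the model was pre-trained on, no slot-specific classifier head is needed, which is what makes the zero-shot setting possible.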


Related research

09/17/2022
From Disfluency Detection to Intent Detection and Slot Filling
We present the first empirical study investigating the influence of disf...

12/12/2018
Recurrent Neural Networks with Pre-trained Language Model Embedding for Slot Filling Task
In recent years, Recurrent Neural Networks (RNNs) based models have been...

03/26/2023
Δ-Networks for Efficient Model Patching
Models pre-trained on large-scale datasets are often finetuned to suppor...

11/24/2020
Zero-Shot Visual Slot Filling as Question Answering
This paper presents a new approach to visual zero-shot slot filling. The...

08/30/2022
Deep Generative Modeling on Limited Data with Regularization by Nontransferable Pre-trained Models
Deep generative models (DGMs) are data-eager. Essentially, it is because...

07/04/2023
Knowledge-Aware Audio-Grounded Generative Slot Filling for Limited Annotated Data
Manually annotating fine-grained slot-value labels for task-oriented dia...

03/24/2023
Toward Open-domain Slot Filling via Self-supervised Co-training
Slot filling is one of the critical tasks in modern conversational syste...
