Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking

11/17/2022
by Jihyun Lee, et al.

In dialogue state tracking (DST), labeling the dataset involves considerable human labor. We propose a new self-training framework for few-shot generative DST that utilizes unlabeled data. Our self-training method iteratively improves the model through pseudo-labeling and employs Purpose Preserving Augmentation (PPAug) to prevent overfitting. In the 10% few-shot setting, our approach improves performance by approximately 4 points over the baseline.
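To make the general recipe concrete, the following is a minimal sketch of a self-training loop with confidence-filtered pseudo-labeling and an augmentation hook, written in Python. The function names (train_fn, predict_fn, augment_fn), the number of rounds, and the confidence threshold are illustrative assumptions; they stand in for, but do not reproduce, the paper's PPAug procedure or training configuration.

from typing import Callable, Iterable, List, Tuple

def self_train(
    model,
    labeled: List[Tuple[str, dict]],     # (dialogue, gold dialogue state) pairs
    unlabeled: Iterable[str],            # dialogues without annotations
    train_fn: Callable,                  # train_fn(model, dataset) -> trained model
    predict_fn: Callable,                # predict_fn(model, dialogue) -> (state, confidence)
    augment_fn: Callable[[str], str],    # augmentation hook (stands in for PPAug; hypothetical)
    rounds: int = 3,
    threshold: float = 0.9,
):
    """Generic self-training: retrain, pseudo-label confident dialogues, repeat."""
    pool = list(unlabeled)
    train_set = list(labeled)
    for _ in range(rounds):
        model = train_fn(model, train_set)
        pseudo = []
        for dialogue in pool:
            state, confidence = predict_fn(model, dialogue)
            if confidence >= threshold:           # keep only confident pseudo-labels
                pseudo.append((dialogue, state))
                # augmented copy shares the pseudo-label to discourage overfitting
                pseudo.append((augment_fn(dialogue), state))
        train_set = list(labeled) + pseudo        # rebuild each round so examples do not accumulate duplicates
    return model

In this sketch, each round retrains the model on the labeled data plus the current pseudo-labeled set, then re-labels the unlabeled pool with the improved model; the augmented copies inherit the same pseudo-label so the model sees varied surface forms of the same dialogue purpose.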

