
mcBERT: Momentum Contrastive Learning with BERT for Zero-Shot Slot Filling

by   Seong-Hwan Heo, et al.

Zero-shot slot filling has received considerable attention as a way to cope with the scarcity of labeled data in target domains. A key factor in zero-shot learning is getting the model to learn generalized, reliable representations. To this end, we present mcBERT, which stands for momentum contrastive learning with BERT, a robust zero-shot slot-filling model. mcBERT initializes its two encoders, a query encoder and a key encoder, from BERT and trains them with momentum contrastive learning. Experiments on the SNIPS benchmark show that mcBERT substantially outperforms previous models, setting a new state of the art. We also show that each component of mcBERT contributes to the performance improvement.
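To make the two-encoder setup concrete, the sketch below illustrates the momentum update at the heart of MoCo-style momentum contrastive learning, which the abstract says mcBERT applies to its BERT-initialized query and key encoders. The function names, the momentum coefficient, and the toy parameters are illustrative assumptions, not details from the paper; real encoders would be full BERT models updated this way after each gradient step.

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.9):
    """Update the key encoder as an exponential moving average of the
    query encoder: k <- m * k + (1 - m) * q (MoCo-style momentum update).
    The momentum m is a hyperparameter; 0.9 here is an arbitrary choice."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

# Both encoders start from the same initialization (BERT, in mcBERT's case).
query = [np.ones(3)]
key = [np.ones(3)]

# Suppose a gradient step moves the query encoder's parameters...
query = [np.full(3, 2.0)]

# ...then the key encoder drifts slowly toward it, rather than copying it,
# which keeps the contrastive targets (keys) slowly evolving and consistent.
key = momentum_update(key, query, m=0.9)
# key[0] is now 0.9 * 1.0 + 0.1 * 2.0 = 1.1 in each coordinate
```

Only the query encoder receives gradients; the key encoder is updated purely through this moving average, which is what distinguishes momentum contrastive learning from training two encoders jointly.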
