C3: Continued Pretraining with Contrastive Weak Supervision for Cross Language Ad-Hoc Retrieval

04/25/2022
by Eugene Yang, et al.

Pretrained language models have improved effectiveness on numerous tasks, including ad-hoc retrieval. Recent work has shown that continuing to pretrain a language model with auxiliary objectives before fine-tuning on the retrieval task can further improve retrieval effectiveness. Unlike in monolingual retrieval, however, it is challenging to design an auxiliary task that teaches the cross-language mappings needed for cross-language retrieval. To address this challenge, we use comparable Wikipedia articles in different languages to further pretrain off-the-shelf multilingual pretrained models before fine-tuning on the retrieval task. We show that our approach yields improvements in retrieval effectiveness.
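
As a rough illustration of the idea, the sketch below shows one way such contrastive continued pretraining on comparable Wikipedia articles could look: aligned articles in two languages act as positive pairs, and the other articles in the batch serve as negatives under an InfoNCE loss. This is a minimal sketch under assumptions of our own, not the authors' implementation; the choice of xlm-roberta-base, mean pooling, the temperature, the learning rate, and the toy article pairs are all illustrative.

# Minimal sketch (not the authors' code): contrastive continued pretraining
# on comparable Wikipedia article pairs with in-batch negatives (InfoNCE).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_name = "xlm-roberta-base"   # assumed off-the-shelf multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def encode(texts):
    # Mean-pool the final hidden states into one vector per text.
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    hidden = model(**batch).last_hidden_state             # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

def contrastive_step(src_texts, tgt_texts, temperature=0.05):
    # Aligned articles are positives; every other article in the batch
    # serves as a negative for the InfoNCE (cross-entropy) loss.
    q = F.normalize(encode(src_texts), dim=-1)
    d = F.normalize(encode(tgt_texts), dim=-1)
    logits = q @ d.T / temperature         # (B, B) similarity matrix
    labels = torch.arange(len(src_texts))  # diagonal entries are true pairs
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch of two hypothetical comparable article pairs (English / Russian).
loss = contrastive_step(
    ["Informatics is the study of computation and information.",
     "The Eiffel Tower is a wrought-iron lattice tower in Paris."],
    ["Информатика изучает вычисления и обработку информации.",
     "Эйфелева башня -- это решётчатая башня из кованого железа в Париже."],
)
print(f"contrastive loss: {loss:.4f}")

Using the other articles in the batch as negatives avoids mining explicit negative pairs, which keeps weakly supervised data such as comparable Wikipedia articles cheap to exploit; the paper's actual training details may differ.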
