When to Fold'em: How to answer Unanswerable questions

05/01/2021 ∙ by Marshall Ho, et al.

We present three question-answering models trained on the SQuAD 2.0 dataset – BiDAF, DocumentQA, and ALBERT Retro-Reader – demonstrating the improvement of language models over the past three years. Through our research in fine-tuning pre-trained models for question answering, we developed a novel approach capable of achieving a 2× speed-up in training time. Our method of re-initializing select layers of a parameter-shared language model is simple yet empirically powerful.
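The re-initialization idea can be sketched as follows. This is a minimal, hypothetical illustration (the names and structure are assumptions, not the paper's code): in a parameter-shared model such as ALBERT, every layer reuses one set of weights, and "re-initializing select layers" is modeled here as giving the chosen layers their own freshly initialized copy before fine-tuning.

```python
import random

def init_weights(size, seed=None):
    """Fresh weights drawn from a small Gaussian (typical init scale)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 0.02) for _ in range(size)]

class SharedEncoder:
    """Toy parameter-shared encoder: all layers alias one weight blob."""
    def __init__(self, num_layers=4, size=8):
        self.shared = init_weights(size, seed=0)
        self.layers = [self.shared] * num_layers  # every layer shares params

    def reinit_layers(self, indices, size=8):
        """Break sharing for the selected layers with fresh parameters."""
        for i in indices:
            self.layers[i] = init_weights(size)

enc = SharedEncoder()
assert all(layer is enc.shared for layer in enc.layers)
enc.reinit_layers([2, 3])  # e.g. re-initialize the top two layers
assert enc.layers[0] is enc.shared      # untouched layers still shared
assert enc.layers[2] is not enc.shared  # selected layers now independent
```

In a real setup the same pattern would apply to a pre-trained checkpoint: load the shared weights, then overwrite the parameters of the selected layers with a standard initialization before starting fine-tuning.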


