Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering

08/22/2019
by Zhiguo Wang, et al.

BERT has been successfully applied to open-domain QA tasks. However, previous work trains BERT by treating passages corresponding to the same question as independent training instances, which may produce incomparable scores for answers from different passages. To tackle this issue, we propose a multi-passage BERT model that globally normalizes answer scores across all passages of the same question; this change enables our QA model to find better answers by utilizing more passages. In addition, we find that splitting articles into passages of 100 words with a sliding window improves performance by 4%, and that leveraging a passage ranker to select high-quality passages gains multi-passage BERT an additional 2%. Experiments on four standard benchmarks showed that our multi-passage BERT outperforms all state-of-the-art models on all benchmarks. In particular, on the OpenSQuAD dataset, our model gains 21.4% EM and 21.5% F_1 over all non-BERT models, and 5.8% EM and 6.5% F_1 over BERT-based models.
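
The two mechanisms described above, sliding-window passage splitting and globally normalized answer scoring, can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the 50-word stride, the max_span_len cap, and the function names are made up for the example; only the 100-word passage length and the idea of a single softmax shared across all passages of a question come from the abstract.

import numpy as np

def split_into_passages(article_words, passage_len=100, stride=50):
    # Slide a window over the article; the 100-word passage length follows the
    # abstract, while the 50-word stride is an assumption for this sketch.
    passages = []
    for start in range(0, max(len(article_words) - passage_len, 0) + 1, stride):
        passages.append(article_words[start:start + passage_len])
    return passages

def globally_normalized_best_span(start_logits, end_logits, max_span_len=30):
    # start_logits / end_logits: one 1-D array per passage of the same question.
    # Rather than applying a softmax within each passage, collect raw span
    # scores from every passage and normalize them with a single softmax, so
    # answer scores from different passages are directly comparable.
    candidates = []  # (passage_idx, start, end, raw score)
    for p, (s_log, e_log) in enumerate(zip(start_logits, end_logits)):
        for i in range(len(s_log)):
            for j in range(i, min(i + max_span_len, len(e_log))):
                candidates.append((p, i, j, s_log[i] + e_log[j]))
    scores = np.array([c[3] for c in candidates])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()  # one softmax over all passages, not one per passage
    best = int(np.argmax(probs))
    p, i, j, _ = candidates[best]
    return p, i, j, float(probs[best])

# Toy usage: random logits for three 100-token passages of one question.
rng = np.random.default_rng(0)
starts = [rng.normal(size=100) for _ in range(3)]
ends = [rng.normal(size=100) for _ in range(3)]
print(globally_normalized_best_span(starts, ends))

For a question with several retrieved passages, feeding the per-passage start and end logits from the reader into globally_normalized_best_span returns a single best span together with a probability that is comparable across passages, which is the property the per-passage training setup lacks.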
