Ensembling Strategies for Answering Natural Questions

10/30/2019
by Anthony Ferritto, et al.

Many of the top question answering systems today use ensembling to improve their performance on benchmarks such as the Stanford Question Answering Dataset (SQuAD) and Natural Questions (NQ). Unfortunately, most of these systems do not publish the ensembling strategies behind their leaderboard submissions. In this work, we investigate a number of ensembling techniques and demonstrate a strategy that improves our short-answer F1 score on the NQ dev set by 2.3 points over our single model (which itself outperforms the previous SOTA by 1.9 F1 points).
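The abstract does not spell out the ensembling strategy itself. As a rough illustration of what ensembling for span-based QA typically involves, the sketch below averages candidate answer-span scores across several models and picks the highest-scoring span. This is a generic score-averaging baseline, not necessarily the strategy proposed in the paper; the function and variable names (ensemble_spans, per_model_predictions, model_a, etc.) are hypothetical.

```python
from collections import defaultdict

def ensemble_spans(per_model_predictions):
    """Combine candidate answer spans from several QA models.

    per_model_predictions: list with one dict per model, each mapping a
    (start, end) token span to that model's score for the span.
    Returns the span with the highest score averaged over all models.
    """
    totals = defaultdict(float)
    for predictions in per_model_predictions:
        for span, score in predictions.items():
            totals[span] += score

    # Divide by the number of models, so spans proposed by only a few
    # models are implicitly penalized relative to widely agreed-on spans.
    n_models = len(per_model_predictions)
    averaged = {span: total / n_models for span, total in totals.items()}
    return max(averaged, key=averaged.get)

# Example: three models scoring candidate (start, end) token spans.
model_a = {(10, 14): 0.82, (30, 33): 0.41}
model_b = {(10, 14): 0.77, (55, 58): 0.35}
model_c = {(10, 14): 0.69, (30, 33): 0.52}
print(ensemble_spans([model_a, model_b, model_c]))  # -> (10, 14)
```

Other common variants swap the averaging step for majority voting over predicted spans or weight each model's scores by its dev-set performance.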
