A BERT Baseline for the Natural Questions

01/24/2019
by Chris Alberti, et al.

This technical note describes a new baseline for the Natural Questions. Our model is based on BERT and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks, respectively. This baseline has been submitted to the official NQ leaderboard at ai.google.com/research/NaturalQuestions and we plan to open-source the code for it in the near future.
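The note above gives only a results-level summary, so as a rough illustration, here is a minimal sketch of the kind of model it describes: a BERT encoder over a [question, document] pair with small output heads that score short-answer start/end positions and an answer type. The Hugging Face transformers API, the bert-base-uncased checkpoint, and the five answer-type classes are assumptions made for illustration, not details taken from this note or the authors' released code.

```python
# Minimal sketch (not the authors' code) of a BERT reader for NQ:
# BERT encodes a [question, document] pair; linear heads score
# short-answer start/end tokens and a per-example answer type.
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast


class BertForNaturalQuestions(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_answer_types=5):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # One logit per token for short-answer start and end positions.
        self.span_outputs = nn.Linear(hidden, 2)
        # Answer-type head over the pooled [CLS] vector; five classes
        # (e.g. null / yes / no / short / long) assumed here.
        self.type_output = nn.Linear(hidden, num_answer_types)

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        logits = self.span_outputs(out.last_hidden_state)  # (batch, seq, 2)
        start_logits, end_logits = logits.unbind(dim=-1)   # each (batch, seq)
        type_logits = self.type_output(out.pooler_output)  # (batch, num_types)
        return start_logits, end_logits, type_logits


# Example usage with a hypothetical question/document pair.
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
enc = tok("who founded google",
          "Google was founded by Larry Page and Sergey Brin.",
          return_tensors="pt")
model = BertForNaturalQuestions()
start_logits, end_logits, type_logits = model(**enc)
```

At inference time, a model of this shape would pick the highest-scoring valid (start, end) span and use the answer-type logits to decide whether to emit a short answer, a yes/no answer, or no answer at all; the exact decoding and long-answer handling in the actual baseline may differ.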

Related research

04/30/2020
RikiNet: Reading Wikipedia Pages for Natural Question Answering
Reading long documents to answer open-domain questions remains challengi...

08/24/2019
BERT for Coreference Resolution: Baselines and Analysis
We apply BERT to coreference resolution, achieving strong improvements o...

05/12/2021
Better than BERT but Worse than Baseline
This paper compares BERT-SQuAD and Ab3P on the Abbreviation Definition I...

05/26/2020
What Are People Asking About COVID-19? A Question Classification Dataset
We present COVID-Q, a set of 1,690 questions about COVID-19 from 13 sour...

10/19/2020
Question Generation for Supporting Informational Query Intents
Users frequently ask simple factoid questions when encountering question...

04/12/2020
AMR Parsing via Graph-Sequence Iterative Inference
We propose a new end-to-end model that treats AMR parsing as a series of...

04/04/2023
GPT-4 to GPT-3.5: 'Hold My Scalpel' – A Look at the Competency of OpenAI's GPT on the Plastic Surgery In-Service Training Exam
The Plastic Surgery In-Service Training Exam (PSITE) is an important ind...
