
When to Fold'em: How to answer Unanswerable questions

05/01/2021
by Marshall Ho et al.

We present three question-answering models trained on the SQuAD2.0 dataset – BiDAF, DocumentQA, and ALBERT Retro-Reader – demonstrating how language models have improved over the past three years. Through our research into fine-tuning pre-trained models for question answering, we developed a novel approach capable of achieving a 2× reduction in training time. Our method of re-initializing select layers of a parameter-shared language model is simple yet empirically powerful.
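To make the layer re-initialization idea concrete, the sketch below shows one plausible way to reset parts of a parameter-shared model before fine-tuning. It is a minimal, hypothetical example using the HuggingFace transformers library and the albert-base-v2 checkpoint; the choice of which weights to reset (here, the feed-forward sub-layers of ALBERT's shared transformer block) is an assumption for illustration, not necessarily the paper's exact recipe.

```python
import torch
from transformers import AlbertForQuestionAnswering

# Load a parameter-shared pre-trained model (ALBERT reuses one transformer
# block across all layers). Checkpoint name is illustrative.
model = AlbertForQuestionAnswering.from_pretrained("albert-base-v2")

def reinit(module: torch.nn.Module, std: float = 0.02) -> None:
    """Re-draw weights from the normal(0, std) init ALBERT uses at pre-training."""
    for m in module.modules():
        if isinstance(m, torch.nn.Linear):
            m.weight.data.normal_(mean=0.0, std=std)
            if m.bias is not None:
                m.bias.data.zero_()
        elif isinstance(m, torch.nn.LayerNorm):
            m.weight.data.fill_(1.0)
            m.bias.data.zero_()

# Because parameters are shared, ALBERT holds a single physical layer, so
# re-initializing part of it affects every virtual layer at once. Here we
# reset the feed-forward sub-layers (an assumed reading of "select layers").
shared = model.albert.encoder.albert_layer_groups[0].albert_layers[0]
reinit(shared.ffn)
reinit(shared.ffn_output)

# The span-prediction head (qa_outputs) is already freshly initialized by
# from_pretrained, so fine-tuning on SQuAD2.0 then proceeds as usual.
```

Note that in a parameter-shared model like ALBERT, "re-initializing select layers" necessarily touches weights used at every depth; in a BERT-style model without sharing, one could instead reset only the top-k layers independently.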


Related research

Utilizing Bidirectional Encoder Representations from Transformers for Answer Selection (11/14/2020)
Pre-training a transformer-based model for the language modeling task in...

Pre-trained Language Model for Biomedical Question Answering (09/18/2019)
The recent success of question answering systems is largely attributed t...

Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network (04/20/2020)
In this paper, we present a two stage model for multi-hop question answe...

How Much Knowledge Can You Pack Into the Parameters of a Language Model? (02/10/2020)
It has recently been observed that neural language models trained on uns...

Sanitizing Synthetic Training Data Generation for Question Answering over Knowledge Graphs (09/10/2020)
Synthetic data generation is important to training and evaluating neural...

A Novel DeBERTa-based Model for Financial Question Answering Task (07/12/2022)
As a rising star in the field of natural language processing, question a...