
Utilizing Bidirectional Encoder Representations from Transformers for Answer Selection

11/14/2020
by Md Tahmid Rahman Laskar, et al.

Pre-training a transformer-based model on the language modeling task over a large dataset and then fine-tuning it for downstream tasks has proven very useful in recent years. One major advantage of such pre-trained language models is that they can effectively capture the context of each word in a sentence. However, pre-trained language models have not yet been used extensively for tasks such as answer selection. To investigate their effectiveness in such tasks, in this paper we adopt the pre-trained Bidirectional Encoder Representations from Transformers (BERT) language model and fine-tune it on two Question Answering (QA) datasets and three Community Question Answering (CQA) datasets for the answer selection task. We find that fine-tuning the BERT model for the answer selection task is very effective and observe a maximum improvement of 13.1% over the previous state-of-the-art.
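A minimal sketch of this setup, not the authors' released code: each question is paired with a candidate answer as a BERT sentence pair, the model is fine-tuned as a binary classifier over those pairs, and candidates are ranked at inference by the positive-class probability. It assumes PyTorch and Hugging Face Transformers; the toy example pairs, hyperparameters, and dataset wrapper below are illustrative, not the paper's exact configuration or data.

```python
# Sketch: fine-tuning BERT for answer selection as sentence-pair classification.
# Label 1 = candidate answers the question, 0 = it does not; rank by P(label=1).
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

class QAPairDataset(Dataset):
    """(question, candidate answer, label) triples encoded as BERT sentence pairs."""
    def __init__(self, pairs, tokenizer, max_len=128):
        self.pairs, self.tokenizer, self.max_len = pairs, tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        question, candidate, label = self.pairs[idx]
        enc = self.tokenizer(question, candidate, truncation=True,
                             padding="max_length", max_length=self.max_len,
                             return_tensors="pt")
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(label, dtype=torch.long)
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy training pairs; a real run would use a QA/CQA answer-selection dataset.
train_pairs = [
    ("Who wrote Hamlet?", "Hamlet was written by William Shakespeare.", 1),
    ("Who wrote Hamlet?", "The Globe Theatre is located in London.", 0),
]
loader = DataLoader(QAPairDataset(train_pairs, tokenizer), batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:
    optimizer.zero_grad()
    loss = model(**batch).loss  # cross-entropy over the two labels
    loss.backward()
    optimizer.step()

# Inference: score one candidate; in practice, score all candidates per question
# and sort them by this probability to select the best answer.
model.eval()
with torch.no_grad():
    enc = tokenizer("Who wrote Hamlet?",
                    "Hamlet was written by William Shakespeare.",
                    return_tensors="pt")
    score = torch.softmax(model(**enc).logits, dim=-1)[0, 1].item()
print(f"relevance score: {score:.3f}")
```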


Related research

08/04/2019 · Exploring Neural Net Augmentation to BERT for Question Answering on SQUAD 2.0
Enhancing machine capabilities to answer questions has been a topic of c...

10/07/2021 · A Comparative Study of Transformer-Based Language Models on Extractive Question Answering
Question Answering (QA) is a task in natural language processing that ha...

11/11/2019 · TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection
We propose TANDA, an effective technique for fine-tuning pre-trained Tra...

05/01/2021 · When to Fold'em: How to answer Unanswerable questions
We present 3 different question-answering models trained on the SQuAD2.0...

09/05/2022 · Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples
Recent advances in the development of large language models have resulte...

06/18/2021 · SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs
In this paper, we propose SPBERT, a transformer-based language model pre...