
Techniques to Improve Q&A Accuracy with Transformer-based Models on Large Complex Documents

09/26/2020
by Chejui Liao, et al.

This paper discusses the effectiveness of various text processing techniques, their combinations, and encodings in reducing the complexity and size of a given text corpus. The simplified corpus is then passed to BERT (or a similar transformer-based model) for question answering and can produce more relevant responses to user queries. The paper takes a scientific approach to determining the benefits and effectiveness of the individual techniques and concludes with a best-fit combination that yields a statistically significant improvement in accuracy.
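The abstract describes a two-stage pipeline: simplify the raw corpus first, then run extractive question answering over the reduced text. The sketch below illustrates that flow, assuming a HuggingFace `transformers` question-answering pipeline; the specific simplification steps shown (dropping short boilerplate lines, collapsing whitespace) are illustrative stand-ins, not the techniques evaluated in the paper.

```python
# Minimal sketch: simplify a large, noisy document, then pass the
# reduced context to a BERT-style extractive QA model.
# The cleanup rules here are placeholders for the paper's techniques.
import re
from transformers import pipeline


def simplify_corpus(text: str) -> str:
    """Reduce size and noise in the raw text before question answering."""
    lines = [line.strip() for line in text.splitlines()]
    # Drop empty lines and very short fragments (page numbers, headers).
    lines = [line for line in lines if len(line.split()) > 3]
    # Collapse repeated whitespace into single spaces.
    return re.sub(r"\s+", " ", " ".join(lines))


# BERT fine-tuned on SQuAD, used as the extractive QA reader.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

raw_document = """
Page 1 of 120
ACME Corp Annual Report
The company reported total revenue of 4.2 billion dollars in fiscal 2019,
driven primarily by growth in its cloud services division.
"""

context = simplify_corpus(raw_document)
result = qa(question="What was the total revenue in fiscal 2019?", context=context)
print(result["answer"], result["score"])
```

In a setup like this, the quality of the answer and its confidence score depend heavily on how well the preprocessing removes irrelevant text while keeping the answer-bearing passages, which is the trade-off the paper evaluates.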

