Exploring ways to incorporate additional knowledge to improve Natural Language Commonsense Question Answering

09/19/2019
by Arindam Mitra, et al.

DARPA and Allen AI have proposed a collection of datasets to encourage research in Question Answering domains where (commonsense) knowledge is expected to play an important role. Recent language models such as BERT and GPT, which have been pre-trained on Wikipedia articles and books, have shown decent performance with little fine-tuning on several such Multiple Choice Question-Answering (MCQ) datasets. Our goal in this work is to develop methods to incorporate additional (commonsense) knowledge into language-model-based approaches for better question answering in such domains. We first identify external knowledge sources and show that performance further improves when a set of facts retrieved through IR is prepended to each MCQ question during both the training and test phases. We then explore whether performance can be improved further by providing task-specific knowledge in different ways or by employing different strategies for using the available knowledge. We present three different modes of passing knowledge and five different models of using knowledge, including the standard BERT MCQ model. We also propose a novel architecture for situations where the information needed to answer an MCQ question is scattered over multiple knowledge sentences. Finally, we take 200 predictions from each of our best models and analyze how often the given knowledge is useful, how many times the knowledge is useful but the system fails to use it, and other metrics that indicate the scope for further improvement.
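
As a rough illustration of the simplest knowledge-passing mode described above (prepending IR-retrieved facts to the question before scoring each answer option with a BERT MCQ model), here is a minimal sketch using the Hugging Face transformers library. The facts, question, and answer options are made-up placeholders, and the snippet is not the authors' implementation; the classification head here is untrained, whereas the paper fine-tunes the model on the MCQ dataset.

```python
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

# Placeholder knowledge, question, and options (illustrative only).
retrieved_facts = ["A fox is a wild animal.", "Wild animals live in forests."]
question = "Where would a fox most likely make its home?"
options = ["a city street", "a forest", "a swimming pool", "a kitchen"]

# Prepend the IR-retrieved facts to the question, then pair the resulting
# context with each candidate answer.
context = " ".join(retrieved_facts) + " " + question
encoded = tokenizer(
    [context] * len(options),   # same knowledge+question context for every option
    options,                    # one candidate answer per row
    truncation=True,
    padding=True,
    return_tensors="pt",
)

# BertForMultipleChoice expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}
logits = model(**inputs).logits          # shape: (1, num_choices)
prediction = options[logits.argmax(dim=-1).item()]
print(prediction)
```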
