Unsupervised Natural Question Answering with a Small Model

by Martin Andrews, et al.

The recent (2019-02) demonstration that huge language models such as GPT-2 can memorise the answers to factoid questions raises the question of how much knowledge is embedded directly within the parameters of these large models. This short paper describes an architecture through which much smaller models can also answer such questions, by making use of 'raw' external knowledge. The contribution of this work is that the methods presented here rely on unsupervised learning techniques, complementing the unsupervised training of the Language Model. The goal of this line of research is to be able to add knowledge explicitly, without extensive training.




