Unsupervised Natural Question Answering with a Small Model

11/19/2019
by Martin Andrews, et al.

The recent (2019-02) demonstration of the power of huge language models such as GPT-2 to memorise the answers to factoid questions raises questions about the extent to which knowledge is embedded directly within these large models. This short paper describes an architecture through which much smaller models can also answer such questions, by making use of 'raw' external knowledge. The contribution of this work is that the methods presented rely on unsupervised learning techniques, complementing the unsupervised training of the Language Model. The goal of this line of research is to be able to add knowledge explicitly, without extensive training.
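The abstract does not spell out the pipeline, so the following is a minimal, hypothetical sketch of the general retrieve-then-read idea it gestures at: an unsupervised retriever (here, TF-IDF over raw text snippets) selects external knowledge for a small language model to condition on, rather than the model memorising facts in its parameters. The knowledge store, the retrieve function, and the example question are illustrative assumptions, not the authors' implementation.

# Sketch only: unsupervised retrieval of 'raw' external knowledge,
# so a small model need not store facts in its own weights.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical external knowledge store: raw text, no QA labels.
knowledge = [
    "Paris is the capital and largest city of France.",
    "The Amazon is the largest rainforest on Earth.",
    "Mount Everest is the highest mountain above sea level.",
]

# Fit the retriever without any supervision.
vectorizer = TfidfVectorizer().fit(knowledge)
knowledge_vecs = vectorizer.transform(knowledge)

def retrieve(question, k=1):
    """Return the k knowledge snippets most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, knowledge_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [knowledge[i] for i in top]

context = retrieve("What is the capital of France?")
# A small language model would then condition on `context` to produce
# the answer, instead of recalling the fact from its parameters.
print(context)

Any unsupervised retriever could stand in for TF-IDF here; the point of the sketch is only that the knowledge lives outside the model and can be added or replaced without retraining.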


Related research

02/10/2020 · How Much Knowledge Can You Pack Into the Parameters of a Language Model?
It has recently been observed that neural language models trained on uns...

09/24/2020 · Toward a Thermodynamics of Meaning
As language models such as GPT-3 become increasingly successful at gener...

11/04/2019 · BAS: An Answer Selection Method Using BERT Language Model
In recent years, Question Answering systems have become more popular and...

06/07/2023 · Enhancing In-Context Learning with Answer Feedback for Multi-Span Question Answering
Whereas the recent emergence of large language models (LLMs) like ChatGP...

06/12/2019 · Unsupervised Question Answering by Cloze Translation
Obtaining training data for Question Answering (QA) is time-consuming an...

09/18/2023 · What does ChatGPT know about natural science and engineering?
ChatGPT is a powerful language model from OpenAI that is arguably able t...

06/16/2023 · Learning to Summarize and Answer Questions about a Virtual Robot's Past Actions
When robots perform long action sequences, users will want to easily and...
