ArT: All-round Thinker for Unsupervised Commonsense Question-Answering

12/26/2021
by Jiawei Wang, et al.

Without labeled question-answer pairs for training, unsupervised commonsense question answering (QA) is extremely challenging because it has traditionally required a commonsense source such as a knowledge base (KB), which is highly resource-consuming to construct. Recently, pre-trained language models (PrLMs) have shown effectiveness as an alternative source of commonsense clues when used as knowledge generators. However, existing work either simply generates hundreds of pseudo-answers or performs template-based knowledge generation once and for all, which introduces considerable noise and limits the quality of the generated knowledge. Motivated by human thinking, we propose All-round Thinker (ArT), an approach that fully exploits association during knowledge generation. Specifically, our model first focuses on key parts of the given context and then generates highly related knowledge from them in an associative manner, much as humans do. In addition, for causal reasoning, a reverse thinking mechanism is proposed to infer bidirectionally between cause and effect. ArT is fully unsupervised and KB-free. We evaluate it on three commonsense QA benchmarks: COPA, SocialIQA, and SCT. Across all scales of PrLM backbones, ArT performs strongly and outperforms previous advanced unsupervised models.
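The abstract outlines a three-step pipeline: focus on key parts of the context, generate associated knowledge, then score candidate answers bidirectionally (cause to effect and effect to cause). The sketch below illustrates that flow only in structure; it is not the paper's implementation. In particular, `TOY_ASSOCIATIONS` and `overlap_score` are hypothetical stand-ins for the PrLM's knowledge generation and log-likelihood scoring, chosen so the example runs offline.

```python
from typing import List

STOPWORDS = {"the", "a", "an", "his", "her", "he", "she", "it", "was",
             "is", "in", "on", "of", "what", "got", "to"}

# Stand-in for PrLM-based association: maps a key word to a commonsense
# statement. In ArT this knowledge is generated by a language model; a
# lookup table keeps the sketch self-contained. (Assumption, not the
# paper's method.)
TOY_ASSOCIATIONS = {
    "toe": "a toe is part of a foot",
    "hammer": "a hammer is heavy and can hurt",
}

def key_parts(text: str) -> List[str]:
    """Step 1: focus on the key (content) words of the context."""
    return [w.strip(".?,").lower() for w in text.split()
            if w.strip(".?,").lower() not in STOPWORDS]

def generate_knowledge(parts: List[str]) -> str:
    """Step 2: produce association knowledge for the key parts."""
    return " ".join(TOY_ASSOCIATIONS.get(w, "") for w in parts).strip()

def overlap_score(context: str, answer: str) -> float:
    """Toy proxy for a PrLM plausibility score: content-word overlap."""
    c, a = set(key_parts(context)), set(key_parts(answer))
    return len(c & a) / (len(a) or 1)

def art_choose(premise: str, choices: List[str], causal: bool = True) -> str:
    """Step 3: score each choice forward and, for causal questions,
    in reverse (effect back to cause), then pick the best one."""
    knowledge = generate_knowledge(key_parts(premise))
    context = premise + " " + knowledge

    def total(ans: str) -> float:
        forward = overlap_score(context, ans)  # cause -> effect
        reverse = (overlap_score(ans + " " + knowledge, premise)
                   if causal else 0.0)         # effect -> cause
        return forward + reverse               # bidirectional combination

    return max(choices, key=total)
```

On a COPA-style item such as "The man broke his toe. What was the cause?", the reverse pass rewards an alternative whose consequences point back to the premise, which is the intuition behind the reverse thinking mechanism.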

Related research

11/24/2022
TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering
Unsupervised commonsense question answering requires mining effective co...

09/02/2022
Elaboration-Generating Commonsense Question Answering at Scale
In question answering requiring common sense, language models (e.g., GPT...

09/11/2021
Semantic Categorization of Social Knowledge for Commonsense Question Answering
Large pre-trained language models (PLMs) have led to great success on va...

05/31/2021
A Semantic-based Method for Unsupervised Commonsense Question Answering
Unsupervised commonsense question answering is appealing since it does n...

04/07/2023
Language Models are Causal Knowledge Extractors for Zero-shot Video Question Answering
Causal Video Question Answering (CVidQA) queries not only association or...

09/11/2020
An Atlas of Cultural Commonsense for Machine Reasoning
Existing commonsense reasoning datasets for AI and NLP tasks fail to add...

12/24/2020
REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement
When answering a question, people often draw upon their rich world knowl...
