Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation

01/04/2021
by   Ning Bian, et al.

A fundamental human ability is to utilize commonsense knowledge in language understanding and question answering. In recent years, many knowledge-enhanced Commonsense Question Answering (CQA) approaches have been proposed. However, it remains unclear: (1) How far can we get by exploiting external knowledge for CQA? (2) How much of the potential of knowledge has been exploited in current CQA models? (3) What are the most promising directions for future CQA? To answer these questions, we benchmark knowledge-enhanced CQA by conducting extensive experiments on multiple standard CQA datasets using a simple and effective knowledge-to-text transformation framework. Experiments show that: (1) our knowledge-to-text framework is effective and achieves state-of-the-art performance on the CommonsenseQA dataset, providing a simple and strong knowledge-enhanced baseline for CQA; (2) the potential of knowledge is still far from being fully exploited in CQA, as there is a significant performance gap between current models and our models with golden knowledge; and (3) context-sensitive knowledge selection, heterogeneous knowledge exploitation, and commonsense-rich language models are promising directions for future CQA.
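To make the idea of a knowledge-to-text transformation concrete, the following is a minimal sketch of the general pattern such frameworks follow: knowledge-graph triples relevant to the question are verbalized into natural-language sentences and prepended to the question as extra context for a language model. The relation templates and function names here are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a knowledge-to-text transformation (illustrative,
# not the paper's exact method): verbalize knowledge triples into
# sentences, then prepend them to the question for a language model.

# Hypothetical templates for a few ConceptNet-style relations.
TEMPLATES = {
    "AtLocation": "{head} is typically found at {tail}.",
    "UsedFor": "{head} is used for {tail}.",
    "IsA": "{head} is a kind of {tail}.",
}

def verbalize(triple):
    """Convert a (head, relation, tail) triple into a sentence."""
    head, relation, tail = triple
    template = TEMPLATES.get(relation, "{head} is related to {tail}.")
    return template.format(head=head, tail=tail)

def build_input(question, triples):
    """Prepend verbalized knowledge to the question as model input."""
    knowledge_text = " ".join(verbalize(t) for t in triples)
    return f"{knowledge_text} Question: {question}"

print(build_input(
    "Where would you find a bottle of water in an office?",
    [("bottle", "AtLocation", "refrigerator")],
))
```

The resulting string can be fed to any pretrained language model, which is what makes this style of approach a simple but strong baseline: the knowledge is injected purely at the text level, with no architectural changes.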


Related research

03/29/2023
ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models
Large language models (LLMs) such as ChatGPT and GPT-4 have made signifi...

09/02/2022
Elaboration-Generating Commonsense Question Answering at Scale
In question answering requiring common sense, language models (e.g., GPT...

09/20/2023
Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering
Despite their competitive performance on knowledge-intensive tasks, larg...

01/17/2022
Generalizable Neuro-symbolic Systems for Commonsense Question Answering
This chapter illustrates how suitable neuro-symbolic models for language...

09/01/2022
Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering?
Many contextualized word representations are now learned by intricate ne...

02/01/2021
Commonsense Knowledge Mining from Term Definitions
Commonsense knowledge has proven to be beneficial to a variety of applic...

05/11/2023
Overinformative Question Answering by Humans and Machines
When faced with a polar question, speakers often provide overinformative...
