Towards Generalizable Neuro-Symbolic Systems for Commonsense Question Answering

10/30/2019
by Kaixin Ma, et al.

Non-extractive commonsense QA remains a challenging AI task: systems must gather, synthesize, and reason over disparate pieces of information in order to generate responses to queries. Recent approaches to such tasks show increased performance only when models are pre-trained with additional information or when domain-specific heuristics are used, without any special consideration of the knowledge resource type. In this paper, we survey recent commonsense QA methods and provide a systematic analysis of popular knowledge resources and knowledge-integration methods across benchmarks from multiple commonsense datasets. Our results and analysis show that attention-based injection seems to be a preferable choice for knowledge integration, and that the degree of domain overlap between knowledge bases and datasets plays a crucial role in determining model success.
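To make the "attention-based injection" finding concrete, here is a minimal sketch of the general idea: a question representation attends over retrieved knowledge embeddings, and the weighted knowledge summary is fused back into the representation. All names and shapes here are hypothetical illustrations, not the exact architecture evaluated in the paper.

```python
# Sketch of attention-based knowledge injection (hypothetical shapes/names).
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def inject_knowledge(question_vec, knowledge_vecs):
    """Fuse retrieved knowledge embeddings into a question representation
    via dot-product attention over the knowledge facts."""
    scores = knowledge_vecs @ question_vec          # (num_facts,)
    weights = softmax(scores)                       # attention weights over facts
    context = weights @ knowledge_vecs              # weighted knowledge summary
    return np.concatenate([question_vec, context])  # fused representation

rng = np.random.default_rng(0)
q = rng.standard_normal(8)        # question embedding (dim 8)
kb = rng.standard_normal((5, 8))  # 5 retrieved knowledge embeddings
fused = inject_knowledge(q, kb)
print(fused.shape)  # (16,)
```

In practice the fusion step varies (concatenation, gating, or an extra transformer layer), but the common thread the survey identifies is letting the model softly select which knowledge facts matter via attention rather than hard-coding heuristics.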


Related research:

Multi-hop Commonsense Knowledge Injection Framework for Zero-Shot Commonsense Question Answering (05/10/2023)
Semantic Categorization of Social Knowledge for Commonsense Question Answering (09/11/2021)
Utilizing Background Knowledge for Robust Reasoning over Traffic Situations (12/04/2022)
Exploring and Analyzing Machine Commonsense Benchmarks (12/21/2020)
REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement (12/24/2020)
Controlling Search in Very Large Commonsense Knowledge Bases: A Machine Learning Approach (03/14/2016)
Materialized Knowledge Bases from Commonsense Transformers (12/29/2021)
