Generalizable Neuro-symbolic Systems for Commonsense Question Answering

01/17/2022
by Alessandro Oltramari, et al.

This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks. It discusses different methods for integrating neural language models with knowledge graphs, and characterizes the situations in which this combination is most appropriate, drawing on quantitative evaluation and qualitative error analysis across a variety of commonsense question answering benchmark datasets.
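
To make the integration pattern concrete, below is a minimal sketch of one common recipe of the kind the chapter surveys: retrieve knowledge-graph triples relevant to a question, verbalize them as natural-language statements, and let a pretrained language model score each answer choice in their context. The model ("gpt2"), the toy triple store, and the helper functions (retrieve_facts, verbalize, score) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of KG-augmented commonsense QA: verbalize knowledge-graph
# triples, prepend them to the input, and rank answer choices by likelihood.
# The model, the toy KG, and all helper names below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Toy stand-in for a commonsense KG such as ConceptNet: (head, relation, tail).
KG = [
    ("umbrella", "UsedFor", "staying dry in the rain"),
    ("rain", "Causes", "people to get wet"),
]

def retrieve_facts(question):
    """Naive retrieval: keep triples whose head term appears in the question."""
    return [t for t in KG if t[0] in question.lower()]

def verbalize(triples):
    """Turn KG triples into natural-language statements for the LM."""
    return " ".join(
        f"{h} is used for {t}." if r == "UsedFor" else f"{h} causes {t}."
        for h, r, t in triples
    )

def score(context, question, answer):
    """Average per-token negative log-likelihood of the full prompt."""
    prompt = f"{context} Question: {question} Answer: {answer}"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per token
    return loss.item()

question = "Why do people carry an umbrella when it rains?"
choices = ["to stay dry", "to get a tan", "to fly"]
facts = verbalize(retrieve_facts(question))
best = min(choices, key=lambda a: score(facts, question, a))
print(best)  # lowest NLL = most plausible under the KG-augmented prompt
```

Lower average negative log-likelihood marks the answer most plausible given the injected facts. Real systems typically swap the toy lookup for retrieval over a commonsense knowledge graph such as ConceptNet, and some fuse graph embeddings into the model itself rather than into the prompt text.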

Related research

09/02/2022
Elaboration-Generating Commonsense Question Answering at Scale
In question answering requiring common sense, language models (e.g., GPT...

08/25/2023
Rethinking Language Models as Symbolic Knowledge Graphs
Symbolic knowledge graphs (KGs) play a pivotal role in knowledge-centric...

02/01/2021
Commonsense Knowledge Mining from Term Definitions
Commonsense knowledge has proven to be beneficial to a variety of applic...

01/04/2021
Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation
A fundamental ability of humans is to utilize commonsense knowledge in l...

07/02/2020
Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge
Massive language models are the core of modern NLP modeling and have bee...

05/25/2023
BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering
Unsupervised commonsense reasoning (UCR) is becoming increasingly popula...

10/24/2020
Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
Symbolic knowledge (e.g., entities, relations, and facts in a knowledge ...
