Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations

11/26/2020
by Aditya Kalyanpur, et al.

Traditional symbolic reasoning engines, while attractive for their precision and explicability, have a few major drawbacks: the use of brittle inference procedures that rely on exact matching (unification) of logical terms, an inability to deal with uncertainty, and the need for a precompiled rule-base of knowledge (the "knowledge acquisition" problem). These issues are particularly severe for the Natural Language Understanding (NLU) task, where we often use implicit background knowledge to understand and reason about text, resort to fuzzy alignment of concepts and relations during reasoning, and constantly deal with ambiguity in representations. To address these issues, we devise a novel FOL-based reasoner, called Braid, that supports probabilistic rules and uses custom unification functions and dynamic rule generation to overcome the brittle-matching and knowledge-gap problems prevalent in traditional reasoners. In this paper, we describe the reasoning algorithms used in Braid-BC (the backchaining component of Braid) and their implementation in a distributed task-based framework that builds proof/explanation graphs for an input query in a scalable manner. We use a simple QA example from a children's story to motivate Braid-BC's design and explain how the various components work together to produce a coherent logical explanation.
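
To make the abstract's core ideas concrete, the sketch below shows how a backward-chaining prover over weighted rules might plug in a custom ("soft") unification function so that near-synonymous symbols still match, with each proof carrying a confidence score. This is only an illustrative sketch under stated assumptions, not the authors' Braid-BC implementation: the predicate names, toy story facts, synonym table, and multiplicative confidence combination are all invented for the example.

```python
# Illustrative sketch only (not the authors' Braid-BC code): a tiny
# backward-chaining prover over weighted Horn rules with a pluggable
# unification function that can softly match near-synonymous symbols.

def is_var(term):
    """Variables are strings starting with '?'."""
    return isinstance(term, str) and term.startswith("?")

def soft_match(a, b):
    """Hypothetical custom unifier for constants: exact match scores 1.0,
    a hand-coded synonym pair scores 0.8 (a real system might use
    embedding similarity here), anything else fails with 0.0."""
    if a == b:
        return 1.0
    synonyms = {frozenset({"happy", "glad"})}
    return 0.8 if frozenset({a, b}) in synonyms else 0.0

def unify(goal, pattern, subst, match=soft_match):
    """Unify two atoms (pred, args); return (new_subst, score) or None."""
    (gp, gargs), (pp, pargs) = goal, pattern
    if gp != pp or len(gargs) != len(pargs):
        return None
    subst, score = dict(subst), 1.0
    for g, p in zip(gargs, pargs):
        g, p = subst.get(g, g), subst.get(p, p)   # apply current bindings
        if is_var(p):
            subst[p] = g
        elif is_var(g):
            subst[g] = p
        else:
            s = match(g, p)
            if s == 0.0:
                return None
            score *= s
    return subst, score

def prove(goal, facts, rules, subst=None, depth=5):
    """Yield (bindings, confidence) for each proof of `goal` found by
    backward chaining; confidences are combined multiplicatively here
    purely for illustration."""
    subst = {} if subst is None else subst
    if depth == 0:
        return
    # Case 1: close the goal against a (possibly softly matched) fact.
    for fact_atom, fact_conf in facts:
        result = unify(goal, fact_atom, subst)
        if result:
            new_subst, score = result
            yield new_subst, score * fact_conf
    # Case 2: unify the goal with a weighted rule head, then prove the body.
    for head, body, rule_conf in rules:
        result = unify(goal, head, subst)
        if not result:
            continue
        branches = [(result[0], result[1] * rule_conf)]
        for atom in body:
            branches = [(s2, c1 * c2)
                        for s1, c1 in branches
                        for s2, c2 in prove(atom, facts, rules, s1, depth - 1)]
        yield from branches

# Toy knowledge base in the spirit of a children's-story QA query
# (the story content here is invented for illustration).
facts = [(("feels", ("Fernando", "glad")), 1.0),
         (("gave_gift_to", ("Zoey", "Fernando")), 1.0)]
rules = [(("feels", ("?x", "happy")),          # receiving a gift makes you happy
          [("gave_gift_to", ("?y", "?x"))], 0.9)]

for bindings, conf in prove(("feels", ("Fernando", "happy")), facts, rules):
    print(round(conf, 2), bindings)
# 0.8 {}                                <- 'happy' soft-unified with the fact's 'glad'
# 0.9 {'?x': 'Fernando', '?y': 'Zoey'}  <- proved via the gift rule
```

Swapping soft_match for strict equality recovers classical unification. The dynamic rule generation described in the abstract would roughly correspond to adding new entries to rules on demand when a goal cannot otherwise be proved; this sketch does not attempt that.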
