Neuro-symbolic Architectures for Context Understanding

03/09/2020
by Alessandro Oltramari, et al.

Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. However, while data-driven methods seek to model the statistical regularities of events by making observations in the real world, they remain difficult to interpret and lack mechanisms for naturally incorporating external knowledge. Conversely, knowledge-driven methods combine structured knowledge bases, perform symbolic reasoning based on axiomatic principles, and are more interpretable in their inferential processing; however, they often lack the ability to estimate the statistical salience of an inference. To combat these issues, we propose the use of hybrid AI methodology as a general framework for combining the strengths of both approaches. Specifically, we adopt the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks. We further ground our discussion in two applications of neuro-symbolism and, in both cases, show that our systems maintain interpretability while achieving performance comparable to the state-of-the-art.
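As a minimal illustration of the general idea (not the paper's actual architecture), one common way a knowledge base can guide neural learning is to compile a symbolic rule into a soft constraint on the training loss. The sketch below assumes a toy logistic-regression "network" and an invented rule ("if feature 0 is active, the label should be 1"); all names, data, and the penalty formulation are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, rule_weight=1.0, lr=0.1, epochs=200):
    """Logistic regression whose loss is augmented with a knowledge-based
    penalty: a symbolic rule acts as a soft constraint during learning."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # Standard data-driven gradient (binary cross-entropy w.r.t. logits).
        grad = p - y
        # Symbolic penalty (assumed rule): where the antecedent holds
        # (feature 0 == 1), nudge the prediction toward label 1,
        # i.e. treat the rule as an extra soft target with weight rule_weight.
        mask = (X[:, 0] == 1).astype(float)
        grad = grad + rule_weight * mask * (p - 1.0)
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy data: labels mostly follow the rule; the last row is a noisy exception.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 0]], dtype=float)
y = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
w, b = train(X, y, rule_weight=2.0)
p = sigmoid(X @ w + b)
```

The design choice here is the hallmark of many neuro-symbolic systems: the symbolic knowledge is differentiable only through the penalty term, so the statistical model remains free to trade off the rule against observed evidence, while the rule keeps the learned behavior interpretable.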
