Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model

12/24/2020
by Laura Giordano, et al.

In this paper we investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a deep neural network model. Weighted knowledge bases for description logics are considered under a "concept-wise" multipreference semantics. The semantics is further extended to fuzzy interpretations and exploited to provide a preferential interpretation of Multilayer Perceptrons.
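To make the correspondence concrete, the sketch below (Python, with hypothetical unit names, weights, and helper functions, not code from the paper) illustrates the reading suggested by the abstract: each unit of a multilayer perceptron is regarded as a concept, each synaptic weight as the weight of a typicality inclusion for that concept, and the fuzzy membership degree of an input in the concept as the unit's activation.

```python
# Minimal sketch, assuming one MLP unit C_k with incoming units C1..C3.
# Each synaptic weight w_jk is read as the weight of a typicality inclusion
# for concept C_k over concept C_j in a weighted knowledge base.
# All names and numbers are illustrative, not taken from the paper.
import math

def sigmoid(x: float) -> float:
    """Logistic activation, interpreted here as a fuzzy truth degree in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

incoming_weights = {"C1": 2.0, "C2": -1.5, "C3": 0.5}  # hypothetical synaptic weights
bias = -0.3                                            # hypothetical bias

def membership_degree(input_degrees: dict[str, float]) -> float:
    """Fuzzy membership degree of a domain element in concept C_k:
    the activation of unit k on the fuzzy degrees of the incoming concepts."""
    weighted_sum = bias + sum(
        w * input_degrees[c] for c, w in incoming_weights.items()
    )
    return sigmoid(weighted_sum)

# A domain element (an input stimulus) assigns fuzzy degrees to C1..C3;
# its degree of membership in C_k is the unit's activation on those degrees.
element = {"C1": 0.9, "C2": 0.2, "C3": 0.7}
print(membership_degree(element))  # approximately 0.82
```

In this reading, higher positive weights make a concept count more toward typical membership in C_k, while negative weights count against it; the preferential (multipreference) structure is then induced concept by concept from these degrees.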

Related research

10/07/2021 · From Weighted Conditionals of Multilayer Perceptrons to a Gradual Argumentation Semantics
A fuzzy multipreference semantics has been recently proposed for weighte...

04/01/2022 · Extracting Rules from Neural Networks with Partial Interpretations
We investigate the problem of extracting rules, expressed in Horn logic,...

02/02/2022 · An ASP approach for reasoning on neural networks under a finitely many-valued semantics for weighted conditional knowledge bases
Weighted knowledge bases for description logics with typicality have bee...

09/17/2021 · Weighted Conditional EL^bot Knowledge Bases with Integer Weights: an ASP Approach
Weighted knowledge bases for description logics with typicality have bee...

07/10/2021 · From Common Sense Reasoning to Neural Network Models through Multiple Preferences: an overview
In this paper we discuss the relationships between conditional and prefe...

08/30/2020 · On a plausible concept-wise multipreference semantics and its relations with self-organising maps
In this paper we describe a concept-wise multi-preference semantics for descript...

08/26/2020 · How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels
Explaining to users why automated systems make certain mistakes is impor...