Logic Explained Networks

08/11/2021
by Gabriele Ciravegna, et al.

The large and still increasing popularity of deep learning clashes with a major limitation of neural network architectures: their inability to provide human-understandable explanations of their decisions. When a machine is expected to support the decisions of human experts, providing a comprehensible explanation is a feature of crucial importance. The language used to communicate such explanations must be formal enough to be implementable in a machine and friendly enough to be understandable by a wide audience. In this paper, we propose a general approach to Explainable Artificial Intelligence for neural architectures, showing how a mindful design of the networks leads to a family of interpretable deep learning models called Logic Explained Networks (LENs). LENs only require their inputs to be human-understandable predicates, and they provide explanations in terms of simple First-Order Logic (FOL) formulas involving such predicates. LENs are general enough to cover a large number of scenarios. Amongst them, we consider the case in which LENs are directly used as special classifiers with the capability of being explainable, and the case in which they act as additional networks whose role is to make a black-box classifier explainable by FOL formulas. Although supervised learning problems are mostly emphasized, we also show that LENs can learn and provide explanations in unsupervised learning settings. Experimental results on several datasets and tasks show that LENs may yield better classifications than established white-box models, such as decision trees and Bayesian rule lists, while providing more compact and meaningful explanations.
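
As a rough intuition for how a logic explanation can be read off a classifier defined over human-understandable predicates, the sketch below enumerates the truth table of a model over binary concepts and prints the accepted cases as a disjunctive formula. This is an illustrative toy, not the authors' implementation: the function name, the concept names, and the brute-force extraction strategy are assumptions made here for clarity. In the paper, compact FOL formulas instead emerge from the design of the network itself, since exhaustive enumeration scales exponentially with the number of predicates.

import itertools

def extract_dnf(classifier, concept_names):
    # Enumerate every assignment of the binary concepts and keep each
    # assignment the classifier accepts as a conjunction of literals.
    minterms = []
    for bits in itertools.product([0, 1], repeat=len(concept_names)):
        if classifier(bits):
            literals = [name if b else "~" + name
                        for name, b in zip(concept_names, bits)]
            minterms.append("(" + " & ".join(literals) + ")")
    # The explanation is the disjunction of accepted conjunctions (DNF).
    return " | ".join(minterms) if minterms else "False"

# Hypothetical stand-in for a trained network over three concept predicates.
model = lambda x: bool((x[0] and x[2]) or (x[1] and x[2]))

print(extract_dnf(model, ["has_wings", "lays_eggs", "has_beak"]))
# (~has_wings & lays_eggs & has_beak) | (has_wings & ~lays_eggs & has_beak)
#   | (has_wings & lays_eggs & has_beak)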


