Entropy-based Logic Explanations of Neural Networks

06/12/2021
by Pietro Barbiero, et al.

Explainable artificial intelligence has emerged rapidly as lawmakers have begun to require interpretable models in safety-critical domains. Concept-based neural networks have arisen as explainable-by-design methods, since they leverage human-understandable symbols (i.e., concepts) to predict class memberships. However, most of these approaches focus on identifying the most relevant concepts and do not provide concise, formal explanations of how the classifier leverages those concepts to make predictions. In this paper, we propose a novel end-to-end differentiable approach that enables the extraction of logic explanations from neural networks using the formalism of First-Order Logic. The method relies on an entropy-based criterion that automatically identifies the most relevant concepts. We consider four case studies to demonstrate that (i) the entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains, from clinical data to computer vision, and (ii) the proposed approach outperforms state-of-the-art white-box models in classification accuracy.
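To make the entropy-based criterion concrete, here is a minimal sketch of a concept-relevance layer in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the class name EntropyConceptLayer, the temperature, and the sparsity weight are all hypothetical. The sketch follows the idea in the abstract of learning a relevance distribution over concepts whose entropy is penalized, so that only a few concepts survive to appear in a logic explanation.

```python
# Minimal sketch of an entropy-based concept-relevance layer (assumed names,
# not the paper's exact code): concept activations are rescaled by learned
# relevance scores, and the entropy of the relevance distribution is exposed
# so it can be added to the loss as a sparsity penalty.
import torch
import torch.nn as nn


class EntropyConceptLayer(nn.Module):
    def __init__(self, n_concepts: int, temperature: float = 0.7):
        super().__init__()
        # One learnable relevance logit per concept.
        self.gamma = nn.Parameter(torch.ones(n_concepts))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Soft concept-importance distribution over all concepts.
        alpha = torch.softmax(self.gamma / self.temperature, dim=0)
        # Shannon entropy of the distribution: low entropy means the model
        # concentrates on a few concepts, which keeps explanations concise.
        entropy = -(alpha * torch.log(alpha + 1e-12)).sum()
        # Rescale so the most relevant concept keeps its full activation.
        alpha_norm = alpha / alpha.max()
        return x * alpha_norm, entropy


# Hypothetical usage: penalize the entropy alongside the task loss so that
# training drives irrelevant concepts toward zero.
layer = EntropyConceptLayer(n_concepts=10)
concepts = torch.rand(32, 10)       # batch of concept activations in [0, 1]
filtered, h = layer(concepts)
task_loss = torch.tensor(0.0)       # placeholder for the classification loss
loss = task_loss + 1e-4 * h         # 1e-4 is an assumed sparsity weight
```

Once training drives most relevance scores toward zero, a short logic formula over the few surviving concepts can be read off, for example as a disjunction of the concept patterns that activate each class, which is what makes concise First-Order Logic explanations feasible.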

