Interpretable Neural-Symbolic Concept Reasoning

04/27/2023
by Pietro Barbiero et al.

Deep learning methods are highly accurate, yet their opaque decision process prevents them from earning full human trust. Concept-based models aim to address this issue by learning tasks based on a set of human-understandable concepts. However, state-of-the-art concept-based models rely on high-dimensional concept embedding representations that lack a clear semantic meaning, thus calling into question the interpretability of their decision process. To overcome this limitation, we propose the Deep Concept Reasoner (DCR), the first interpretable concept-based model that builds upon concept embeddings. In DCR, neural networks do not make task predictions directly; instead, they build syntactic rule structures using concept embeddings. DCR then executes these rules on meaningful concept truth degrees to provide a final interpretable and semantically consistent prediction in a differentiable manner. Our experiments show that DCR: (i) improves task accuracy by up to +25% over state-of-the-art interpretable concept-based models on challenging benchmarks; (ii) discovers meaningful logic rules matching known ground truths even in the absence of concept supervision during training; and (iii) facilitates the generation of counterfactual examples by providing the learnt rules as guidance.
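The rule-building and rule-execution steps can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under assumed details, not the authors' implementation: the class and module names (DeepConceptReasonerSketch, filter_net, sign_net) are hypothetical, and the per-concept relevance/polarity heads with a product t-norm stand in for DCR's rule modules. The key idea it demonstrates is that the networks only decide the *structure* of the rule from the embeddings, while the prediction itself is computed from the concept truth degrees, keeping everything differentiable.

```python
import torch
import torch.nn as nn

class DeepConceptReasonerSketch(nn.Module):
    """Illustrative DCR-style layer (names and details are assumptions).

    For each (sample, concept) pair, small neural nets read the concept
    embedding and decide (a) whether the concept is relevant to the rule
    and (b) its polarity (positive or negated literal). The rule is then
    executed on the concept truth degrees with a product t-norm.
    """

    def __init__(self, emb_size: int):
        super().__init__()
        # soft relevance of each concept in the rule, in [0, 1]
        self.filter_net = nn.Sequential(nn.Linear(emb_size, 1), nn.Sigmoid())
        # probability that the literal appears positively, in [0, 1]
        self.sign_net = nn.Sequential(nn.Linear(emb_size, 1), nn.Sigmoid())

    def forward(self, concept_emb: torch.Tensor, concept_truth: torch.Tensor):
        # concept_emb:   (batch, n_concepts, emb_size) concept embeddings
        # concept_truth: (batch, n_concepts) truth degrees in [0, 1]
        relevance = self.filter_net(concept_emb).squeeze(-1)  # (batch, n_concepts)
        sign = self.sign_net(concept_emb).squeeze(-1)         # (batch, n_concepts)

        # soft literal: truth degree if positive, 1 - truth degree if negated
        literal = sign * concept_truth + (1 - sign) * (1 - concept_truth)

        # irrelevant concepts are neutralised towards 1, the identity of AND
        literal = 1 - relevance * (1 - literal)

        # fuzzy AND across concepts (product t-norm) yields the rule's truth value
        return literal.prod(dim=-1)

# toy usage: 4 samples, 3 concepts, 8-dimensional embeddings
emb = torch.randn(4, 3, 8)
truth = torch.rand(4, 3)
y = DeepConceptReasonerSketch(emb_size=8)(emb, truth)
print(y.shape)  # torch.Size([4])
```

Because relevance and sign are read off per sample, the learnt rule structure can be inspected directly (e.g. thresholding relevance and sign recovers a symbolic rule for each prediction), which is what makes the prediction interpretable rather than a black-box score.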

