NeSyFOLD: A System for Generating Logic-based Explanations from Convolutional Neural Networks

01/30/2023
by Parth Padalkar, et al.

We present a novel neurosymbolic system called NeSyFOLD that classifies images while providing a logic-based explanation of the classification. NeSyFOLD's training process is as follows: (i) we first pre-train a CNN on the input image dataset and extract the activations of the last-layer filters as binary values; (ii) we then use the FOLD-SE-M rule-based machine learning algorithm to generate a logic program that classifies an image, represented as a vector of binary activations (one per filter), while producing a logical explanation. The rules generated by FOLD-SE-M use filter numbers as predicates. We have devised a novel algorithm that automatically maps the CNN filters to semantic concepts in the images; this mapping is used to replace the predicate names (filter numbers) in the rule set with the corresponding semantic concept labels. The resulting rule set is highly interpretable and can be intuitively understood by humans. We compare NeSyFOLD with the ERIC system, which uses a decision-tree-like algorithm to obtain its rules. Our system has two advantages over ERIC: (i) NeSyFOLD generates smaller rule sets without compromising accuracy or fidelity; (ii) NeSyFOLD generates the mapping from filter numbers to semantic labels automatically.
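To make the pipeline concrete, below is a minimal sketch of the binarization step described above: extracting last-layer filter activations from a pre-trained CNN and turning them into one binary value per filter. The choice of a torchvision ResNet-18 backbone, the forward hook on `layer4`, and the simple mean-activation threshold are illustrative assumptions, not NeSyFOLD's actual binarization procedure or the FOLD-SE-M interface.

```python
# Hypothetical sketch: binarize last-layer CNN filter activations per image.
# Assumptions (not from the paper): torchvision ResNet-18 backbone and a
# fixed mean-activation threshold; NeSyFOLD's thresholding may differ.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}

def hook(module, inputs, output):
    # Capture the output of the last convolutional block: (B, filters, H, W)
    activations["last_conv"] = output.detach()

model.layer4.register_forward_hook(hook)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def binarize_filters(img_path, threshold=0.5):
    """Return one binary value per last-layer filter for a single image."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(x)
    fmap = activations["last_conv"][0]      # (filters, H, W)
    per_filter = fmap.mean(dim=(1, 2))      # mean activation of each filter
    return (per_filter > threshold).int().tolist()

# Each image becomes a binary vector with one bit per filter; these vectors,
# paired with class labels, form the tabular input handed to FOLD-SE-M,
# which then learns rules whose predicates are the filter numbers.
```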
