Inducing Semantic Grouping of Latent Concepts for Explanations: An Ante-Hoc Approach

08/25/2021
by   Anirban Sarkar, et al.

Self-explainable deep models are designed to represent the hidden concepts in a dataset without requiring any post-hoc explanation generation technique. We worked with one such model, motivated by explicitly representing the classifier function as a linear function, and showed that exploiting a probabilistic latent space and appropriately modifying different parts of the model yields better explanations as well as superior predictive performance. Beyond standard visualization techniques, we proposed a new technique that strengthens human understanding of the hidden concepts. We also proposed a way of using two different self-supervision techniques to extract meaningful concepts related to the type of self-supervision considered, achieving a significant performance boost. The most important aspect of our method is that it works well in a low-data regime and reaches the desired accuracy in a small number of epochs. We reported exhaustive results on the CIFAR10, CIFAR100, and AWA2 datasets to show the effect of our method on moderately and relatively complex datasets.
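
To make the setup concrete, below is a minimal, hypothetical sketch in PyTorch (not the authors' implementation) of an ante-hoc self-explainable classifier of the kind described above: an encoder produces a probabilistic concept latent (a Gaussian mean and log-variance sampled with the reparameterization trick), and the class scores are a plain linear function of the sampled concepts, so each concept's contribution to a prediction is simply its weight times its activation. All names and sizes here (ConceptEncoder, NUM_CONCEPTS, etc.) are illustrative assumptions.

```python
# Minimal sketch of an ante-hoc self-explainable classifier with a
# probabilistic concept latent and a linear classifier head.
import torch
import torch.nn as nn

NUM_CONCEPTS = 20   # assumed size of the concept bottleneck
NUM_CLASSES = 10    # e.g. CIFAR10

class ConceptEncoder(nn.Module):
    """CNN backbone that outputs mean and log-variance of the concept latent."""
    def __init__(self, num_concepts: int = NUM_CONCEPTS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, num_concepts)
        self.logvar = nn.Linear(64, num_concepts)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

class AnteHocExplainableClassifier(nn.Module):
    """The classifier head is a single linear map over sampled concepts."""
    def __init__(self, num_concepts=NUM_CONCEPTS, num_classes=NUM_CLASSES):
        super().__init__()
        self.encoder = ConceptEncoder(num_concepts)
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, x):
        mu, logvar = self.encoder(x)
        # Reparameterization trick: sample concepts from the probabilistic latent.
        concepts = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        logits = self.classifier(concepts)
        return logits, concepts, mu, logvar

    def explain(self, x):
        """Per-concept contribution to each class score: weight * concept value."""
        mu, _ = self.encoder(x)  # use the latent mean at test time
        # contributions[b, c, k] = W[c, k] * mu[b, k]
        return mu.unsqueeze(1) * self.classifier.weight.unsqueeze(0)

if __name__ == "__main__":
    model = AnteHocExplainableClassifier()
    images = torch.randn(4, 3, 32, 32)        # dummy CIFAR-sized batch
    logits, concepts, mu, logvar = model(images)
    contributions = model.explain(images)      # shape: (batch, classes, concepts)
    print(logits.shape, contributions.shape)
```

Because the class logits are linear in the concept activations, the `explain` method above reads off each concept's additive contribution to each class directly from the classifier weights, which is what makes such a model explainable ante hoc rather than through a separate post-hoc attribution step.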


