DeepAI

Provable concept learning for interpretable predictions using variational inference

04/01/2022 · by Armeen Taeb, et al.

In safety-critical applications, practitioners are reluctant to trust neural networks when no interpretable explanations are available. Many attempts to provide such explanations revolve around pixel-level attributions or rely on previously known concepts. In this paper, we aim to provide explanations by provably identifying high-level, previously unknown concepts. To this end, we propose a probabilistic modeling framework from which we derive (C)oncept (L)earning and (P)rediction (CLAP) – a VAE-based classifier that uses visually interpretable concepts as linear predictors. Assuming that the data-generating mechanism involves predictive concepts, we prove that our method can identify them while attaining optimal classification accuracy. We validate the method on synthetic experiments and show that, on real-world datasets (PlantVillage and ChestXRay), CLAP effectively discovers interpretable factors for classifying diseases.
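As a rough illustration of the architecture the abstract describes (not the authors' implementation; all names, dimensions, and weights below are hypothetical), a VAE-style encoder maps an input to a small set of concept latents, and a linear layer on those latents produces the class prediction, so each class score is a weighted sum of concept activations:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    """Toy 'encoder': map inputs to mean and log-variance of concept latents."""
    h = np.tanh(x @ W_enc)            # hidden features
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]     # (mu, log_var) of the concepts

def reparameterize(mu, log_var, rng):
    """Sample concepts z ~ N(mu, sigma^2) via the reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def predict(z, W_cls, b_cls):
    """Linear classifier on the concept latents: interpretable by design,
    since each logit is a linear combination of concept activations."""
    logits = z @ W_cls + b_cls
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: 8-dim inputs, 3 concepts, 2 classes, batch of 4.
x = rng.standard_normal((4, 8))
W_enc = rng.standard_normal((8, 6)) * 0.1   # encoder outputs (mu, log_var)
mu, log_var = encode(x, W_enc)
z = reparameterize(mu, log_var, rng)        # sampled concept activations
W_cls = rng.standard_normal((3, 2))
probs = predict(z, W_cls, np.zeros(2))      # class probabilities, shape (4, 2)
```

In the paper's framework the encoder and the concept prior are trained jointly with a variational objective; this sketch only shows why a linear head on low-dimensional concepts yields predictions a practitioner can inspect concept by concept.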



Related Research

06/12/2021

Entropy-based Logic Explanations of Neural Networks

Explainable artificial intelligence has rapidly emerged since lawmakers ...
05/11/2021

Rationalization through Concepts

Automated predictions require explanations to be interpretable by humans...
05/31/2018

DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation

We propose DeepMiner, a framework to discover interpretable representati...
10/17/2019

On Concept-Based Explanations in Deep Neural Networks

Deep neural networks (DNNs) build high-level intelligence on low-level r...
08/20/2021

VAE-CE: Visual Contrastive Explanation using Disentangled VAEs

The goal of a classification model is to assign the correct labels to da...
11/22/2022

Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models

Explaining black-box Artificial Intelligence (AI) models is a cornerston...

Code Repositories

CLAP-interpretable-predictions

Official codebase for the paper "Provable concept learning for interpretable predictions using variational inference".

