Interpretable Deep Convolutional Neural Networks via Meta-learning

02/02/2018
by Xuan Liu, et al.

Model interpretability is a requirement in many applications in which crucial decisions are made by users relying on a model's outputs. The recent movement for "algorithmic fairness" also stipulates explainability, and hence interpretability, of learning models. Yet the most successful contemporary machine learning approaches, deep neural networks, produce models that are highly non-interpretable. We address this challenge by proposing CNN-INTE, a technique for interpreting deep Convolutional Neural Networks (CNNs) via meta-learning. In this work, we interpret a specific hidden layer of a deep CNN model on the MNIST image dataset. We use a clustering algorithm in a two-level structure to find the meta-level training data, and Random Forests as the base learning algorithm to generate the meta-level test data. The interpretation results are displayed visually as diagrams that clearly indicate how a specific test instance is classified. Our method achieves global interpretation for all test instances without sacrificing the accuracy obtained by the original deep CNN model. In this sense our model is faithful to the deep CNN model, which leads to reliable interpretations.
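The sketch below illustrates the general shape of such a pipeline, not the authors' CNN-INTE implementation: it assumes the interpreted hidden layer is the penultimate dense layer of a small Keras CNN, uses k-means for a two-level clustering of the hidden activations, and trains a scikit-learn RandomForestClassifier on those activations as the base learner. Layer names, cluster counts, and hyperparameters are illustrative assumptions only.

# Minimal sketch (assumptions noted above), not the authors' CNN-INTE code.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Small CNN with a named hidden layer whose activations we will interpret.
inputs = tf.keras.Input(shape=(28, 28, 1))
h = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
h = tf.keras.layers.MaxPooling2D()(h)
h = tf.keras.layers.Flatten()(h)
hidden = tf.keras.layers.Dense(64, activation="relu", name="hidden")(h)
outputs = tf.keras.layers.Dense(10, activation="softmax")(hidden)
cnn = tf.keras.Model(inputs, outputs)
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)

# Extract hidden-layer activations for the training and test images.
extractor = tf.keras.Model(inputs, hidden)
a_train = extractor.predict(x_train, verbose=0)
a_test = extractor.predict(x_test, verbose=0)

# Two-level clustering of the activations: coarse clusters, then sub-clusters.
# The cluster memberships play the role of meta-level training data here.
top = KMeans(n_clusters=10, n_init=10, random_state=0).fit(a_train)
sub_labels = np.zeros(len(a_train), dtype=int)
for c in range(10):
    idx = np.where(top.labels_ == c)[0]
    sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit(a_train[idx])
    sub_labels[idx] = c * 2 + sub.labels_

# Random forest base learner over the hidden activations; its per-instance
# predictions on the test activations stand in for the meta-level test data.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(a_train, y_train)
print("Hidden-layer RF test accuracy:", rf.score(a_test, y_test))
print("Top-level cluster of the first test image:", top.predict(a_test[:1])[0])

Tracing a test instance through its cluster assignments and the random forest's decision gives the kind of per-instance, visualizable explanation the abstract describes, while the CNN's own predictions remain untouched.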

