Semantic interpretation for convolutional neural networks: What makes a cat a cat?

04/16/2022
by Hao Xu et al.

The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been developed to interpret such "black box" models. Fundamental limitations remain, however, that impede our understanding of these networks, especially the extraction of an understandable semantic space. In this work, we introduce the framework of semantic explainable AI (S-XAI), which uses row-centered principal component analysis to extract common traits from the best combination of superpixels discovered by a genetic algorithm, and constructs an understandable semantic space on the basis of the discovered semantically sensitive neurons and visualization techniques. A statistical interpretation of the semantic space is also provided, and the concept of semantic probability is proposed for the first time. Our experimental results demonstrate that S-XAI is effective in providing a semantic interpretation for CNNs, and offers broad applications, including trustworthiness assessment and semantic sample searching.
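As a rough illustration of the row-centered PCA step described above: the sketch below centers each sample's pooled feature vector by its own mean (row-wise rather than the usual column-wise centering), extracts a shared principal direction as the "common trait", and ranks channels by the magnitude of their loading as candidate semantically sensitive neurons. This is a minimal sketch under assumed conventions, not the paper's implementation; the random placeholder features (standing in for CNN activations pooled over the genetic-algorithm-selected superpixels), the function name row_centered_pca, and the top-10 channel cutoff are all illustrative.

import numpy as np

def row_centered_pca(feats, n_components=1):
    # Row-centered PCA: subtract each sample's own mean (row-wise
    # centering) before extracting principal directions.
    X = feats - feats.mean(axis=1, keepdims=True)
    # The top right-singular vectors of the row-centered matrix serve
    # as the principal components.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_components]

# Placeholder data: 32 samples x 512 channels, standing in for feature
# vectors pooled over each image's best superpixel combination.
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 512))

# Shared direction across samples: the "common trait".
common_trait = row_centered_pca(feats)[0]

# Channels with the largest absolute loading on the common trait are
# candidates for semantically sensitive neurons.
sensitive = np.argsort(-np.abs(common_trait))[:10]
print("candidate semantically sensitive channels:", sensitive)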


Related research

Visual Interpretability for Deep Learning: a Survey (02/02/2018)
This paper reviews recent studies in emerging directions of understandin...

Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks (02/15/2021)
Explainable AI (XAI) is an active research area to interpret a neural ne...

Multiblock-Networks: A Neural Network Analog to Component Based Methods for Multi-Source Data (09/21/2021)
Training predictive models on datasets from multiple sources is a common...

Explainable artificial intelligence for mechanics: physics-informing neural networks for constitutive models (04/20/2021)
(Artificial) neural networks have become increasingly popular in mechani...

Explainable Learning: Implicit Generative Modelling during Training for Adversarial Robustness (07/05/2018)
We introduce Explainable Learning (ExL), an approach for training neural ...

Convolutional Neural Network Interpretability with General Pattern Theory (02/05/2021)
Ongoing efforts to understand deep neural networks (DNN) have provided m...

Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation (01/18/2021)
With the wide use of deep neural networks (DNN), model interpretability ...