Understanding CNN Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning

08/08/2023
by   Abhilekha Dalal, et al.

A major challenge in Explainable AI lies in correctly interpreting activations of hidden neurons: accurate interpretations would provide insight into what a deep learning system has internally detected as relevant in the input, demystifying the otherwise black-box character of deep learning systems. The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans, but systematic automated methods that can hypothesize and verify interpretations of hidden neuron activations remain underexplored. In this paper, we provide such a method and demonstrate that it yields meaningful interpretations. Our approach uses large-scale background knowledge (approximately 2 million classes curated from the Wikipedia concept hierarchy) together with a symbolic reasoning approach called Concept Induction, based on description logics and originally developed for applications in the Semantic Web field. Our results show that, through a hypothesis-and-verification process, we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network.
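The hypothesis-and-verification idea can be illustrated with a minimal sketch. This is not the paper's method (which uses Concept Induction over a description-logic hierarchy); all names, data, and thresholds below are hypothetical. The sketch hypothesizes a label for a neuron from the concepts shared by its most strongly activating images, then verifies the label by checking that images bearing the concept activate the neuron more strongly on average than images without it.

```python
# Hypothetical miniature of a hypothesis-and-verification loop for labeling
# a hidden neuron. All names, data, and thresholds are illustrative and are
# not taken from the paper, which instead uses Concept Induction over a
# description-logic class hierarchy.

def hypothesize_label(activations, annotations, top_k=2):
    """Return the concept most common among the top-k activating images."""
    top = sorted(activations, key=activations.get, reverse=True)[:top_k]
    counts = {}
    for img in top:
        for concept in annotations[img]:
            counts[concept] = counts.get(concept, 0) + 1
    return max(counts, key=counts.get)

def verify_label(activations, annotations, concept):
    """Verify that images annotated with the concept activate the neuron
    more strongly on average than images that are not."""
    pos = [a for img, a in activations.items() if concept in annotations[img]]
    neg = [a for img, a in activations.items() if concept not in annotations[img]]
    return sum(pos) / len(pos) > sum(neg) / len(neg)

# Toy data: a neuron that happens to fire on "dog" images.
annotations = {
    "img1": ["dog", "animal"],
    "img2": ["dog"],
    "img3": ["cat", "animal"],
    "img4": ["car"],
}
activations = {"img1": 0.9, "img2": 0.8, "img3": 0.2, "img4": 0.1}

label = hypothesize_label(activations, annotations)
print(label, verify_label(activations, annotations, label))  # dog True
```

In the paper's actual pipeline, the hypothesis step is far richer: Concept Induction reasons over the Wikipedia-derived class hierarchy to propose class expressions, rather than counting raw annotation overlap as this toy does.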

Related research

Explaining Deep Learning Hidden Neuron Activations using Concept Induction (01/23/2023)
One of the current key challenges in Explainable AI is in correctly inte...

Towards Human-Compatible XAI: Explaining Data Differentials with Concept Induction over Background Knowledge (09/27/2022)
Concept induction, which is based on formal logical reasoning over descr...

TopoAct: Exploring the Shape of Activations in Deep Learning (12/13/2019)
Deep neural networks such as GoogLeNet and ResNet have achieved superhum...

Dynamic-structured Semantic Propagation Network (03/16/2018)
Semantic concept hierarchy is still under-explored for semantic segmenta...

Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations (01/15/2022)
Deep learning (DL) has proven to be an effective machine learning and co...

Concept backpropagation: An Explainable AI approach for visualising learned concepts in neural network models (07/24/2023)
Neural network models are widely used in a variety of domains, often as ...

Deep Learning for Automatic Quality Grading of Mangoes: Methods and Insights (11/23/2020)
The quality grading of mangoes is a crucial task for mango growers as it...
