Scale-Preserving Automatic Concept Extraction (SPACE)

Convolutional neural networks (CNNs) have become a common choice for industrial quality control, as well as for other critical applications in Industry 4.0. When these CNNs behave in ways unexpected to human users or developers, severe consequences can arise, such as economic losses or an increased risk to human life. Concept extraction techniques can increase the reliability and transparency of CNNs by generating global explanations for trained neural network models. In quality control, the decisive features of image datasets often depend on the features' scale; for example, the size of a hole or an edge. However, as we show herein, existing concept extraction methods do not correctly represent scale, which leads to problems when interpreting these models. To address this issue, we introduce the Scale-Preserving Automatic Concept Extraction (SPACE) algorithm, a state-of-the-art alternative concept extraction technique for CNNs focused on industrial applications. SPACE is specifically designed to overcome the aforementioned problems by avoiding scale changes throughout the concept extraction process: square slices of input images are selected and then tiled before being clustered into concepts. Our method explains the model's decision-making process in the form of human-understandable concepts. We evaluate SPACE on three image classification datasets in the context of industrial quality control. Experimental results illustrate how SPACE outperforms other methods and provides actionable insights into the decision mechanisms of CNNs. Finally, code for the implementation of SPACE is provided.
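The core scale-preserving step described above, cutting square slices from an input image and tiling them back to the network's input size instead of rescaling them, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the stride handling, and the choice of ceil-division tiling are all hypothetical, and the subsequent CNN embedding and clustering of the tiled patches is omitted.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a square window over the image and collect the slices.

    Assumes a 2-D (grayscale) or 3-D (H, W, C) array; patches that would
    run past the border are simply skipped in this sketch.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def tile_patch(patch, target_size):
    """Repeat a patch to fill a target_size x target_size canvas.

    Tiling (rather than resizing) keeps every feature at its original
    pixel scale, which is the point of the scale-preserving approach.
    """
    reps = -(-target_size // patch.shape[0])  # ceil division
    tiled = np.tile(patch, (reps, reps) + (1,) * (patch.ndim - 2))
    return tiled[:target_size, :target_size]

if __name__ == "__main__":
    img = np.arange(64, dtype=float).reshape(8, 8)
    patches = extract_patches(img, patch_size=4, stride=4)  # 4 slices
    tiled = tile_patch(patches[0], target_size=8)           # 8x8 canvas
    print(patches.shape, tiled.shape)
```

In a full pipeline, each tiled canvas would then be passed through the trained CNN and the resulting activations clustered (e.g., with k-means) into candidate concepts; that stage is standard in concept extraction and is left out here for brevity.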


Related research

- Scalable Concept Extraction in Industry 4.0 (06/06/2023)
- UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs (03/27/2023)
- ECLAD: Extracting Concepts with Local Aggregated Descriptors (06/09/2022)
- Relating Input Concepts to Convolutional Neural Network Decisions (11/21/2017)
- Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models (11/19/2022)
- Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability (04/28/2023)
- Regression Concept Vectors for Bidirectional Explanations in Histopathology (04/09/2019)
