ConceptExplainer: Understanding the Mental Model of Deep Learning Algorithms via Interactive Concept-based Explanations

04/04/2022
by Jinbin Huang, et al.

Traditional deep learning interpretability methods that are suitable for non-expert users cannot explain network behavior at the global level and are inflexible at providing fine-grained explanations. As a solution, concept-based explanations are gaining attention due to their human intuitiveness and their flexibility in describing both global and local model behaviors. Concepts are groups of pixels that share a similar meaning and express a notion; they are embedded within the network's latent space and have primarily been hand-generated, though recently they have also been discovered by automated approaches. Unfortunately, the magnitude and diversity of discovered concepts make it difficult for non-experts to navigate and make sense of the concept space, and the lack of easy-to-use software makes concept explanations inaccessible to many non-expert users. Visual analytics can bridge these gaps by enabling structured navigation and exploration of the concept space, providing users with concept-based insights into model behavior. To this end, we design, develop, and validate ConceptExplainer, a visual analytics system that enables non-expert users to interactively probe and explore the concept space to explain model behavior at the instance, class, and global levels. The system was developed via iterative prototyping to address a number of design challenges that non-experts face in interpreting the behavior of deep learning models. Through a rigorous user study, we validate how ConceptExplainer addresses these challenges. We further present a series of usage scenarios demonstrating how the system supports interactive analysis of model behavior across a variety of tasks and explanation granularities, such as identifying concepts that are important to classification, identifying bias in training data, and understanding how concepts can be shared across diverse and seemingly dissimilar classes.
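As a rough illustration of the concept-based explanation pipeline the abstract refers to (automated concept discovery followed by concept-based attribution, in the spirit of ACE and TCAV), the Python sketch below clusters segment activations into candidate concepts, learns a concept activation vector (CAV), and computes a TCAV-style importance score. All data here are simulated placeholders; this is an illustrative assumption, not ConceptExplainer's actual implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: activations of 500 image segments at a chosen hidden
# layer (64 features each). In practice these come from a trained classifier.
segment_activations = rng.normal(size=(500, 64))

# 1. Discover concepts: cluster segment activations so that each cluster
#    groups segments with similar meaning (a candidate concept).
concept_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(segment_activations)

# 2. Learn a concept activation vector (CAV): a linear direction in the
#    latent space separating one concept's segments from random segments.
concept_id = 3
concept_acts = segment_activations[concept_labels == concept_id]
random_acts = rng.normal(size=concept_acts.shape)  # random counterexamples
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# 3. TCAV-style importance: the fraction of class examples whose class-logit
#    gradient (w.r.t. the layer activations) has a positive component along
#    the concept direction. Gradients are simulated here.
class_gradients = rng.normal(size=(200, 64))
tcav_score = float(np.mean(class_gradients @ cav > 0))
print(f"Concept {concept_id} TCAV score for the class: {tcav_score:.2f}")

A full pipeline would extract segment activations and class gradients from a real image classifier and aggregate such scores across layers and classes; the visual analytics layer then helps users navigate the resulting concept space.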


