i-Algebra: Towards Interactive Interpretability of Deep Neural Networks

01/22/2021
by Xinyang Zhang, et al.

Providing explanations for deep neural networks (DNNs) is essential for their use in domains where the interpretability of decisions is a critical prerequisite. Despite the plethora of work on interpreting DNNs, most existing solutions offer interpretability in an ad hoc, one-shot, and static manner, without accounting for the perception, understanding, or response of end-users, resulting in poor usability in practice. In this paper, we argue that DNN interpretability should be implemented as an interaction between users and models. We present i-Algebra, a first-of-its-kind interactive framework for interpreting DNNs. At its core is a library of atomic, composable operators, which explain model behavior at varying input granularities, at different inference stages, and from distinct interpretation perspectives. Through a declarative query language, users can build a variety of analysis tools (e.g., "drill-down", "comparative", and "what-if" analysis) by flexibly composing these operators. We prototype i-Algebra and conduct user studies on a set of representative analysis tasks, including inspecting adversarial inputs, resolving model inconsistency, and cleansing contaminated data, all demonstrating its promising usability.
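The composable-operator idea in the abstract is concrete enough to sketch in code. The following is a minimal, self-contained Python illustration of how atomic interpretation operators might compose into "drill-down" and "comparative" analyses; it is a sketch under assumptions, not the paper's actual API. All names here (saliency, project, compare) are hypothetical, and the finite-difference attribution method is a simple stand-in for whatever interpreters i-Algebra actually wraps.

```python
# Hypothetical sketch of composable interpretation operators, in the spirit
# of i-Algebra's design. Names and the attribution method are illustrative
# assumptions, not the authors' implementation.

import numpy as np

def saliency(model, x, eps=1e-3):
    """Atomic operator: per-feature finite-difference attribution for input x."""
    base = model(x)
    attr = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp.flat[i] += eps
        attr.flat[i] = (model(xp) - base) / eps
    return attr

def project(attr, mask):
    """Projection: restrict an attribution map to a user-selected input region
    ("drill-down" analysis)."""
    return attr * mask

def compare(attr_a, attr_b):
    """Contrast the attributions of two inputs ("comparative" analysis,
    e.g., a clean input versus its perturbed counterpart)."""
    return attr_a - attr_b

if __name__ == "__main__":
    # Toy nonlinear model over a 4-feature input.
    w = np.array([0.5, -1.0, 2.0, 0.0])
    model = lambda x: float(np.tanh(w @ x))

    x_clean = np.array([1.0, 1.0, 1.0, 1.0])
    x_pert  = np.array([1.0, 1.0, 0.2, 1.0])  # stand-in "suspicious" input

    # Query-style composition: drill down into features 2-3 of the
    # difference between the two inputs' explanations.
    mask = np.array([0.0, 0.0, 1.0, 1.0])
    result = project(compare(saliency(model, x_clean),
                             saliency(model, x_pert)), mask)
    print(result)  # nonzero only inside the selected region where attributions differ
```

Because each operator maps attribution maps to attribution maps, compositions like the one above nest arbitrarily, which is what a declarative query language over such operators would exploit.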


